| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
54,277,050 | https://en.wikipedia.org/wiki/Radio-86RK | The Radio-86RK is a build-it-yourself home computer designed in the Soviet Union. It was featured in 1986 in Radio, a popular magazine for radio hams and electronics hobbyists. The letters RK in the name stand for "Radio ham's Computer". The design was published as a series of articles describing the computer's logical structure, electrical circuitry, printed circuit board drawings and firmware. The computer could be built entirely from standard off-the-shelf parts; it was later also sold in kit form and fully assembled.
Predecessors
The Radio-86RK is the successor of an earlier build-it-yourself computer by the same designers, the Micro-80, and has limited compatibility with it. The Micro-80 was also described in a series of articles in Radio magazine, in the early 1980s. However, its complex design of several modules containing about 200 chips, the lack of printed circuit board drawings and, most importantly, the scarcity of the chips in retail made the computer hard to assemble; only a few enthusiasts ever built one.
Assembly process
Assembling the computer required acquiring the necessary electronic components, making two printed circuit boards and mounting all components on them. It was essentially a single-board computer, as the second board served only as a base for the keyboard keys. The main board used a single large connector for power, keyboard, tape recorder and even video output, so it was easy to disconnect the board and work on both of its sides outside the case.
Next, the firmware had to be written to two erasable ROM chips using a chip programmer, and a power supply unit, a keyboard and a computer case had to be built. The computer used an ordinary domestic TV set with a composite video input as a display. As most Soviet TVs of the time had no video input, a special module had to be installed or the TV's electronics modified to provide one. The approximate cost of all required components was about 260 rubles.
The circuitry of the Radio-86RK contains only 29 chips and was relatively easy to assemble. However, finding the chips to buy was difficult, as they were scarce and sold in small volumes in major cities of the USSR. It was particularly difficult to find the KR580VG75 video chip, which was produced only in small quantities. This led to the development of a replacement video circuit which contained 19 chips on a separate board, and was similar to the display module of the Micro-80 computer.
The editorial board of Radio magazine received a large amount of mail in response to the publication. In almost every letter, readers noted how difficult it was to find the necessary electronic components. The editorial board published an appeal to the Soviet electronics industry, proposing that it begin producing Radio-86RK kits commercially. By the end of the 1980s, numerous cooperatives were manufacturing cases, keyboards and main boards for the Radio-86RK and selling the required electronic components.
Technical specifications
CPU: KR580VM80A (Intel 8080A clone; designated KR580IK80A until mid-1983) clocked at 1.777 MHz. For simplicity, a single KR580GF24 clock generator (Intel 8224 clone) serves both the CPU and the video controller. Because a 16 MHz crystal frequency was chosen to produce a television-compatible signal, and the clock generator divides it by nine (16 MHz / 9 ≈ 1.78 MHz), the CPU cannot run at its maximum speed of 2.5 MHz.
RAM: 16 KiB in the original version, using K565RU3A chips (4116 clone). The memory can be doubled to 32 KiB by mounting additional RAM chips piggyback on top of those installed on the main board.
ROM: 2 KiB erasable ROM of type K573RF5 (2716 clone), contains monitor firmware
Video controller: KR580VG75 programmable CRT controller, a clone of the Intel 8275, a rare chip originally intended for terminals and not used in any mainstream system. It is interfaced with a KR580VT57 (Intel 8257 clone) DMA controller, which is also used for dynamic memory refresh.
Text mode: 64 x 25 characters, monochrome. Glyphs for the upper-case Cyrillic and Latin characters in KOI-7 N2 encoding are stored in a KR573RF1 (2708 clone) erasable ROM.
Semigraphics: 2 x 2 dot matrix combinations in the graphic character subset – 128 x 50 dots total. Higher resolutions are available by appropriate video controller programming.
Keyboard: 66 keys. The keyboard matrix is attached via a KR580VV55 programmable peripheral interface chip (Intel 8255 clone) and scanned by the CPU.
Sound: the CPU's INTE pin is used as the sound source. This pin normally interfaces with a programmable interrupt controller, but since the computer has no interrupt sources, it was repurposed for sound output. The EI and DI instructions switch the pin state, so a program can produce a tone by toggling them in a timed loop.
Storage media: cassette tape. With the DMA controller running, the CPU cannot measure time intervals precisely enough for tape reading and writing, so the DMA controller is switched off during tape operations. This also stops the video controller and memory refresh, so the CPU refreshes memory programmatically. The signal from the recorder is amplified by the К140УД6 (K140UD6, an analog of the MC1456) op-amp, its negative half is clipped by a diode, and the result is fed to a dedicated TTL input of the same KR580VV55 that serves the keyboard.
Recording format: a 0 bit was written as the pair of values 0,1 and a 1 bit as 1,0. The resulting signal therefore had no constant (DC) component and fit within the frequency range supported by the tape recorder. A synchronization byte (E6 hex) was written first to align the reading frame. On top of this, a simple second layer added leading zeroes, an offset, a length and a checksum.
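A minimal Python sketch of this bit-pair tape encoding, assuming MSB-first bit order and omitting the leading zeroes, offset, length and checksum of the second layer (all names are illustrative, not taken from the original firmware):

```python
SYNC_BYTE = 0xE6  # written first so the reader can lock onto the byte framing

def encode_byte(value):
    """Expand one byte into tape half-bit values: a 0 bit becomes 0,1 and a 1 bit becomes 1,0.
    MSB-first bit order is assumed here for illustration only."""
    half_bits = []
    for i in range(7, -1, -1):
        bit = (value >> i) & 1
        half_bits += [1, 0] if bit else [0, 1]
    return half_bits

def encode_block(payload):
    """Encode the sync byte followed by the payload; the real format also adds
    leading zeroes, offset, length and a checksum, which are omitted here."""
    stream = []
    for byte in bytes([SYNC_BYTE]) + bytes(payload):
        stream += encode_byte(byte)
    return stream

# Each emitted pair sums to 1, so the recorded signal has no constant (DC) component.
print(encode_block(b"\x3E"))
```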
Additional I/O: The computer also has a slot for a second chip of the same type (KR580VV55). This second chip is intended for various user projects (amateur radio equipment, consumer electronics controllers, sensors, etc.). As long as only the keyboard and tape recorder are required, this second chip does not have to be mounted on the circuit board.
Address space: the address space is divided into 8 slots of 8 KiB each. Two or four slots are dedicated to RAM (16 or 32 KiB), one is shared by the ROM and the DMA controller (during writes the ROM is disconnected from the bus and data goes to the DMA controller; during reads the DMA controller is disconnected and data comes from the ROM), one is used by the video controller and two by the interface chips, main and optional. Only RAM actually uses all available addresses; the I/O devices occupy only a few cells within their dedicated 8 KiB segments. The Intel 8080's separate I/O instructions are not used, as all devices are memory-mapped.
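A rough Python sketch of this kind of slot decoding; the exact slot-to-device assignment of the Radio-86RK is not given above, so the map below is a hypothetical example only:

```python
# Hypothetical memory map: the top three address bits (A15-A13) select one of eight 8 KiB slots.
# The actual slot assignment of the Radio-86RK is not specified here, so this layout is illustrative.
SLOTS = {
    0: "RAM",
    1: "RAM",
    2: "RAM (populated in the 32 KiB version)",
    3: "RAM (populated in the 32 KiB version)",
    4: "main KR580VV55 interface chip",
    5: "optional KR580VV55 interface chip",
    6: "KR580VG75 video controller",
    7: "ROM (on reads) / KR580VT57 DMA controller (on writes)",
}

def decode(address):
    """Return the device selected by a 16-bit address under this example map."""
    slot = (address >> 13) & 0b111  # 8 KiB = 2**13 bytes per slot
    return SLOTS[slot]

print(hex(0x0000), "->", decode(0x0000))
print(hex(0xF800), "->", decode(0xF800))
```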
Stripboards: to give hobbyists more room for experimentation, the main board has two small stripboard areas next to the main connector.
Software
The only software available after power-on is the monitor contained in ROM. It provides basic debugging functions: viewing and modifying memory cells, loading and saving memory contents to tape, and entering and running programs in machine code. The monitor also acts as a hardware abstraction layer: programs that access the hardware only through monitor calls run on both the 16 KiB and 32 KiB RAM versions, and often on the Micro-80 predecessor as well.
Initially, the Radio magazine distributed programs for the Radio-86RK in the form of hexadecimal dumps. After a dump was typed into the computer's memory, it could be saved to tape. Because it was easy to make mistakes when typing in large dumps, the magazine published checksums along with them; the checksum was calculated by executing the monitor's "O" directive.
The magazine published two versions of the BASIC interpreter: an adapted version of Micro-80 BASIC and a version specially developed for the Radio-86RK featuring enhanced editing capabilities, new functions, and other improvements.
Other software published in the magazine included an assembler, a debugger, a disassembler, a text editor, a voice recorder and a music editing system. Many BASIC programs were also published, including electronic circuit design calculations and games.
Another way of obtaining software was tape exchange among Radio-86RK owners. In 1988, the law on cooperatives in the USSR came into force, making it legal for individuals and cooperatives to produce software for profit. From then on it became possible to buy software for the Radio-86RK.
Operating systems
In 1989 the RAMDOS operating system was developed for the computer. It uses part of the computer's RAM as a RAM drive whose contents can be loaded from and saved to tape. The operating system has a minimalistic user interface with only seven commands; it also adds file operations to the BASIC interpreter.
In October 1992, the Radio magazine and TOO Lianozovo company announced a floppy-disk controller for the Radio-86RK and the Microsha. The disk operating system (DOS) was stored in erasable ROM on the controller board. The Radio magazine published only the electrical circuitry of the controller but not the firmware. Radio-86RK owners were invited to buy the fully assembled controller or a kit along with two floppy disks containing external DOS commands, programming languages and text description of the operating system.
Industrially produced versions
The first industrially produced version of the Radio-86RK was the Microsha (an abbreviation of Microcomputer and School). The authors had initially given that name to the original computer, but the editorial board changed it to Radio-86RK; the name Microsha was eventually given to the industrially produced version.
Preparation of the Microsha for serial production ran in parallel with publication of the Radio-86RK articles. Changes the authors made to the design and firmware left the Microsha incompatible with the Radio-86RK. In 1989, Radio magazine published new firmware for the Microsha that improved software compatibility.
Following the magazine publication, a number of factories started industrial production of several home computer models based on the Radio-86RK design. Not all of them were fully compatible with the Radio-86RK, and many included improvements such as expanded memory, additional character sets or rudimentary color support.
The list of models includes:
Alfa-BK
Impulse
Microsha
Electronica KR-01, Electronica KR-02, Electronica KR-03, Electronica KR-04 (electronic kits)
Partner 01.01
Spektr-001
Apogey BK-01
Krista
UMPK-R-32
Sogdiana-1
Mikro-88
Volume of production for a number of models:
Successors
The technical capabilities of the Radio-86RK were very modest: it had no graphics mode, and RAM expansion was impossible without serious modifications and loss of compatibility. As production volumes of home computers remained small while demand kept growing, the editorial board decided to publish a new build-it-yourself computer design.
Although the designers of the Radio-86RK had developed a new 16-bit computer, the Micro-16 (based on the K1810VM86 microprocessor, with a CGA-compatible graphics mode and capable of running CP/M-86 and MS-DOS software), the editorial board again opted for a computer based on the 8-bit KR580VM80 processor, mainly because of the availability and cost of the electronic components. Publication of articles on the new computer, the Orion-128, began in January 1990.
References
External links
Radio-86RK (Russia) The Centre for Computing History.
Radio-86RK emulator written in JavaScript
Software catalog for Radio-86RK
skiselev / radio-86rk — modern redesign of the Radio-86RK as a single board.
Home computers
Soviet computer systems
Computer-related introductions in 1986 | Radio-86RK | [
"Technology"
] | 2,524 | [
"Computer systems",
"Soviet computer systems"
] |
54,278,396 | https://en.wikipedia.org/wiki/Skeletocutis%20yunnanensis | Skeletocutis yunnanensis is a species of poroid crust fungus in the family Polyporaceae that was described as a new species in 2016. The type specimen was collected in northern Yunnan Province, southwestern China, where it was found growing on decaying angiosperm wood in a temperate forest.
Description
The fungus is characterized by a white, resupinate (crust-like) fruit body with a cream to buff pore surface, and the near absence of a sterile margin when mature. The angular pores number 5–6 per mm and have entire mouths. It has a dimitic hyphal structure, containing both generative and skeletal hyphae. The generative hyphae in the subiculum and the trama are covered by fine crystals. The spores are allantoid (sausage-shaped), hyaline, smooth and thin-walled, measuring 3.5–4.5 by 1.0–1.2 μm. They typically contain two small oil droplets.
References
Fungi described in 2016
Fungi of China
yunnanensis
Fungus species | Skeletocutis yunnanensis | [
"Biology"
] | 218 | [
"Fungi",
"Fungus species"
] |
54,285,001 | https://en.wikipedia.org/wiki/Cyclocycloid | A cyclocycloid is a roulette traced by a point attached to a circle of radius r rolling around a fixed circle of radius R, where the point lies at a distance d from the center of the rolling circle.
The parametric equations for a cyclocycloid are
$$x(\theta) = (R + r)\cos\theta - d\cos\!\left(\frac{R + r}{r}\,\theta\right), \qquad y(\theta) = (R + r)\sin\theta - d\sin\!\left(\frac{R + r}{r}\,\theta\right),$$
where $\theta$ is a parameter (not the polar angle). The radius r can be positive (a ball rolling outside the fixed circle) or negative (a ball rolling inside it), depending on whether the curve is of the epicycloid or hypocycloid variety.
The classic Spirograph toy traces out these curves.
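A short Python sketch that traces such a curve using the parametrization above; numpy and matplotlib are used only for illustration, and the parameter values are arbitrary:

```python
import numpy as np
import matplotlib.pyplot as plt

def cyclocycloid(R, r, d, n_turns=6, n_points=5000):
    """Trace the curve for a fixed circle of radius R, a rolling circle of radius r
    (negative r for the hypocycloid variety) and pen distance d."""
    theta = np.linspace(0.0, 2.0 * np.pi * n_turns, n_points)
    x = (R + r) * np.cos(theta) - d * np.cos((R + r) / r * theta)
    y = (R + r) * np.sin(theta) - d * np.sin((R + r) / r * theta)
    return x, y

# A Spirograph-like pattern: a small wheel rolling inside a larger ring.
x, y = cyclocycloid(R=5.0, r=-3.0, d=2.0)
plt.plot(x, y)
plt.gca().set_aspect("equal")
plt.show()
```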
See also
Centered trochoid
Cycloid
Epicycloid
Hypocycloid
Spirograph
External links
Plane curves | Cyclocycloid | [
"Mathematics"
] | 161 | [
"Planes (geometry)",
"Euclidean plane geometry",
"Plane curves"
] |
54,285,127 | https://en.wikipedia.org/wiki/Landau%20Gold%20Medal | The Landau Gold Medal is the highest award in theoretical physics given by the Russian Academy of Sciences and its predecessor, the Soviet Academy of Sciences. It was established in 1971 and is named after the Soviet physicist and Nobel laureate Lev Landau. When awarded by the Soviet Academy of Sciences it was known as the Landau Prize; the name was changed to the Landau Gold Medal in 1992.
Prize laureates
1971 - Vladimir Gribov
1974 - Evgeny Lifshitz, Vladimir Belinski, and Isaak Khalatnikov
1977 - Arkady Migdal
1980 - Aleksandr Gurevich and Lev Pitaevskii
1981 - Eva Jablonka
1983 - Alexander Patashinski and Valery Pokrovsky
1986 - Boris Shklovskii and Alexei L. Efros
1989 - Alexei Abrikosov, Lev Gor'kov, and Igor Dzyaloshinskii
1992 - Grigoriy Volovik and Vladimir P. Mineev
1998 - Spartak Belyaev
2002 - Lev Okun
2008 - Lev Pitaevskii
2013 - Semyon Gershtein
2018 - Valery Pokrovsky
See also
List of physics awards
Prizes named after people
References
Awards established in 1971
Civil awards and decorations of Russia
Civil awards and decorations of the Soviet Union
Physics awards
Awards of the Russian Academy of Sciences
USSR Academy of Sciences
1971 establishments in the Soviet Union | Landau Gold Medal | [
"Technology"
] | 283 | [
"Science and technology awards",
"Physics awards"
] |
61,854,053 | https://en.wikipedia.org/wiki/Metal%20assisted%20chemical%20etching | Metal Assisted Chemical Etching (also known as MACE) is the process of wet chemical etching of semiconductors (mainly silicon) with the use of a metal catalyst, usually deposited on the surface of the semiconductor as a thin film or as nanoparticles. The semiconductor, covered with the metal, is then immersed in an etching solution containing an oxidizing agent and hydrofluoric acid. The metal on the surface catalyzes the reduction of the oxidizing agent and, in turn, the dissolution of silicon. In most reported work this increase in dissolution rate is spatially confined to the close vicinity of the metal particles on the surface, which eventually leads to the formation of straight pores etched into the semiconductor (see figure to the right). A pre-defined pattern of metal on the surface can therefore be transferred directly into a semiconductor substrate.
History of development
MACE is a relatively new technique in semiconductor engineering and is not yet used in industry. The first attempts at MACE used a silicon wafer partially covered with aluminum and immersed in an etching solution; this combination led to an increased etching rate compared to bare silicon. This very first approach is often called galvanic etching rather than metal-assisted chemical etching.
Further research showed that a thin film of a noble metal deposited on a silicon wafer's surface can also locally increase the etching rate. In particular, it was observed that noble metal particles sink down into the material when the sample is immersed in an etching solution containing an oxidizing agent and hydrofluoric acid (see image in the introduction). This method is now commonly called the metal assisted chemical etching of silicon.
Other semiconductors were also successfully etched with MACE, such as silicon carbide or gallium nitride. However, the main portion of research is dedicated to MACE of silicon.
It has been shown that both noble metals such as gold, platinum, palladium, and silver, and base metals such as iron, nickel, copper, and aluminium can act as a catalyst in the process.
Theory
Some elements of MACE are commonly accepted in the scientific community, while others are still under debate. There is agreement that the reduction of the oxidizing agent is catalyzed by the noble metal particle (see figure to the left). This means that the metal particle has a surplus of positive charge which is eventually transferred to the silicon substrate. Each of the positive charges in the substrate can be identified as a hole (h+) in the valence band of the substrate, or in more chemical terms it may be interpreted as a weakened Si-Si bond due to the removal of an electron.
The weakened bonds can be attacked by a nucleophilic species such as HF or H2O, which in turn leads to the dissolution of the silicon substrate in close proximity to the noble metal particle.
From a thermodynamic point of view, the MACE process is possible because the redox potentials of the couples corresponding to the oxidizing agents used (hydrogen peroxide or potassium permanganate) lie below the valence band edge on the electrochemical energy scale. Equivalently, the electrochemical potential of electrons in the etching solution (due to the presence of the oxidizing agent) is lower than the electrochemical potential of electrons in the substrate, so electrons are removed from the silicon. The resulting accumulation of positive charge leads to the dissolution of the substrate by hydrofluoric acid.
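Stated compactly (a paraphrase of the condition above, written in terms of the electron electrochemical potentials):
$$\tilde{\mu}_e^{\,\mathrm{solution}} \;<\; \tilde{\mu}_e^{\,\mathrm{Si,\ valence\ band}} \;\;\Longrightarrow\;\; \text{electrons flow from Si to the oxidant, i.e. holes } h^{+} \text{ are injected into the silicon.}$$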
MACE consists of multiple individual reactions. At the metal particle, the oxidizing agent is reduced. In the case of hydrogen peroxide this can be written down as follows:
H2O2 + 2H+ -> 2H2O + 2h+
The created holes (h+) are then consumed during the dissolution of silicon. There are several possible reactions via which the dissolution can take place, but here just one example is given:
Si + 6HF + 4h+ -> SiF6^2- + 6H+
There are still some unclear aspects of the MACE process. The model proposed above requires contact between the metal particle and the silicon substrate, which seemingly conflicts with the etching solution being present underneath the particle. This can be explained by dissolution and redeposition of metal during MACE: it is proposed that some metal ions from the particle dissolve and are eventually re-deposited on the silicon surface via a redox reaction. In this way the metal particle (or even a larger noble metal thin film) can partially maintain contact with the substrate while etching also takes place underneath the metal.
It is also observed that, in addition to the straight pores shown in the introduction, a micro-porous region forms between the pores. This is generally attributed to holes that diffuse away from the particle and contribute to etching at more distant locations.
This behavior depends on the doping type of the substrate as well as on the type of noble metal particle. It has therefore been proposed that the formation of such a porous region beneath the straight pores depends on the type of barrier formed at the metal/silicon interface. In the case of upward band bending, the electric field in the depletion layer points towards the metal, so holes cannot diffuse further into the substrate and no micro-porous region is observed. In the case of downward band bending, holes can escape into the bulk of the silicon substrate and eventually cause etching there.
Experimental procedure of MACE
As stated above, MACE requires metal particles or a thin metal film on top of a silicon substrate. These can be deposited by several methods, such as sputter deposition or thermal evaporation. Particles can be obtained from a continuous thin film by thermal dewetting.
These deposition methods can be combined with lithography so that only desired regions are covered with metal. Since MACE is an anisotropic etching method (etching does not proceed equally in all spatial directions), a pre-defined metal pattern can be transferred directly into the silicon substrate.
Another method of depositing metal particles or thin films is electroless plating of noble metals on the surface of silicon. Since the redox potentials of the redox couples of noble metals are below the valence band edge of silicon, noble metal ions can (like described in the theory section) inject holes (or extract electrons) from the substrate while they are reduced. In the end metallic particles or films are obtained at the surface.
Finally, after the metal has been deposited on the silicon surface, the sample is immersed in an etching solution containing hydrofluoric acid and an oxidizing agent. Etching proceeds until the oxidizing agent and the acid are consumed or until the sample is removed from the etching solution.
Applications of MACE
The reason why MACE is heavily researched is that it allows completely anisotropic etching of silicon substrates, which is not possible with other wet chemical etching methods (see figure to the right). Usually the silicon substrate is covered with a protective layer such as photoresist before it is immersed in an etching solution. The etching solution generally has no preferred direction of attack on the substrate, so isotropic etching takes place. In semiconductor engineering, however, it is often required that the sidewalls of the etched trenches be steep. This is usually achieved with gas-phase methods such as reactive ion etching, which require expensive equipment compared to simple wet etching. MACE, in principle, allows the fabrication of steep trenches while remaining cheap compared to gas-phase etching methods.
Porous silicon
Metal assisted chemical etching allows for the production of porous silicon with photoluminescence.
Black silicon
Black silicon is silicon with a modified surface and is a type of porous silicon. There are several works on obtaining black silicon using MACE technology. The main application of black silicon is solar energy.
Black Gallium Arsenide
Black gallium arsenide with light-trapping properties has also been produced by MACE.
References
Etching
Chemistry
Research lasers
Semiconductors
Engineering | Metal assisted chemical etching | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,697 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
52,959,107 | https://en.wikipedia.org/wiki/Kinematic%20diffraction | Kinematic diffraction is an approximation for the diffraction of waves. It assumes that the waves are only scattered once, neglecting multiple scattering. For linear wave equations, it involves summing the contributions of the partial waves emanating from different scatterers, where only the incident field drives the scattering. As a consequence, the far-field amplitude essentially corresponds to the Fourier transform of the scattering length density, which is the charge density for X-rays and the electrostatic potential for electrons. It is typically understood as the Born approximation applied to a number of scatterers, and as such is often used for X-ray crystallography. The corresponding full theory is called the dynamical theory of diffraction. For X-rays and for electrons, different approaches are used to calculate dynamical diffraction, both for transmission of high-energy electrons and for low-energy electron diffraction or reflection high-energy electron diffraction.
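In the kinematic (single-scattering) limit this Fourier-transform statement can be written explicitly; a standard form of the Born-approximation scattering amplitude, with $\rho(\mathbf{r})$ the scattering length density and $\mathbf{q}$ the scattering vector, is
$$F(\mathbf{q}) = \int \rho(\mathbf{r})\, e^{-i\,\mathbf{q}\cdot\mathbf{r}}\, \mathrm{d}^3 r, \qquad I(\mathbf{q}) \propto \left|F(\mathbf{q})\right|^2 ,$$
so the measured intensity is proportional to the squared modulus of that transform.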
References
Diffraction | Kinematic diffraction | [
"Physics",
"Chemistry",
"Materials_science"
] | 203 | [
"Materials science stubs",
"Spectrum (physical sciences)",
"Diffraction",
"Crystallography",
"Condensed matter physics",
"Condensed matter stubs",
"Spectroscopy"
] |
39,992,948 | https://en.wikipedia.org/wiki/Bent%20pin%20analysis | Bent pin analysis is a special kind of failure mode and effect analysis (FMEA) performed on electrical connectors, and by extension it can also be used for FMEA of interface wiring. This analysis is generally applicable to mission-critical and safety-critical systems and is particularly applicable to aircraft, where failures of low-tech items such as wiring can and sometimes do affect safety.
How Connectors Work
Electrical connectors carry signals and power between parts of a system that may need to be separated during manufacturing, while in use, or when maintenance is required. Each connector that is part of a mating connector pair may be part of an electrical cable assembly (in which the unmated connector has some freedom of movement), or part of a chassis or other assembly (in which the connector position is fixed). In most pairs of mating connectors, one connector is fitted with an array of socket contacts and the other connector has a corresponding array of pin (or other shaped) contacts, as illustrated in Figure 1. These are sometimes called female and male contacts. Contacts are held in fixed positions within the connector body by a solid rectangular or cylindrical block of insulating material called an insert (shaded red in the illustration). The insert includes holes to accommodate the contacts. In many modern connectors used to pass signals and power in wires, contacts are supplied separately from the connector body. The non-mating ends of the contacts are crimped or soldered to wires, and then the mating ends of the contacts are pushed into the connector inserts with a special tool. A properly inserted contact locks itself in the insert, and another special tool must be used to extract it. In some kinds of connectors the contacts are permanently captured in the insert, so it might be necessary to replace the entire connector if one contact becomes damaged.
Not all connectors are joined to wires as shown in the figure. For example, some connectors may be populated with contacts whose non-mating ends feature printed circuit (PC) tails rather than openings for wires so that the contacts may be joined directly to a printed wiring board.
Most connectors also include an outer metal jacket, called a shell (shaded blue in the illustration), which retains the insert in a fixed position with respect to the shell. A shell provides a means to handle the connector while providing the contacts with some protection against damage. Shells in a mating connector pair are designed to mate in exactly one orientation with respect to each other such that their inserts align the socket and pin contacts for mating without damage as the connectors are pushed together. Shells in most kinds of connectors also provide a mechanism to lock mating connectors together to prevent unintentional de-mating due to stress or vibration. Metal shells are often electrically connected to chassis ground for safety purposes and for control of electromagnetic interference (EMI).
A connector whose shell fits inside the shell of the mating connector is called a plug, and the other connector is called a receptacle. The figure shows a plug with pin contacts and a receptacle with socket contacts, but the opposite arrangement is also common.
How Connectors Fail
Connectors, like any other system parts, are subject to failures. Metal shells can fail mechanically such that connector pairs fail to remain mated. Bent pin analysis examines more common connector failure modes associated with connector contacts. These include loss of electrical conductivity along an intended path due to corrosion on mating surfaces of electrical contacts, wires that have broken away from the contacts, and physically damaged or bent contacts. A bent contact cannot mate with the corresponding contact in the mating connector. These bent contacts are usually called bent pins. While some contacts are not truly pins with circular cross sections, any bendable male contact is typically called a pin.
In most connectors, and as shown in Figure 1, socket contacts are held completely within the insert, with just the mating end of the socket contact accessible at the insert mating surface. With this arrangement, socket contacts have good protection from unexpected damage during handling, and socket contacts with this arrangement are therefore not subject to inadvertent bending. In contrast, the mating ends of pin contacts protrude above the surface of the insert, and mishandling can bend one or more of these pins. For example, bending can occur if a person fails to carefully align the shells of two mating connectors before pushing them together because the shell of the socket connector can sometimes be pushed against exposed pins on the pin connector. Or, a person handling a cable assembly may let an end of the cable with a pin contact connector brush against the corner of a workbench, leaving some bent pins. While one or more pins may suffer only a slight bend due to mishandling, an attempt to mate the two halves can force a slightly bent pin – which no longer aligns with the opening of its socket contact – to slide between the mating surfaces of the two inserts and wind up lying flat between them. Unfortunately, pin contacts are thin and can easily be bent in many kinds of connectors, and the effect of this bending on mating usually isn't noticeable when a human applies the relatively strong force necessary to mate a connector pair. Rather, the damage becomes known only when the system fails to operate as expected.
(Some newer connectors are designed with the exact opposite arrangement – protruding socket contacts and recessed pin contacts. The idea is that the more vulnerable pins are protected and the more rigid sockets are exposed, and if a rigid socket is bent by mishandling the damage becomes immediately obvious because it is virtually impossible to mate the two connectors. Since the damaged connector cannot be mated, and presumably the system would not be operated, there is no reason to apply bent pin analysis for this kind of connector.)
The effects of a bent pin on system operation may or may not be immediately obvious, but they are potentially catastrophic. There are several possible failure modes. If a pin that normally carries a signal or power has bent, the electrical path is now broken. If the bent pin does not touch a neighboring pin or a grounded shell, then there are no shorts to other paths. Figure 2 shows how pin spacing and diameter in one common military-type connector are such that a bent pin can fall between two others without making contact. If the bent pin touches the grounded shell, then the pin's signal is now shorted to chassis ground. If the bent pin is touching another contact (or two other contacts), then there is an electrical short between two (or three) paths (Figure 3). In some very commonly used miniature D connectors, it is possible that a bent pin can touch two neighboring contacts plus a grounded connector shell, thus shorting chassis ground to three electrical paths. The connector in Figure 3 is an example: its mating plug connector (not shown) fits inside the Figure 3 receptacle shell, and the plug's shell is therefore closer to the pins than the receptacle shell. This means that the bent pin in the figure can touch a shell.
Special Considerations in Bent Pin Analysis
As with any kind of FMEA, bent pin analysis considers only one failure mode at a time. A simple (and traditional) bent pin analysis looks at consequences of each pin bending to each of its neighbors and to the shell. However, as noted above, a bent pin can sometimes touch more than one electrical path at once, so a more complete analysis also considers multiple simultaneous failures caused by the singular failure mode of one bent pin.
Bent pin analysis also determines effects of unused pins that can bend. An unwired but bent “spare” pin may cause no noticeable effect at all, but it may also short two other paths together, or it may short a neighboring path to a grounded shell.
Non-Bending Failure Modes
Bent pin analysis also considers open paths between mating contacts. An open path may be caused by a bent pin that does not touch any neighboring contact (depending on pin density, this is possible in some connectors and impossible in others), but it may also be caused by failure modes other than bending. As noted above, one common failure mode is corrosion of the mating surfaces of contacts, but corrosion may also affect the interface where the wire is joined to the contact. Another failure mode is an improperly seated contact (one that has not been properly locked into place in its insert during manufacturing, or one whose locking mechanism fails), so that the contact is pushed out of the insert during the mating process, or it can "walk out" as a result of pull from its attached wire. At some point the improperly seated contact moves away from its mating contact and breaks the electrical path.
Performing Bent Pin Analysis
As with any other FMEA, bent pin analysis consists of two parts: determining failure modes, and determining the consequences (failure effects) on system behavior.
Determining Failure Modes
The failure modes of a particular pin always include (a) open circuit due to corrosion or other non-bending failure, and at least one of the following if the pin is bendable: (b) bending to nothing, (c) bending to one neighboring pin, (d) bending to one neighboring pin and the shell, (e) bending two neighboring pins, (f) bending to two neighboring pins and the shell, and (g) bending to the shell.
In bent pin analysis, as it is usually performed, failure modes of each pin are determined using a scaled drawing of the connector and its pins. The analyst considers each bendable pin, one at a time, and determines which neighboring pins (if any) the selected pin can reach if bent, and whether the selected bent pin can reach the shell. The analysis usually does not include failure modes in which a bent pin simultaneously touches more than one other pin or a pin and the shell. If the analysis requires failure rates, an approximation is usually made by assigning an average failure rate to each failure mode based on the overall connector failure rate and the number of pins.
Since this approach relies on human judgment there can be errors in the conclusions. Even with a conservative approach to cover "worst case" outcomes of bending, concluding that a bent pin can reach another pin (or the shell) when that failure mode is physically impossible is just as much an error as concluding that a bent pin cannot reach another pin (or the shell) when that failure mode is in fact possible.
A more mathematical approach can be applied to determine bending failure modes and the failure rate of each. The approach is to compute the maximum reach of a bent pin as a radius from the pin's center in the insert, then to compute the distance from the bent pin's center to the closest part of each neighboring pin (and the shell). If the bent pin's radius can reach a neighboring pin (or the shell), then the probability of contact with that item can be computed, given that the pin is bent. Probability is computed from items 1, 2, and 3 in the following list; failure rate is computed from the probability and items 4 and 5. (A minimal sketch of the geometric part of this calculation follows the list.)
1. Shell and pin dimensional data from military or manufacturer's drawings.
2. The ratio of open path failures to shorting failures from published data (e.g., FMD-97).
3. Ground rules listed in the following section.
4. Connector failure rate (specifically for the pin connector of the mating connector pair) from published data (e.g. MIL-HDBK-217.)
5. Exposure time (the period for which the failure rate is computed).
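The following is a minimal Python sketch of the geometric reach and contact-probability calculation described above, assuming idealized straight bending of round pins over a flat insert face; the function names, the example dimensions and the simple angular model are illustrative only and are not taken from any standard.

```python
import math

def max_reach(pin_height, pin_radius):
    """Worst-case radius a bent pin can cover when pushed flat against the insert face
    (a straight, uncurved bend is assumed; see Ground Rule 4 for the caveat)."""
    return pin_height + pin_radius

def can_touch(bent_xy, bent_height, bent_radius, target_xy, target_radius):
    """True if the bent pin's reach extends to the nearest surface of a neighboring pin."""
    distance = math.hypot(target_xy[0] - bent_xy[0], target_xy[1] - bent_xy[1])
    return max_reach(bent_height, bent_radius) >= distance - target_radius

def contact_probability(bent_xy, bent_radius, target_xy, target_radius):
    """Fraction of bend directions that hit the target, assuming all directions
    are equally likely (Ground Rule 3) and the pin is long enough to reach it."""
    distance = math.hypot(target_xy[0] - bent_xy[0], target_xy[1] - bent_xy[1])
    half_angle = math.asin(min(1.0, (target_radius + bent_radius) / distance))
    return (2.0 * half_angle) / (2.0 * math.pi)

# Example: a 6 mm tall pin of 0.5 mm radius and a neighbor whose center is 2.5 mm away.
print(can_touch((0.0, 0.0), 6.0, 0.5, (2.5, 0.0), 0.5))
print(round(contact_probability((0.0, 0.0), 0.5, (2.5, 0.0), 0.5), 3))
```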
Even with mathematical analysis, however, results can be subjective, particularly since determining the reach of a bent pin requires some engineering judgment. Nothing specifies the characteristics of a bend or its location along the mating surface of the insert. Some connectors also include a thin soft rubber seal (called “mating seal with pin barriers”) on the insert's mating surface to minimize moisture flow from the rear of the insert to the contact mating surfaces (Figure 2 is an example), and this seal adds some unpredictability to the pin's bend radius and location.
Engineering judgment is also sometimes required to determine the dimensions of the inside shell surface of a connector. For example, the shell of a common miniature D socket connector, which always fits inside the shell of the pin connector when mated (Figure 3 is an example), is the closest shell surface to the pins. The dimensions of this inside surface determine whether a bent pin can reach a grounded shell, and the likelihood of that event, but these dimensions are not always published. It may be necessary to derive them from the published outside dimensions and the shell material thickness, or to make actual measurements.
Additionally, since published drawings usually give minimum and maximum values for each dimension, engineering judgment is required to select one appropriate value from the given range for each dimension needed in the analysis.
These kinds of subjectivity are relevant only in connectors where it is not clearly obvious that each bent pin will or will not make contact with each neighboring item.
Ground Rules for Mathematical Analysis
A mathematical approach requires ground rules for handling input data for each pin in a uniform way.
1. A pin is designated as either bendable or not bendable.
2. All pins are equally likely to fail in the same way.
3. A pin, if inadvertently bent, is equally likely to bend in any direction.
4. A bent pin that has been pushed flat against the mating surface of its insert may be slightly curved.
5. An unwired bent pin that can touch two or more electrical paths simultaneously has open and shorted failure modes.
Ground Rule 1 means that a pin can be bent to lie on the mating surface of its insert, or it does not bend at all. Certain pins that are thick (have large cross-sections) and certain kinds of contacts may be designated as non-bendable, although some organizations require that every pin must be considered bendable. However, a pin designated as unbendable is still part of the analysis because other pins may bend to it, resulting in the bent pin's shorting to the unbendable pin's path. An unbendable pin may also fail due to corrosion.
Ground Rule 2 means each pin is equally likely to bend, each pin is equally likely to cause an open path due to surface corrosion, etc.
Ground Rule 3 applies to pins with symmetrical cross sections (i.e., circular or square). In contrast, blade contacts that are sometimes used in high-density circuit board edge connectors have cross sections that are thicker in one dimension and thinner in the other. Blade contacts may be considered equally likely to bend in either direction of their narrow dimension.
Ground Rule 4 accounts for the fact that a pin may curve as the mating surfaces force it to bend 90 degrees from its normal direction. This means that a bent pin might touch a pin whose line of sight is blocked by a third pin standing between them, or that a bent pin might simultaneously touch two neighboring pins whose separation is greater than the bent pin's diameter. The characteristics of such bending are subjective.
Ground Rule 5 means that an unwired “spare” pin that can cause system effects when bent (for example, if it can short two neighboring paths together, or if it can short a neighboring path to a grounded shell) must be analyzed like a non-spare pin. It will have both open and shorted failure modes, although the consequence of an open circuit (without shorting to anything) is “no effect,” and the consequence of shorting to other pin(s) or to the shell without system effects is also “no effect.”
With these ground rules and the information cited in the previous section, each possible failure mode and its associated failure rate can be computed such that the sum of the failure rates of each failure mode equals the failure rate of connector assembly (for contact failures). A list of each possible failure modes is the basis for the next part of the analysis: determining the effects of each failure mode.
Determining Failure Effects
As with FMEA in general, there are typically three levels of failure effects for each failure mode: local or low level, mid-level, and system or end level. For bent pin analysis, local level failure effect descriptions can be precisely stated in terms of the bent pin's signal role (e.g., "input" or "output"), signal name, action (e.g., "shorts"), and affected signal path (e.g., "xyz normal path"). This means that low level failure effect descriptions can be composed without considering any other parts of the system. Since this text is independent of other system activities, local level failure effect descriptions can also be generated by software. Mid- and system level effects usually require investigation of other system parts.
For example, a failure mode might be listed on the FMEA worksheet as “Pin A shorts Pin K,” and the corresponding local level failure effect might be “Input Signal X shorts Signal Y normal path.” (Here, bent Pin A carries Signal X and undamaged Pin K carries Signal Y.) Note that the failure mode “Pin A shorts Pin K” is very different from “Pin K shorts Pin A,” and the failure effects in general would also be very different.
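As a small illustration of how such local-level effect text can be generated by software from connector data, the sketch below composes the strings from a signal's role, name and path; the record layout and the wording template are assumptions for illustration, not part of any FMEA standard.

```python
# Hypothetical pin records: signal role, signal name, and the path each pin belongs to.
PINS = {
    "A": {"role": "Input", "signal": "Signal X", "path": "Signal X normal path"},
    "K": {"role": "Output", "signal": "Signal Y", "path": "Signal Y normal path"},
}

def local_effect(bent_pin, touched_pin):
    """Compose the local-level failure effect text for the mode 'bent_pin shorts touched_pin'."""
    bent = PINS[bent_pin]
    touched = PINS[touched_pin]
    return f"{bent['role']} {bent['signal']} shorts {touched['path']}"

print("Pin A shorts Pin K ->", local_effect("A", "K"))
print("Pin K shorts Pin A ->", local_effect("K", "A"))
```

Note how the two orderings produce different effect text, matching the observation above that "Pin A shorts Pin K" and "Pin K shorts Pin A" are distinct failure modes.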
Signal Roles
When determining consequences of a bent pin that shorts to another electrical path, it is important to consider whether the bent pin is connected to the source of the signal or power, rather than connected to the destination or load. In the former case, the bent pin connects its signal or power to a neighboring path; in the latter case, the signal or power of the normal path feeds the destination or load of the broken path. Consequences of these two cases are, in general, vastly different. For example, a bent pin may be part of a path labeled "+5VDC," but if the pin is connected to the load end of the path, then it would be an error to assume that the pin will put 5 volts on whatever it touches. To prevent this kind of error during analysis, it is useful to identify each signal's role on each pin. In the example of the previous paragraph, the signal role was “input,” and this meant the bent pin was connected to the load or destination. If the cited role were “output,” that would mean that the bent pin was connected to the source of the signal or power. The list of useful roles to aid the analysis might include input, output, bidirectional, power, ground, spare, and shell.
Other Considerations
Grounds. The role “ground” may be ambiguous in systems that isolate different kinds of grounds (typical isolated grounds are analog signal ground, digital signal ground, AC power ground, DC power ground, and chassis ground). If different kinds of ground paths are in separate paths in a connector, the analysis should treat them as separate signals. Also, paths that connect shields associated with twisted pairs and coaxial paths should be treated as separate signals even though they are all "ground" paths because a disconnected shield may affect the associated twisted pair or coax path.
Redundant Paths. Two paths with the same name aren't necessarily redundant. Multiple paths can be considered redundant only if (1) loss of one path doesn't cause the remaining path(s) to have an unsafe current load, excessive voltage drop, or excessive impedance, and (2) the paths are both connected at each end. For example, multiple paths with the same name may originate from the same source but if the paths terminate at separate loads then a bent pin may cause one load to see an open circuit.
Equivalent Effects. In many analyses, there are multiple signals whose failure effects are identical for identical failure modes. For example, in a connector carrying paths of 32 data bits of equal importance, the mid- and system level effects of any one open path are identical to the mid- and system level effects of any other open path. The implication is that the analysis must determine the mid-level and system level effects for only the first occurrence of an open data bit path on the worksheet. The remaining 31 open path effect descriptions can be made identical to the first by setting each to the corresponding values of the first. That way, a correction is made in only the first line where the failure mode appears on the worksheet, and the others will be corrected automatically.
A Bent Pins FMEA Worksheet
Figure 4 is a simplified sample of a typical FMEA worksheet for bent pin analysis. Additional columns of information may be added as shown in the separate article on FMEA. This sample is based on a format generated by a bent pin analysis software package and using data for a 79-pin connector. (Some columns of information have been removed from the original format to limit the table size for this article.) Information shown in the figure is derived from connector-related information as described above. Mid- and System ("Hi") level effect descriptions are not shown but would be supplied by human analysts. In cell A2 of this sample, “P5-1@” means that Pin 1 of connector P5 has opened a path due to causes other than bending. In cell A3, “P5-1” means that the Pin 1 path has been opened due to bending (but not touching anything else). While the effects of these two failure modes are the same, they are listed separately on the worksheet because their failure rates are different and reflect the fact that open path failures are far more likely than shorted path (bend-related) failures. The failure rates in column G are per million hours and the sum of all failure rates equals the connector failure rate. (The individual failure rates are derived from the connector failure rate.)
Extensions to Bent Pin Analysis
Variations of bent pin analysis include FMEA of wiring rather than connectors. Cable Matrix Analysis is one variation that is used to determine effects of shorts in electrical cables between each conductor and its neighbors due to failure of wire insulation, given the ground rule that no paths are broken when such shorts occur. Cable matrix analysis may also include effects of non-shorting but open paths, and shorts between wires and chassis ground caused by failure of wire insulation.
References
Further reading
C. A. Ericson II, “Hazard Analysis Techniques for System Safety,” Chapter 20, John Wiley & Sons, 2005
Electrical tests
Reliability analysis | Bent pin analysis | [
"Engineering"
] | 4,557 | [
"Electrical engineering",
"Electrical tests",
"Reliability analysis",
"Reliability engineering"
] |
39,993,538 | https://en.wikipedia.org/wiki/Ehrenpreis%27s%20fundamental%20principle | In mathematical analysis, Ehrenpreis's fundamental principle, introduced by Leon Ehrenpreis, states:
Every solution of a system (in general, overdetermined) of homogeneous partial differential equations with constant coefficients can be represented as the integral with respect to an appropriate Radon measure over the complex “characteristic variety” of the system.
References
Mathematical analysis | Ehrenpreis's fundamental principle | [
"Mathematics"
] | 75 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
39,993,849 | https://en.wikipedia.org/wiki/BIOPAN | BIOPAN is a multi-user research program by the European Space Agency (ESA) designed to investigate the effect of the space environment on biological material. The experiments in BIOPAN are exposed to solar and cosmic radiation, the space vacuum and weightlessness, or a selection thereof. Optionally, the experiment temperature can be stabilized. BIOPAN hosts astrobiology, radiobiology and materials science experiments.
The BIOPAN facility is installed on the external surface of Russian Foton descent capsules protruding from the thermal blanket that envelops the satellite.
Design and features
The BIOPAN program started in the early nineties with an ESA contract for a joint development by Kayser-Threde and Kayser Italia. It was based on the heritage of a low-tech Russian exposure container called KNA (Kontejner Nauchnoj Apparatury). The BIOPAN facilities are installed on the external surface of Foton descent capsules. Each has a motor-driven hinged lid, which opens 180° in Earth orbit to expose the experiment samples to the harsh space environment. For re-entry, the closed facility is protected with an ablative heat shield.
The BIOPAN facilities are equipped with thermometers, UV sensors, a radiometer, a pressure sensor and an active radiation dosimeter. Data acquired by the sensors is stored by BIOPAN throughout each mission and can be accessed after flight. The possibility of overheating during atmospheric re-entry was recognized early in the development, so a relatively massive heat shield was designed for the facility. While the total weight of BIOPAN, including the experiments, is close to 27 kg, the heat shield accounts for 12 kg of that figure.
The BIOPAN electronics consists of the following units: signal acquisition board, microcontroller board with its flight software, memory board and EGSE.
Missions
The missions flown so far are:
See also
Bion
Biosatellite program
EXPOSE
List of microorganisms tested in outer space
O/OREOS
OREOcube
Tanpopo
References
Astrobiology space missions
Cosmic rays
Extremophiles
Microbial growth and nutrition
Molecular biology
Space exposure experiments
Space-flown life
Space hardware returned to Earth intact | BIOPAN | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 446 | [
"Physical phenomena",
"Space-flown life",
"Astrophysics",
"Organisms by adaptation",
"Extremophiles",
"Radiation",
"Bacteria",
"Molecular biology",
"Biochemistry",
"Environmental microbiology",
"Cosmic rays"
] |
39,996,190 | https://en.wikipedia.org/wiki/Aluminium%20acetylacetonate | Aluminium acetylacetonate, also referred to as Al(acac)3, is a coordination complex with formula Al(C5H7O2)3. This aluminium complex with three acetylacetone ligands is used in research on Al-containing materials. The molecule has D3 symmetry, being isomorphous with other octahedral tris(acetylacetonate)s.
Uses
Aluminium acetylacetonate can be used as the precursor to crystalline aluminium oxide films using low-pressure metal organic chemical vapour deposition. In horticulture it can also be used as a molluscicide.
References
Aluminium compounds
Acetylacetonate complexes | Aluminium acetylacetonate | [
"Chemistry"
] | 142 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
57,594,608 | https://en.wikipedia.org/wiki/Concrete%20hinge | Concrete hinges are hinges made of concrete, with little or no steel in the hinge neck, which allows rotation without a significant bending moment. The high rotation capacity results from controlled tensile cracking as well as creep. Concrete hinges are mostly used in bridge engineering as a monolithic, simple and economic alternative to steel hinges, which would need regular maintenance. They are also used in tunnel engineering. A concrete hinge consists of the hinge neck, which has a reduced cross-section, and the hinge heads, which are strongly reinforced.
History and guidelines
Freyssinet invented the concrete hinge.
Leonhardt introduced design guidelines in the 1960s that remained in use into the 2010s.
Janßen introduced the application of concrete hinges in tunnel engineering.
Gladwell developed another guideline for narrowing cross-sections, which predicts a stiffer behaviour than the Leonhardt/Janßen model.
Marx and Schacht were the first to translate Leonhardt's guidelines into the semi-probabilistic safety concept used today.
Schlappal, Kalliauer and coworkers were the first to treat both limit cases, the serviceability limit state (SLS) and the ultimate limit state (ULS).
Kaufmann, Markić and Bimschas carried out further studies on concrete hinges.
Stresses, rotational capacity, bearing capacity
Due to triaxial compression, strength in the neck region is much higher than for uniaxial compression, because lateral expansion is restricted.
For typical dimensions, Eurocode 2 suggests a compressive strength of about twice the uniaxial compressive strength.
The hinge neck has little or no reinforcement, but the hinge heads need a dense reinforcement cage because of tensile splitting.
Literature
Fritz Leonhardt: Vorlesungen über Massivbau - Teil 2: Sonderfälle der Bemessung im Stahlbetonbau [Concrete hinges: test report, recommendations for structural design; critical stress states of concrete under multiaxial static short-term loading]. Springer-Verlag, Berlin 1986, pp. 123–132 (in German).
VPI: Der Prüfingenieur. Ausgabe April 2010, S. 15–26, (bvpi.de PDF; 2,3 MB). (in German)
References
Bridge design
hinge | Concrete hinge | [
"Engineering"
] | 484 | [
"Structural engineering",
"Bridge design",
"Architecture"
] |
57,601,067 | https://en.wikipedia.org/wiki/Solar%20Terrestrial%20Probes%20program | NASA's Solar Terrestrial Probes program (STP) is a series of missions focused on studying the Sun-Earth system. It is part of NASA's Heliophysics Science Division within the Science Mission Directorate.
Objectives
Understand the fundamental physical processes of the complex space environment throughout the Solar System, which includes the flow of energy and charged material, known as plasma, as well as a dynamic system of magnetic and electric fields.
Understand how human society, technological systems, and the habitability of planets are affected by solar variability and planetary magnetic fields.
Develop the capability to predict the extreme and dynamic conditions in space in order to maximize the safety and productivity of human and robotic explorers.
Missions
TIMED
TIMED (Thermosphere Ionosphere Mesosphere Energetics and Dynamics) is an orbiter mission dedicated to studying the dynamics of the Mesosphere and Lower Thermosphere (MLT) portion of the Earth's atmosphere. The mission was launched from Vandenberg Air Force Base in California on December 7, 2001 aboard a Delta II launch vehicle.
Hinode
Hinode, an ongoing collaboration with JAXA, is a mission to explore the magnetic fields of the Sun. It was launched on the final flight of the M-V-7 rocket from Uchinoura Space Center, Japan on September 22, 2006.
STEREO
STEREO (Solar Terrestrial Relations Observatory) is a solar observation mission. It consists of two nearly identical spacecraft, launched on October 26, 2006.
MMS
The Magnetospheric Multiscale Mission (MMS) is a mission to study the Earth's magnetosphere, using four identical spacecraft flying in a tetrahedral formation. The spacecraft were launched on March 13, 2015.
IMAP
IMAP (Interstellar Mapping and Acceleration Probe) is a heliosphere observation mission. Planned for launch in 2025, it will sample, analyze, and map particles streaming to Earth from the edges of interstellar space.
References
External links
NASA Goddard Space Flight Center - Solar Terrestrial Probes Program
NASA Science Mission Directorate - Solar Terrestrial Probes Program
NASA programs
Space plasmas
Space science experiments | Solar Terrestrial Probes program | [
"Physics"
] | 429 | [
"Space plasmas",
"Astrophysics"
] |
59,204,782 | https://en.wikipedia.org/wiki/Peptide%20loading%20complex | The peptide-loading complex (PLC) is a short-lived, multisubunit membrane protein complex that is located in the endoplasmic reticulum (ER). It orchestrates peptide translocation and selection by major histocompatibility complex class I (MHC-I) molecules. Stable peptide-MHC I complexes are released to the cell surface to promote T-cell response against malignant or infected cells. In turn, T-cells recognize the activated peptides, which could be immunogenic or non-immunogenic.
Overview
A PLC assembly consists of seven subunits, including the transporters associated with antigen processing (TAP1 and TAP2 – jointly referred to as TAP), the oxidoreductase ERp57, the MHC-I heterodimer, and the chaperones tapasin and calreticulin. TAP transports proteasomal degradation products from the cytosol into the lumen of the ER, where they are loaded onto MHC-I molecules. The peptide-MHC-I complexes then move via a secretory pathway to the cell surface, presenting their antigenic load to cytotoxic T-cells.
In general, nascent MHC-I heavy chains are chaperoned by the calnexin–calreticulin system in the ER. Together with β2-microglobulin (β2m), MHC-I heavy chains form heterodimers that act as receptors for antigenic peptides. Empty MHC-I heterodimers are recruited by calreticulin and form the short-lived macromolecular PLC, in which the chaperone tapasin further stabilizes the MHC-I molecules. Furthermore, ERp57 and tapasin form disulfide-linked conjugates, and tapasin is crucial for maintaining the structural stability of the PLC as well as facilitating optimal peptide loading. After final quality control, during which MHC-I heterodimers undergo peptide editing, stable peptide–MHC-I complexes are released to the cell surface for T-cell recognition. The PLC can serve a large variety of MHC-I allomorphs, thus playing a central role in the differentiation and priming of T lymphocytes, and in controlling viral infections and tumour development.
Structure
The structure of the human PLC has been determined using single-particle electron cryo-microscopy (cryo-EM). The PLC, measuring 150 Å by 150 Å and with a total height of 240 Å, is organized around the Transporter associated with Antigen Processing (TAP). It includes molecules such as tapasin, calreticulin, ERp57, and Major Histocompatibility Complex class I (MHC-I), arranged in a pseudo-symmetric pattern.
TAP
TAP is a heterodimeric complex consisting of TAP1 (ABCB2) and TAP2 (ABCB3), members of the ABC transporter superfamily. The common feature of all ABC transporters is their organization into two transmembrane domains (TMDs) and two nucleotide-binding domains (NBDs). The two domains are coupled to each other and, upon ATP binding, conformational changes in the TMDs allow proteasomal degradation products to move across the membrane. TAP recognizes and transports antigenic peptides produced in the cytosol directly into the ER, while tapasin recognizes peptides that can form stable complexes with MHC-I. This process is known as peptide proofreading or editing. Peptides selected through proofreading improve MHC-I stability; tapasin also contributes to the editing of immunogenic peptide epitopes. Only recently, however, have biochemical, biophysical, and structural studies shown that this key function in adaptive immunity, the catalytic mechanism of peptide proofreading, is performed by tapasin and TAPBPR (TAP-binding protein-related, a tapasin homologue).
Tapasin
Cresswell and co-workers first discovered tapasin (TAP-associated glycoprotein) as a 48 kDa protein in complexes isolated with TAP1 antibodies from digitonin lysates of human B lymphoblastoid cells. Tapasin binds HC/β2m along with ER chaperones to the peptide transporter. It is located in the ER, and its function comprises holding class I molecules, together with the chaperone calreticulin and ERp57, to TAP. Studies of a tapasin-deficient cell line and of mice bearing a disrupted tapasin gene revealed that, in the absence of tapasin, class I molecules form only short-lived complexes.
Tapasin and TAP are very important for the stabilization of class I molecules and for the optimization of the peptides presented to cytotoxic T cells. A PLC-independent tapasin homologue named TAPBPR was found that can act as a second MHC-I-specific peptide proofreader or editor, but does not possess a transmembrane domain. Tapasin and TAPBPR share similar binding interfaces on MHC-I, as shown by the X-ray structure of TAPBPR with MHC-I (heavy chain and β2-microglobulin). The use of a photo-cleavable high-affinity peptide allowed researchers first to form stable peptide-bound MHC-I molecules and then, after cleaving the peptide with UV light, a stable TAPBPR–MHC-I complex.
ERp57
ERp57 is an enzyme of the thiol oxidoreductase family located in the ER. It is attached to substrates indirectly, through association with the molecular chaperone calreticulin of the peptide-loading complex. In the early stages of MHC-I generation, ERp57 is associated with free MHC-I heavy chains. Its functions thus include the formation of disulfide bonds in the heavy chains, oxidative folding of the heavy chain, and ultimately the loading of peptides onto MHC-I molecules.
MHC-I
MHC-I heavy chains are chaperoned by the calnexin–calreticulin system in the ER. β2-microglobulin (β2m) attaches to the heavy chains, and the resulting heterodimers act as receptors for antigenic peptides. When MHC-I heterodimers are empty, they are recruited by calreticulin and form a transient PLC, in which tapasin contributes to the stabilization of MHC-I. Only after MHC-I heterodimers have undergone peptide proofreading or editing are stable pMHC-I (peptide–MHC-I) complexes released to the cell surface for recognition and destruction of virus-infected or malignant cells. In general, each individual carries a collection of six MHC-I molecules (three inherited from each parent). Compatible donors are therefore typically relatives whose collection of MHC-I molecules is similar to that of the recipient.
Calreticulin
Calreticulin – especially its lectin-like domain – interacts with MHC-I. The P domain faces the MHC-I peptide-binding site towards ERp57, an orientation that makes it possible for tapasin to attach and secure MHC-I. TAP opens out into an ER luminal cavity edged by membrane entry points such as those for tapasin and MHC-I. These entry points facilitate the recruitment of MHC-I, optimal peptide loading, and the eventual release of MHC-I to the cell surface for T-cell recognition.
References
Peptides
Immune system
Protein targeting
Transmembrane proteins | Peptide loading complex | [
"Chemistry",
"Biology"
] | 1,681 | [
"Biomolecules by chemical classification",
"Immune system",
"Protein targeting",
"Organ systems",
"Cellular processes",
"Molecular biology",
"Peptides"
] |
59,205,557 | https://en.wikipedia.org/wiki/Messaging%20Layer%20Security | Messaging Layer Security (MLS) is a security layer for end-to-end encrypting messages. It is maintained by the MLS working group of the Internet Engineering Task Force, and is designed to provide an efficient and practical security mechanism for groups as large as 50,000 and for those who access chat systems from multiple devices.
Security properties
Security properties of MLS include message confidentiality, message integrity and authentication, membership authentication, asynchronicity, forward secrecy, post-compromise security, and scalability.
History
The idea was born in 2016 and first discussed in an unofficial meeting during IETF 96 in Berlin with attendees from Wire, Mozilla and Cisco.
Initial ideas were based on pairwise encryption for secure 1:1 and group communication. In 2017, an academic paper introducing Asynchronous Ratcheting Trees was published by the University of Oxford and Facebook setting the focus on more efficient encryption schemes.
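A rough back-of-the-envelope sketch in Python of why tree-based schemes such as Asynchronous Ratcheting Trees scale better than pairwise encryption for large groups (illustrative only; the cost model counts key-update messages and is a simplifying assumption, not part of the MLS specification):

import math

def pairwise_update_cost(n):
    # Pairwise 1:1 channels: a member refreshing its key material must
    # send fresh material to every other member individually.
    return n - 1

def tree_update_cost(n):
    # Ratcheting-tree schemes: an update only touches the path from the
    # member's leaf to the root, i.e. about log2(n) nodes.
    return math.ceil(math.log2(n)) if n > 1 else 0

for n in (2, 100, 50_000):
    print(n, pairwise_update_cost(n), tree_update_cost(n))
# For a 50,000-member group: 49,999 pairwise messages versus about 16 tree nodes.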
The first BoF took place in February 2018 at IETF 101 in London. The founding members are Mozilla, Facebook, Wire, Google, Twitter, University of Oxford, and INRIA.
On March 29, 2023, the IETF approved publication of Messaging Layer Security (MLS) as a new standard, and it was officially published on July 19, 2023. At that time, Google announced that it intended to add MLS to the end-to-end encryption used by Google Messages over RCS.
Matrix is one of the protocols that has announced plans to migrate to MLS.
Research on adding post-quantum cryptography (PQC) to MLS is ongoing, but MLS does not currently support PQC.
Implementations
OpenMLS: language: Rust, license: MIT
MLS++: language: C++, license: BSD-2
mls-rs: language: Rust, license: MIT, Apache 2.0
MLS-TS: language: TypeScript, license: Apache 2.0
References
External links
RFC 9420 The Messaging Layer Security (MLS) Protocol
Cryptography
Internet privacy
Secure communication | Messaging Layer Security | [
"Mathematics",
"Engineering"
] | 409 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
59,207,188 | https://en.wikipedia.org/wiki/Molecular%20layer%20deposition | Molecular layer deposition (MLD) is a vapour phase thin film deposition technique based on self-limiting surface reactions carried out in a sequential manner. Essentially, MLD resembles the well established technique of atomic layer deposition (ALD) but, whereas ALD is limited to exclusively inorganic coatings, the precursor chemistry in MLD can use small, bifunctional organic molecules as well. This enables, as well as the growth of organic layers in a process similar to polymerization, the linking of both types of building blocks together in a controlled way to build up organic-inorganic hybrid materials.
Although MLD is an established technique in the thin-film deposition field, its relative youth means it is not as thoroughly explored as its inorganic counterpart, ALD, and considerable development across the sector is expected in the coming years.
History
Molecular layer deposition is a sister technique of atomic layer deposition. While the history of atomic layer deposition dates back to the 1970s, thanks to the independent work of Valentin Borisovich Aleskovskii and Tuomo Suntola, the first MLD experiments with organic molecules were not published until 1991, when an article from Tetsuzo Yoshimura and co-workers appeared regarding the synthesis of polyimides using amines and anhydrides as reactants. After further work on organic compounds during the 1990s, the first papers related to hybrid materials emerged, combining both ALD and MLD techniques. Since then, the number of articles on molecular layer deposition published per year has increased steadily, and a more diverse range of deposited layers has been reported, including polyamides, polyimines, polyurea, polythiourea and some copolymers, with special interest in the deposition of hybrid films.
Reaction mechanism
As in an atomic layer deposition process, during an MLD process the reactants are pulsed in a sequential, cyclical manner, and all gas–solid reactions are self-limiting on the sample substrate. Each of these cycles is called an MLD cycle, and layer growth is measured as growth per cycle (GPC), usually expressed in nm/cycle or Å/cycle. In a model two-precursor experiment, an MLD cycle proceeds as follows:
First, precursor 1 is pulsed in the reactor, where it reacts and chemisorbs to the surface species on the sample surface. Once all adsorption sites have been covered and saturation has been reached, no more precursor will attach, and excess precursor molecules and generated byproducts are withdrawn from the reactor, either by purging with inert gas or by pumping the reactor chamber down. Only when the chamber has been properly purged with inert gas/pumped down to base pressure (~ 10−6 mbar range) and all unwanted molecules from the previous step have been removed, can precursor 2 be introduced. Otherwise, the process runs the risk of CVD-type growth, where the two precursors react in the gaseous phase before attaching to the sample surface, which would result in a coating with different characteristics.
Next, precursor 2 is pulsed and reacts with the precursor 1 molecules anchored to the surface during the previous step. This surface reaction is again self-limiting and, after the reactor is once more purged or pumped down to base pressure, leaves behind a layer terminated with surface groups that can react with precursor 1 in the next cycle. In the ideal case, repeating the MLD cycle builds up an organic/inorganic film one molecular layer at a time, enabling highly conformal coatings with precise thickness control and high film purity.
If ALD and MLD are combined, more precursors in a wider range can be used, both inorganic and organic. In addition, other reactions can be included in the ALD/MLD cycles as well, such as plasma or radical exposures. This way, an experiment can be freely customised according to the research needs by tuning the number of ALD and MLD cycles and the steps contained within the cycles.
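As an illustration of how such a recipe can be composed, a minimal Python sketch of a hybrid ALD/MLD supercycle is shown below; the precursor names (TMA, H2O, ethylene glycol) are common example chemistries for alucone-type films, and the pulse and purge times are placeholders rather than parameters of any published process:

def two_precursor_cycle(precursor_a, precursor_b, pulse_s=1.0, purge_s=10.0):
    # One two-precursor cycle: pulse A, purge, pulse B, purge.
    return [("pulse", precursor_a, pulse_s), ("purge", purge_s),
            ("pulse", precursor_b, pulse_s), ("purge", purge_s)]

def supercycle(n_ald, n_mld, repeats):
    # Hybrid recipe: n_ald inorganic (ALD) cycles followed by n_mld organic
    # (MLD) cycles, repeated; the n_ald:n_mld ratio tunes the film composition.
    steps = []
    for _ in range(repeats):
        for _ in range(n_ald):
            steps += two_precursor_cycle("TMA", "H2O")
        for _ in range(n_mld):
            steps += two_precursor_cycle("TMA", "ethylene glycol")
    return steps

recipe = supercycle(n_ald=4, n_mld=1, repeats=100)  # e.g. a 4:1 supercycle recipe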
Process chemistry and surface reactions
Precursor chemistry plays a key role in MLD. The chemical properties of the precursor molecules drive the composition, structure and stability of the deposited hybrid material. To reach the saturation stage in a short time and ensure a reasonable deposition rate, precursors must chemisorb on the surface, react rapidly with the surface active groups and react with each other. The desired MLD reactions should have a large negative ∆G value.
Organic compounds are employed as precursors for MLD. For their effective use, a precursor should have sufficient vapor pressure and thermal stability to be transported in the gas phase to the reaction zone without decomposing. Volatility is influenced by the molecular weight and by intermolecular interactions. One of the challenges in MLD is to find an organic precursor that has sufficient vapor pressure, reactivity and thermal stability. Most organic precursors have low volatility, and heating is necessary to ensure a sufficient supply of vapor reaching the substrate. The backbone of an organic precursor can be flexible (i.e. aliphatic) or rigid (i.e. aromatic), with functional groups attached. The organic precursors are usually homo- or heterobifunctional molecules with -OH, -COOH, -NH2, -CONH2, -CHO, -COCl, -SH, -CNO, -CN, alkene, etc. functional groups. The bifunctional nature of the precursors is essential for continuous film growth, as one group is expected to react with the surface while the other remains accessible to react with the next pulse of the co-reactant. The attached functional groups play a vital role in the reactivity and binding modes of the precursor and should be able to react with the functional groups present at the surface. A flexible backbone may hinder the growth of a continuous and dense film by back-coordination, blocking the reactive sites and thus lowering the film growth rate. Finding an MLD precursor that fulfils all the above-mentioned requirements is therefore not a straightforward process.
Surface groups play a crucial role as reaction intermediates. The substrate is usually hydroxylated or hydrogen-terminated, and hydroxyls serve as reactive linkers for condensation reactions with metals. The inorganic precursor reacts with the reactive surface groups via the corresponding linking chemistry, leading to the formation of new oxygen–metal bonds. The metal precursor step changes the surface termination, leaving the surface with new reactive sites ready to react with the organic precursor. The organic precursor reacts at the resulting surface by bonding covalently to the metal sites, releasing metal ligands and leaving another reactive molecular layer ready for the next pulse. Byproducts are released after each adsorption step.
Process considerations
When performing an MLD process, as a variant of ALD, certain aspects need to be taken into account in order to obtain the desired layer with adequate purity and growth rate:
Saturation
Before starting an experiment, the researcher must know whether the designed process yields saturated or unsaturated conditions; if this is unknown, establishing it is a priority for obtaining accurate results. If the precursor pulsing times are not long enough, the reactive surface sites of the sample will not have sufficient time to react with the gaseous molecules and form a complete monolayer, which translates into a lower growth per cycle (GPC). To determine the saturation conditions, a saturation experiment can be performed, in which film growth is monitored in situ at different precursor pulsing times and the resulting GPCs are plotted against pulsing time.
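A minimal sketch of such a saturation analysis in Python, assuming (as a simplification) first-order Langmuir-type adsorption kinetics so that the GPC approaches its saturated value exponentially with pulse time; the pulse times and GPC values are placeholders, not measured data:

import numpy as np

def gpc_model(t_pulse, gpc_sat, tau):
    # First-order saturation: GPC(t) = GPC_sat * (1 - exp(-t / tau))
    return gpc_sat * (1.0 - np.exp(-t_pulse / tau))

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # pulse times in seconds (placeholders)
gpc = gpc_model(t, gpc_sat=0.30, tau=2.0)        # GPC in nm/cycle (placeholder values)
saturated = t[gpc > 0.95 * 0.30]                 # pulse times giving >95 % of saturation
print(saturated.min() if saturated.size else "not saturated in the tested range")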
Additionally, too short purging times will leave precursor molecules remaining in the reactor chamber, which can then react in the gas phase with the precursor molecules introduced in the next step, yielding an undesired CVD-grown layer instead.
MLD window
Film growth usually depends on the deposition temperature. The so-called MLD window is a temperature range in which, ideally, film growth remains constant. When working outside of the MLD window, a number of problems can occur:
When working at lower temperatures: limited growth, due to insufficient reactivity; or condensation, which will appear like a higher GPC than expected.
When working at higher temperatures: precursor decomposition, which originates non-saturating uncontrolled growth; or desorption that will lower deposition rates.
In addition, even when working within the MLD window, GPCs can still vary with temperature sometimes, due to the effect of other temperature-dependent factors, such as film diffusion, number of reactive sites or reaction mechanism.
Non-idealities
Non-monolayer growth
When carrying out an MLD process, the ideal case of one monolayer per cycle is not usually applicable. In the real world, many parameters affect the actual growth rate of the film, which in turn produce non idealities like sub-monolayer growth (deposition of less than a full layer per cycle), island growth and coalescence of islands.
Substrate effects
During an MLD process, film growth will usually reach a constant value (GPC). However, during the first cycles, incoming precursor molecules do not interact with a surface of the grown material but rather with the bare substrate, and thus undergo different chemical reactions with different reaction rates. As a consequence, growth rates can experience a substrate enhancement (faster substrate-film reaction than film-film reactions), and therefore higher GPCs in the first cycles, or a substrate inhibition (slower substrate-film reaction than film-film reactions), accompanied by a GPC decrease at the beginning. In some depositions, however, the growth rates in these initial cycles can be very similar to the steady-state value.
Lower than anticipated growth
In MLD, it is not strange to observe that, often, experiments yield lower than anticipated growth rates. The reason for this relies on several factors, such as:
Molecule tilting: organic molecules with long chains are prone to not remaining completely perpendicular to the surface, lowering the number of surface sites.
Bidentate ligands: when a reacting molecule has two functional groups, it may bend and react with two surface sites instead of remaining upright on the surface. This has been shown, for instance, for titanicones grown with ethylene glycol and glycerol: because glycerol has an additional hydroxyl group compared to ethylene glycol, it can still provide a reactive hydroxyl group even when both terminal hydroxyl groups undergo a double reaction with the surface.
Steric hindrance: organic precursors are often bulky, and can cover several surface groups when attached to the surface.
Long pulsing times: organic precursors can have very low vapour pressures, and very long pulsing times may be necessary in order to achieve saturation. In addition, long purging times are usually needed to remove all unreacted molecules from the chamber afterward.
Low temperatures: to increase the precursor vapor pressure, one might think of increasing its temperature. Nevertheless, organic precursors are usually very thermally fragile, and a temperature increase may induce decomposition.
Gas-phase: many organic reactions are normally carried out in the liquid phase, and are therefore dependent on acid-base interactions or solvation effects. These effects are not present in the gaseous phase and, as a consequence, many processes will yield lower reaction rates or will not be possible at all.
This phenomenon can be avoided as much as possible by using organic precursors with stiff backbones or with more than two functional groups, using a three step reaction sequence, or using precursors in which ring-opening reactions occur.
Physical state of precursors
Liquid precursors
High volatility and ease of handling make liquid precursors the preferred choice for ALD/MLD. Generally, liquid precursors have high enough vapor pressures at room temperature and hence require little to no heating. They are also not prone to problems common with solid precursors, such as caking, particle size change and channeling, and they provide consistent and stable vapor delivery. For this reason, some solid precursors with low melting points are used in their liquid (molten) state.
A carrier gas is usually employed to carry the precursor vapor from its source to the reactor. The precursor vapors can be directly entrained into this carrier gas with the help of solenoid and needle valves. On the other hand, the carrier gas may be flown over the head space of a container containing the precursor or bubbled through the precursor. For the latter, dip-tube bubblers are very commonly used. The setup comprises a hollow tube (inlet) opening almost at the bottom of a sealed ampoule filled with precursor and an outlet at the top of the ampoule. An inert carrier gas like Nitrogen/Argon is bubbled through the liquid via the tube and led to the reactor downstream via the outlet. Owing to relatively fast evaporation kinetics of liquids, the outcoming carrier gas is nearly saturated with precursor vapor. The vapor supply to the reactor can be regulated by adjusting the carrier gas flow, temperature of the precursor and if needed, can be diluted further down the line. It must be ensured that the connections downstream from the bubbler are kept at high enough temperatures so as to avoid precursor condensation. The setup can also be used in spatial reactors which demand extremely high, stable and constant supply of precursor vapor.
In conventional reactors, hold cells can also be used as a temporary reservoir of precursor vapor. In such a setup, the cell is initially evacuated. It is then opened to a precursor source and allowed to be filled with precursor vapor. The cell is then cut off from the precursor source. Depending upon the reactor pressure, the cell may then be pressurized with an inert gas. Finally, the cell is opened to the reactor and the precursor is delivered. This cycle of filling and emptying the hold (storage) cell can be synced with an ALD cycle. The setup is not suitable for spatial reactors which demand continuous supply of vapor.
Solid precursors
Solid precursors are not as common as liquid but are still used. A very common example of a solid precursor having potential applications in ALD for semiconductor industry is trimethylindium (TMIn). In MLD, some solid co-reactants like p-Aminophenol, Hydroquinone, p-Phenylenediamine can overcome the problem of double reactions faced by liquid reactants like Ethylene glycol. Their aromatic backbone can be attributed as one of the reasons for this. Growth rates obtained from such precursors is usually higher than precursors with flexible backbones.
However, most of the solid precursors have relatively low vapor pressures and slow evaporation kinetics.
For temporal setups, the precursor is generally filled in a heated boat and the overhead vapors are swept to the reactor by a carrier gas. However, slow evaporation kinetics make it difficult to deliver equilibrium vapor pressures. In order to ensure maximum saturation of a carrier gas with the precursor vapor, the contact between a carrier gas and the precursor needs to be long and sufficient. A simple dip-tube bubbler, commonly used for liquids, can be used for this purpose. But, the consistency in vapor delivery from such a setup is prone to evaporative/sublimative cooling of the precursor, precursor caking, carrier gas channeling, changes in precursor morphology and particle size change. Also, blowing high flows of carrier gas through a solid precursor can lead to small particles being carried away to the reactor or a downstream filter thereby clogging it. In order to avoid these problems, the precursor may first be dissolved in a non-volatile inert liquid or suspended in it and the solution/suspension can then be used in a bubbler setup.
Apart from this, some special vapor delivery systems have also been designed for solid precursors to ensure stable and consistent delivery of precursor vapor for longer durations and higher carrier flows.
Gaseous precursors
ALD/MLD are both gas-phase processes. Hence, precursors are required to be introduced into the reaction zones in their gaseous form. A precursor already existing in the gaseous physical state makes its transport to the reactor very straightforward and hassle-free; for example, there is no need to heat the precursor, which reduces the risk of condensation. However, precursors are seldom available in the gaseous state. On the other hand, some ALD co-reactants are available in gaseous form. Examples include H2S, used for sulphide films; NH3, used for nitride films; and plasmas of O2 and O3, used to produce oxides. The most common and straightforward way of regulating the supply of these co-reactants to the reactor is a mass flow controller attached between the source and the reactor. They can also be diluted with an inert gas to control their partial pressure.
Film characterisation
Several characterisation techniques have evolved over time as the demand for creating ALD/MLD films for different applications has increased. This includes lab-based characterisation and efficient synchrotron-based x-ray techniques.
Lab-based characterisation
Since they both follow a similar protocol, almost all characterisation applicable to ALD generally applies to MLD as well. Many tools have been employed to characterise MLD film properties such as thickness, surface and interface roughness, composition, and morphology. Thickness and roughness (surface and interface) of a grown MLD film are of utmost importance and are usually characterised ex-situ by X-ray reflectivity (XRR). In-situ techniques offer an easier and more efficient characterisation than their ex-situ counterparts, among which spectroscopic ellipsometry (SE) and quartz crystal microbalance (QCM) have become very popular to measure thin films from a few angstroms to a few micrometers with exceptional thickness control.
X-ray photoelectron spectroscopy (XPS) and X-ray diffractometry (XRD) are widely used to gain insights into film composition and crystallinity, respectively, whereas atomic force microscopy (AFM) and scanning electron microscopy (SEM) are being frequently utilised to observe surface roughness and morphology. As MLD mostly deals with hybrid materials, comprising both organic and inorganic components, Fourier transform infrared spectroscopy (FTIR) is an important tool to understand the new functional group added or removed during the MLD cycles and also it is a powerful tool to elucidate the underlying chemistry or surface reactions during each sub cycle of an MLD process.
Synchrotron-based characterisation
A synchrotron is an immensely powerful source of x-rays that reaches energy levels which cannot be achieved in a lab-based environment. It produces synchrotron radiation, the electromagnetic radiation emitted when charged particles undergo radial acceleration, whose high power levels offer a deeper understanding of processes and lead to cutting-edge research outputs. Synchrotron-based characterisations also offer potential opportunities for understanding the basic chemistry and developing fundamental knowledge about MLD processes and their potential applications. The combination of in-situ X-ray fluorescence (XRF) and Grazing incidence small angle X-ray scattering (GISAXS) has been demonstrated as a successful methodology to learn the nucleation and growth during ALD processes and, although this combination has not yet been investigated in detail to study MLD processes, it holds great potential to improve the understanding of initial nucleation and internal structure of the hybrid materials developed by MLD or by vapour phase infiltration (VPI).
Potential applications
The main application of molecular-scale-engineered hybrid materials relies on their synergetic properties, which surpass the individual performance of their inorganic and organic components. The main fields of application of MLD-deposited materials are:
Packaging / encapsulation: depositing ultrathin, pinhole-free and flexible coatings with improved mechanical properties (flexibility, stretchability, reduced brittleness). One example are gas-barriers on organic light emitting diodes (OLEDs).
Electronics: Tailoring materials with special mechanical and dielectric properties, such as advanced integrated circuits that require particular insulators or flexible thin film transistors with high-k gate dielectrics. Also, the recovery of energy wasted as heat as electric power with certain thermoelectric devices.
Biomedical applications: to enhance either cell growth, better adhesion or the opposite, generating materials with anti-bacterial properties. These can be used in research areas like sensing, diagnostics or medicine delivery.
Combining inorganic and organic building blocks on a molecular scale has proved to be challenging, due to the different preparative conditions needed for forming inorganic and organic networks. Current routes are often based on solution chemistry, e.g. sol-gel synthesis combined with spin-coating, dipping or spraying, to which MLD is an alternative.
MLD for dielectric materials
Low-k
The dielectric constant (k) of a medium is defined as the ratio of a capacitor's capacitance with and without the medium. Delay, crosstalk and power dissipation caused by the resistance of the metal interconnects and the dielectric layer of nanoscale devices have become the main factors limiting device performance and, as electronic devices are scaled down further, interconnect resistance–capacitance (RC) delay may dominate the overall device speed. To address this, current work focuses on minimising the dielectric constant of materials by combining inorganic and organic components; the reduced capacitance allows the spacing between metal lines to shrink and, with it, the number of metal layers in a device to decrease. In these kinds of materials, the inorganic part must be hard and resistant and, for that purpose, metal oxides and fluorides are commonly used. However, since these materials are more brittle, organic polymers are also added, providing the hybrid material with a low dielectric constant, good interstitial ability, high flatness, low residual stress and low thermal conductivity. In current research, great efforts are being made to prepare low-k materials by MLD with a k value of less than 3.
High-k
Novel organic thin-film transistors require a high-performance dielectric layer, which should be thin and possess a high k-value. MLD makes it possible to tune the k-value and the dielectric strength by altering the amount and ratio of the organic and inorganic components. Moreover, the use of MLD allows better mechanical properties, in particular flexibility, to be achieved.
Various hybrid dielectrics have already been developed: zincone hybrids from zirconium tert-butoxide (ZTB) and ethylene glycol (EG); Al2O3 based hybrids such as self-assembled MLD-deposited octenyltrichlorosilane (OTS) layers and Al2O3 linkers. Additionally, dielectric Ti-based hybrid from TiCl4 and fumaric acid proved its applicability in charge memory capacitors.
MLD for porous materials
MLD has high potential for the deposition of porous hybrid organic-inorganic and purely organic films, such as Metal-Organic Frameworks (MOFs) and Covalent-Organic Frameworks (COFs). Thanks to the defined pore structure and chemical tunability, thin films of these novel materials are expected to be incorporated into the next generation of gas sensors and low-k dielectrics. Conventionally, thin films of MOFs and COFs are grown via solvent-based routes, which are detrimental in a cleanroom environment and can cause corrosion of the pre-existing circuitry. As a cleanroom-compatible technique, MLD presents an attractive alternative, which has not been fully realized yet. To date, there are no reports of direct MLD of MOFs and COFs. Scientists are actively developing other solvent-free all-gas-phase methods towards a true MLD process.
One of the early examples of an MLD-like process is the so-called "MOF-CVD". It was first realized for ZIF-8 utilizing a two-step process: ALD of ZnO followed by exposure to 2-methylimidazole linker vapor. It was later extended to several other MOFs. MOF-CVD is a single-chamber deposition method and the reactions involved exhibit self-limiting nature, bearing a strong resemblance to a typical MLD process.
An attempt to perform direct MLD of a MOF by sequential reactions of a metal precursor and an organic linker commonly results in a dense and amorphous film. Some of these materials can serve as a MOF precursor after a specific gas-phase post-treatment. This two-step process presents an alternative to MOF-CVD and has been successfully realized for a few prototypical MOFs: IRMOF-8, MOF-5 and UiO-66. Though the post-treatment step is necessary for MOF crystallization, it often requires harsh conditions (high temperature, corrosive vapors) that lead to rough and non-uniform films. A deposition with little or no post-treatment is highly desirable for industrial applications.
MLD for conductive materials
Conductive and flexible films are crucial for numerous emerging applications, such as displays, wearable devices, photovoltaics, personal medical devices, etc. For example, a zincone hybrid is closely related to a ZnO film and, therefore, may combine the conductivity of ZnO with the flexibility of an organic layer. Zincones can be deposited from diethylzinc (DEZ), hydroquinone (HQ) and water to generate a molecular chain in the form of (−Zn-O-phenylene-O−)n, which is an electrical conductor. Measurements of a pure ZnO film showed a conductivity of ~14 S/m, while the MLD zincone showed ~170 S/m, demonstrating a considerable enhancement of the conductivity in the hybrid alloy of more than one order of magnitude.
MLD for energy storage
MLD coatings for battery electrodes
One of the main applications of MLD in the battery field is to coat battery electrodes with hybrid (organic-inorganic) coatings. These coatings can potentially protect the electrodes from the main sources of degradation without cracking: they are more flexible than purely inorganic materials and can therefore cope with the volume expansion that occurs in battery electrodes upon charge and discharge.
MLD coatings on anodes: The implementation of silicon anodes in batteries is extremely interesting due to their high theoretical capacity (4200 mAh/g). Nevertheless, the huge volume change upon lithium alloying and dealloying is a major issue, as it leads to degradation of the silicon anodes. MLD thin-film coatings, such as alucones (AL-GL, AL-HQ), can be used on silicon as a buffering matrix due to their high flexibility and toughness, relieving the volume expansion of the Si anode and leading to a significant improvement in cycling performance.
MLD coatings on cathodes: Li–sulfur batteries are of great interest due to their high energy density, which makes them promising for applications such as electric vehicles (EVs) and hybrid electric vehicles (HEVs). However, their poor cycle life, caused by the dissolution of polysulfides from the cathode, is detrimental to battery performance. This, together with the large volume expansion, is among the main factors leading to poor electrochemical performance. Alucone coatings (AL-EG) on sulfur cathodes have been successfully used to address these issues.
MLD for thermoelectric materials
Atomic/molecular layer deposition (ALD/MLD), as a thin-film deposition technology with high precision and control, makes it possible to produce high-quality hybrid inorganic-organic superlattice structures. Adding organic barrier layers inside the inorganic lattice of thermoelectric materials improves the thermoelectric efficiency. This is the result of a quenching effect that the organic barrier layers have on phonons: the electrons, which are mainly responsible for electrical transport through the lattice, can pass through the organic layers mostly intact, while the phonons, which are responsible for thermal transport, are suppressed to some degree. As a result, the films have better thermoelectric efficiency.
Practical outlook
It is believed that the application of barrier layers, along with other methods for increasing thermoelectric efficiency, can help to produce thermoelectric modules that are non-toxic, flexible, cheap and stable. One such case is thermoelectric oxides of earth-abundant elements. Compared to other thermoelectric materials, these oxides have lower thermoelectric efficiency due to their higher thermal conductivity. Adding barrier layers by means of ALD/MLD is therefore a good way to overcome this negative characteristic of oxides.
MLD for biomedical applications
Bioactive and biocompatible surfaces
MLD can also be applied to the design of bioactive and biocompatible surfaces for targeted cell and tissue responses. Bioactive materials include materials for regenerative medicine, tissue engineering (tissue scaffolds), biosensors, etc. The important factors that affect the cell-surface interaction, as well as the immune response of the system, are surface chemistry (e.g. functional groups, surface charge and wettability) and surface topography. Understanding these properties is crucial for controlling cell attachment and proliferation and the resulting bioactivity of the surfaces. Furthermore, the choice of organic building blocks and the type of biomolecules (e.g. proteins, peptides or polysaccharides) used when forming bioactive surfaces is a key factor for the cellular response of the surface. MLD allows precise bioactive structures to be built by combining such organic molecules with biocompatible inorganic elements such as titanium. The use of MLD for biomedical applications is not yet widely studied and is a promising field of research, as it enables surface modification and functionalization.
A recent study published in 2017 used MLD to create bioactive scaffolds by combining titanium clusters with amino acids such as glycine, L-aspartic acid and L-arginine as organic linkers, to enhance rat conjunctival goblet cell proliferation. This novel group of organic-inorganic hybrid materials was called titaminates. Also, the bioactive hybrid materials that contain titanium and primary nucleobases such as thymine, uracil and adenine show high (>85%) cell viability and potential application in the field of tissue engineering.
Antimicrobial surfaces
Hospital-acquired infections caused by pathogenic microorganisms such as bacteria, viruses, parasites or fungi are a major problem in modern healthcare. A large number of these microbes have developed the ability to withstand common antimicrobial agents (such as antibiotics and antivirals). To overcome the increasing problem of antimicrobial resistance, it has become necessary to develop alternative, effective antimicrobial technologies to which pathogens cannot develop resistance.
One possible approach is to cover the surface of medical devices with antimicrobial agents, e.g. photosensitive organic molecules. In the method called antimicrobial photodynamic inactivation (aPDI), photosensitive organic molecules use light energy to form highly reactive oxygen species that oxidize biomolecules (such as proteins, lipids and nucleic acids), leading to pathogen death. Furthermore, aPDI can treat the infected area locally, which is an advantage for small medical devices like dental implants. MLD is a suitable technique for combining such photosensitive organic molecules, for example aromatic acids, with biocompatible metal clusters (i.e. zirconium or titanium) to create light-activated antimicrobial coatings with controlled thickness and accuracy. Recent studies show that MLD-fabricated surfaces based on 2,6-naphthalenedicarboxylic acid and Zr-O clusters have been used successfully against Enterococcus faecalis under UV-A irradiation.
Advantages and limitations
Advantages
The main advantage of molecular layer deposition relates to its slow, cyclical approach. While other techniques may yield thicker films in shorter times, molecular layer deposition is known for its thickness control at Angstrom level precision. In addition, its cyclical approach yields films with excellent conformality, making it suitable for the coating of surfaces with complex shapes. The growth of multilayers consisting of different materials is also possible with MLD, and the ratio of organic/inorganic hybrid films can easily be controlled and tailored to the research needs.
Limitations
As with its main advantage, the main disadvantage of molecular layer deposition is also related to its slow, cyclical approach. Since both precursors are pulsed sequentially during each cycle, and saturation needs to be reached each time, the time required to obtain a sufficiently thick film can easily be on the order of hours, if not days. In addition, before depositing the desired films it is always necessary to test and optimise all parameters for the process to yield successful results.
In addition, another issue related to hybrid films deposited via MLD is their stability. Hybrid organic/inorganic films can degrade or shrink in H2O. However, this can be used to facilitate the chemical transformation of the films. Modifying the MLD surface chemistries can provide a solution to increase the stability and mechanical strength of hybrid films.
In terms of cost, typical molecular layer deposition equipment can cost between $200,000 and $800,000. Moreover, the cost of the precursors used needs to be taken into consideration.
Similar to the atomic layer deposition case, there are some rather strict chemical limitations for precursors to be suitable for molecular layer deposition.
MLD precursors must have
Sufficient volatility
Aggressive and complete reactions
Thermal stability
No etching of the film or substrate material
Sufficient purity
In addition, it is advisable to find precursors with the following characteristics:
Gases or highly volatile liquids
High GPC
Unreactive, volatile byproducts
Inexpensive
Easy to synthesise and handle
Non-toxic
Environmentally friendly
References
External links
ALD/MLD process animation
ALD/MLD process design and optimisation
Thin film deposition
Semiconductor device fabrication
Chemical processes | Molecular layer deposition | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 7,150 | [
"Thin film deposition",
"Microtechnology",
"Coatings",
"Thin films",
"Chemical processes",
"Semiconductor device fabrication",
"nan",
"Chemical process engineering",
"Planes (geometry)",
"Solid state engineering"
] |
59,210,075 | https://en.wikipedia.org/wiki/Plumbylene | Plumbylenes (or plumbylidenes) are divalent organolead(II) analogues of carbenes, with the general chemical formula, R2Pb, where R denotes a substituent. Plumbylenes possess 6 electrons in their valence shell, and are considered open shell species.
The first plumbylene reported was the dialkylplumbylene [(Me3Si)2CH]2Pb, which was synthesized by Michael F. Lappert et al. in 1973.
Plumbylenes may be further classified into carbon-substituted plumbylenes, plumbylenes stabilized by a group 15 or 16 element, and monohalogenated plumbylenes (RPbX).
Synthesis
Plumbylenes can generally be synthesized via the transmetallation of PbX2 (where X denotes halogen) with an organolithium (RLi) or Grignard reagent (RMgX). The first reported plumbylene, [((CH3)3Si)2CH]2Pb, was synthesized by Michael F. Lappert et al by transmetallation of PbCl2 with [((CH3)3Si)2CH]Li. The addition of equimolar RLi to PbX2 produces the monohalogenated plumbylene (RPbX); addition of 2 equivalents leads to disubstituted plumbylene (R2Pb). Adding an organolithium or Grignard reagent with a different organic substituent (i.e. R’Li/R’MgX) from RPbX leads to the synthesis of heteroleptic plumbylenes (RR’Pb). Dialkyl-, diaryl-, diamido-, dithioplumbylenes, and monohalogenated plumbyelenes have been successfully synthesized this way.
Transmetallation with [((CH3)3Si)2N]2Pb as the Pb(II) precursor has also been used to synthesize diarylplumbylenes, disilylplumbylenes, and saturated N-heterocyclic plumbylenes.
Alternatively, plumbylenes may be synthesized from the reductive dehalogenation of tetravalent organolead compounds (R2PbX2).
Structure and bonding
The key aspects of bonding and reactivity in plumbylenes are dictated by the inert pair effect, whereby the combination of a widening s–p orbital energy gap as a trend down the group 14 elements and a strong relativistic contraction of the 6s orbital lead to a limited degree of sp hybridization and the 6s orbital being deep in energy and inert. Consequently, plumbylenes exclusively have a singlet spin state due to the large singlet–triplet energy gap, and tend to exist in an equilibrium between monomeric and dimeric forms in solution. This is in contrast to carbenes, which often have a triplet ground state and readily dimerize to form alkenes.
In dimethyllead, (CH3)2Pb, the Pb–C bond length is 2.267 Å and the C–Pb–C bond angle is 93.02°; the singlet–triplet gap is 36.99 kcal mol−1.
Diphenyllead, (C6H5)2Pb was computed with GAMESS at the B3PW91 level of theory using the basis sets 6-311+G(2df,p) for C and H and def2-svp for Pb with the ECP60MDF pseudopotential, in an adapted procedure (which uses the cc-pVTZ basis set for Pb instead). The molecular orbitals (MOs) (visualized using Chimera) and natural bond orbitals (NBOs) (visualized using multiwfn) generated are produced below, and qualitatively identical to the literature. As expected, the HOMO is 6s-dominated, and the LUMO is 6p-dominated. The NBOs are of the 6s lone pair and vacant 6p orbital respectively.
The Pb–C bond distance was found to be 2.303 Å and the C–Pb–C angle 105.7°. Notwithstanding the different levels of theory, the larger bond angle for (C6H5)2Pb compared to (CH3)2Pb can be rationalized by the greater repulsion between the sterically bulkier phenyl groups relative to methyl groups.
Atoms in molecules (AIM) topology analysis revealed critical points in (C6H5)2Pb, and is consistent with the literature.
Plumbylenes occur as reactive intermediates in the formation of tetravalent plumbanes (R4Pb). Although the inert pair effect suggests the divalent state should be thermodynamically more stable than the tetravalent state, in the absence of stabilizing substituents, plumbylenes are sensitive to heat and light, and tend to undergo polymerization and disproportionation, forming elemental lead in the process.
Plumbylenes can be stabilized as monomers by the use of sterically bulky ligands (kinetic stabilization) or heteroatom-containing substituents that can donate electron density into the vacant 6p orbital (thermodynamic stabilization).
Dimerization
Plumbylenes are able to undergo dimerization in two ways: either through the formation of a Pb=Pb double bond to form a formal diplumbene, or through bridging halide interactions. Unhalogenated plumbylenes tend to exist in an equilibrium between the monomeric and dimeric form in solution, and, due to the low dimerization energy, as either monomers or dimers in the solid state, depending on the steric bulk of substituents. However, increasing the steric bulk of lead-bound substituents can prevent the close association of plumbylene molecules and allow the plumbylene to exist exclusively as monomers in solution or even in the solid state.
The driving force for dimerization in general arises from the Lewis amphoteric nature of plumbylenes, which possess a Lewis acidic vacant 6p orbital and a weakly Lewis basic 6s lone pair, which can act as electron acceptor and donor orbitals respectively.
These diplumbenes possess a trans-bent structure similar to that in lighter, non-carbon congeners (disilenes, digermylenes, distannylenes). The observed Pb–Pb bond lengths in diplumbenes (2.90 – 3.53 Å) have been found to typically be longer than those in tetravalent diplumbanes R3PbPbR3 (2.84 – 2.97 Å). This, together with the low computed dimerization energy (energy released from the formation of dimers from monomers) of 24 kJ mol−1 for Pb2H4, indicates weak multiple bonding. This counterintuitive result is due to the pair of 6s-6p donor-acceptor interactions representing the Pb=Pb double bond in diplumbenes being less energetically favourable compared to the overlap of spn orbitals (with a higher degree of hybridization than in diplumbenes) in the Pb–Pb single bond in diplumbanes.
In monohalogenated plumbylenes, the halogen atom on one plumbylene is able to donate a lone pair into the vacant 6p orbital of the lead atom on a separate plumbylene in a bridging mode. Monohalogenated plumbylenes have been found to generally exist as monomers in solution and dimers in the solid state, but, again, sufficiently bulky substituents on lead can sterically block this dimerization mode.
Due to decreasing dimerization energy down Group 14, while monohalogenated stannylenes and plumbylenes dimerize via the halogen-bridging mode, monohalogenated silylenes and germylenes tend to dimerize via the abovementioned multiply-bonded mode instead.
In a recent study, an N-heterocyclic plumbylene was shown to undergo dimerization leading to C–H activation, existing in solution in an equilibrium between the monomer and a dimer resulting from cleavage of an aryl C–H bond and formation of Pb–C and N–H bonds. DFT studies proposed that the reaction occurred via electrophilic substitution at the arene of one plumbylene by the lead atom of another, and involves concerted Pb–C and N–H bond formation instead of insertion of Pb into the C–H bond.
Stabilizing intramolecular interactions with substituents bearing lone pairs
Plumbylenes may be stabilized by electron donation into the vacant orbital of the lead atom. The two common intramolecular modes are resonance from a lone pair on the atom directly attached to the lead or by coordination from a Lewis base elsewhere in the molecule.
For example, Group 15 or 16 elements directly adjacent to Pb donate a lone pair in a manner similar to their stabilizing effect on Fischer carbenes. Common examples of more remote electron donors include nitrogen atoms that can form a six-membered ring by bonding to the lead. Even a fluorine atom on a remote trifluoromethyl group has been observed coordinating to lead in [2,4,6-(CF3)3C6H2]2Pb.
Agostic interactions
Agostic interactions have also been shown to stabilize plumbylenes. DFT computations on the compounds [(R(CH3)2Si){(CH3)2P(BH3)}CH]2Pb (R = Me or Ph) found that agostic interactions between bonding B–H orbitals and the vacant 6p orbital lowered the energy of the molecule by ca. 38 kcal mol−1; this was supported by X-ray crystal structures showing the favourable positioning of said B–H bonds in proximity of Pb.
Reactivity
As previously mentioned, unstabilized plumbylenes are prone to polymerization and disproportionation, and plumbylenes without bulky substituents tend to dimerize in one of two modes. Below, the reactions of stabilized plumbylenes (at least at the temperatures at which they were studied) are listed.
Lewis acid-base adduct formation
Plumbylenes are Lewis acidic via the vacant 6p orbital and tend to form adducts with Lewis bases, such as trimethylamine N-oxide (Me3NO), 1-azidoadamantane (AdN3), and mesityl azide (MesN3). In contrast, the reaction between stannylenes and Me3NO produces the corresponding distannoxane (from oxidation of Sn(II) to Sn(IV)) instead of the Lewis adduct, which can be attributed to tin being a period above Pb, experiencing the inert pair effect to a lesser degree and hence having a higher susceptibility to oxidation.
In the case of AdN3, the terminal N of the azidoadamantane binds to the plumbylene via a bridging mode between the Lewis acidic Pb and the Lewis basic P atom; in the case of MesN3, the azide evolves N2 to form a nitrene, which then inserts into a C-H bond of an arene substituent and coordinates to Pb as a Lewis base.
Insertion
Similar to carbenes and other Group 14 congeners, plumbylenes have been shown to undergo insertion reactions, specifically into C–X (X = Br, I) and Group 16 E–E (E = S, Se) bonds.
Insertions into lead–substituent bonds can also occur. In reported examples, insertion is accompanied by intramolecular rearrangement that places more electron-donating heteroatoms next to the electron-deficient lead.
Transmetallation
Plumbylenes are known to undergo nucleophilic substitution with organometallic reagents to form transmetallated products. In an unusual example, the use of TlPF6, bearing the weakly coordinating anion PF6−, led to the formation of crystals of an oligonuclear lead compound with a chain structure upon work-up, highlighting the interesting reactivity of plumbylenes.
In addition, plumbylenes can also undergo metathesis with group 13 E(CH3)3 (E = Al, Ga) compounds.
Plumbylenes bearing different substituents can also undergo transmetallation and exchange substituents, with the driving force being the relief of steric strain and the low Pb-C bond dissociation energy.
Applications
Plumbylenes can be used as concurrent σ-donor-σ-acceptor ligands to metal complexes, functioning as σ-donor via its filled 6s orbital and σ-acceptor via its empty 6p orbital.
Room temperature-stable plumbylenes have also been suggested as precursors in chemical vapour deposition (CVD) and atomic layer deposition (ALD) of lead-containing materials. Dithioplumbylenes and dialkoxyplumbylenes may be useful as precursors for preparing the semiconductor material lead sulphide and piezoelectric PZT respectively.
References
Organolead compounds | Plumbylene | [
"Chemistry"
] | 2,896 | [
"Functional groups",
"Octet-deficient functional groups"
] |
59,211,466 | https://en.wikipedia.org/wiki/Convolutional%20layer | In artificial neural networks, a convolutional layer is a type of network layer that applies a convolution operation to the input. Convolutional layers are some of the primary building blocks of convolutional neural networks (CNNs), a class of neural network most commonly applied to images, video, audio, and other data that have the property of uniform translational symmetry.
The convolution operation in a convolutional layer involves sliding a small window (called a kernel or filter) across the input data and computing the dot product between the values in the kernel and the input at each position. This process creates a feature map that represents detected features in the input.
Concepts
Kernel
Kernels, also known as filters, are small matrices of weights that are learned during the training process. Each kernel is responsible for detecting a specific feature in the input data. The size of the kernel is a hyperparameter that affects the network's behavior.
Convolution
For a 2D input $x$ and a 2D kernel $K$, the 2D convolution operation can be expressed as
$$y(i, j) = \sum_{m=0}^{h-1} \sum_{n=0}^{w-1} K(m, n)\, x(i + m,\ j + n),$$
where $h$ and $w$ are the height and width of the kernel, respectively.
This generalizes immediately to nD convolutions. Commonly used convolutions are 1D (for audio and text), 2D (for images), and 3D (for spatial objects, and videos).
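As a concrete illustration, the following minimal NumPy sketch implements the 2D case in its valid (no-padding), stride-1 form; the function and variable names are chosen for illustration only, and, as in most deep-learning libraries, the kernel is applied without flipping (i.e., as a cross-correlation).

import numpy as np

def conv2d_valid(x, w):
    # x: 2D input array, w: 2D kernel; stride 1, no padding ("valid" convolution)
    h_in, w_in = x.shape
    k_h, k_w = w.shape
    h_out, w_out = h_in - k_h + 1, w_in - k_w + 1
    y = np.zeros((h_out, w_out))
    for i in range(h_out):
        for j in range(w_out):
            # dot product between the kernel and the input patch at position (i, j)
            y[i, j] = np.sum(x[i:i + k_h, j:j + k_w] * w)
    return y

# Example: a 3x3 vertical-edge kernel applied to a 5x5 input gives a 3x3 feature map
x = np.arange(25, dtype=float).reshape(5, 5)
w = np.array([[1.0, 0.0, -1.0],
              [1.0, 0.0, -1.0],
              [1.0, 0.0, -1.0]])
print(conv2d_valid(x, w).shape)  # (3, 3)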
Stride
Stride determines how the kernel moves across the input data. A stride of 1 means the kernel shifts by one pixel at a time, while a larger stride (e.g., 2 or 3) results in less overlap between convolutions and produces smaller output feature maps.
Padding
Padding involves adding extra pixels around the edges of the input data. It serves two main purposes:
Preserving spatial dimensions: Without padding, each convolution reduces the size of the feature map.
Handling border pixels: Padding ensures that border pixels are given equal importance in the convolution process.
Common padding strategies include:
No padding/valid padding. This strategy typically causes the output to shrink.
Same padding: Any method that ensures the output has the same size as the input is a same-padding strategy.
Full padding: Any method that ensures each input entry is convolved over the same number of times is a full-padding strategy.
Common padding algorithms include:
Zero padding: Add zero entries to the borders of input.
Mirror/reflect/symmetric padding: Reflect the input array on the border.
Circular padding: Cycle the input array back to the opposite border, like a torus.
The exact output sizes produced by different combinations of kernel size, stride, and padding are somewhat intricate; we refer to (Dumoulin and Visin, 2018) for details.
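The relationship between these quantities can be summarized in a short helper that computes the output length along one spatial dimension; the floor formula below follows the conventions described by Dumoulin and Visin (2018), and the variable names are illustrative.

def conv_output_size(n, k, stride=1, padding=0, dilation=1):
    # n: input length, k: kernel length, padding: zeros added on each side
    k_eff = dilation * (k - 1) + 1   # effective kernel size once dilation is applied
    return (n + 2 * padding - k_eff) // stride + 1

print(conv_output_size(32, 3))                       # no padding ("valid"): 30
print(conv_output_size(32, 3, padding=1))            # "same" output size at stride 1: 32
print(conv_output_size(32, 3, stride=2, padding=1))  # strided convolution: 16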
Variants
Standard
The basic form of convolution as described above, where each kernel is applied to the entire input volume.
Depthwise separable
Depthwise separable convolution decomposes a single standard convolution into two steps: a depthwise convolution that filters each input channel independently, and a pointwise (1×1) convolution that combines the outputs of the depthwise step across channels. This factorization significantly reduces computational cost.
It was first developed by Laurent Sifre during an internship at Google Brain in 2013 as an architectural variation on AlexNet to improve convergence speed and model size.
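A simple parameter count illustrates the saving. The sketch below compares a standard convolution with its depthwise separable factorization for a layer mapping c_in input channels to c_out output channels with a k × k kernel, ignoring biases; the channel counts in the example are arbitrary.

def standard_conv_params(c_in, c_out, k):
    # each of the c_out filters spans all c_in input channels
    return c_out * c_in * k * k

def depthwise_separable_params(c_in, c_out, k):
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution that mixes channels
    return depthwise + pointwise

print(standard_conv_params(128, 256, 3))        # 294912 weights
print(depthwise_separable_params(128, 256, 3))  # 33920 weights, roughly 8.7x fewer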
Dilated
Dilated convolution, or atrous convolution, introduces gaps between kernel elements, allowing the network to capture a larger receptive field without increasing the kernel size.
Transposed
Transposed convolution, also known as deconvolution, fractionally strided convolution, and upsampling convolution, is a convolution where the output tensor is larger than its input tensor. It's often used in encoder-decoder architectures for upsampling. It's used in image generation, semantic segmentation, and super-resolution tasks.
History
The concept of convolution in neural networks was inspired by the visual cortex in biological brains. Early work by Hubel and Wiesel in the 1960s on the cat's visual system laid the groundwork for artificial convolution networks.
An early convolutional neural network was developed by Kunihiko Fukushima in 1969. It had mostly hand-designed kernels inspired by convolutions in mammalian vision. In 1979 he improved it to the Neocognitron, which learns all convolutional kernels by unsupervised learning (in his terminology, "self-organized by 'learning without a teacher'").
In 1998, Yann LeCun et al. introduced LeNet-5, an early influential CNN architecture for handwritten digit recognition, trained on the MNIST dataset.
(Olshausen & Field, 1996) discovered that simple cells in the mammalian primary visual cortex implement localized, oriented, bandpass receptive fields, which could be recreated by fitting sparse linear codes for natural scenes. This was later found to also occur in the lowest-level kernels of trained CNNs.
The field saw a resurgence in the 2010s with the development of deeper architectures and the availability of large datasets and powerful GPUs. AlexNet, developed by Alex Krizhevsky et al. in 2012, was a catalytic event in modern deep learning.
See also
Convolutional neural network
Pooling layer
Feature learning
Deep learning
Computer vision
References
Artificial neural networks
Computer vision
Deep learning | Convolutional layer | [
"Engineering"
] | 1,178 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
59,211,511 | https://en.wikipedia.org/wiki/Terminal%20investment%20hypothesis | The terminal investment hypothesis is the idea in life history theory that as an organism's residual reproductive value (or the total reproductive value minus the reproductive value of the current breeding attempt) decreases, its reproductive effort will increase. Thus, as an organism's prospects for survival decreases (through age or an immune challenge, for example), it will invest more in reproduction. This hypothesis is generally supported in animals, although results contrary to it do exist.
Definition
The terminal investment hypothesis posits that as residual reproductive value (measured as the total reproductive value minus the reproductive value of the current breeding attempt) decreases, reproductive effort increases. This is based on the cost of reproduction hypothesis, which says that an increase in resources dedicated to current reproduction decreases the potential for future reproduction. But, as the residual reproductive value decreases, the importance of this trade-off decreases, leading to increased investment in the current reproductive attempt. The terminal investment hypothesis can be framed as a balance equation for a yes–no decision about whether to increase reproductive effort, involving the total reproductive value, the reproductive value of the current breeding attempt, the proportionate increase in the value of the current attempt resulting from a positive decision, the proportionate loss in that value resulting from a negative decision, and the cost of a positive decision at which there is no selective pressure for either a positive or a negative decision (the "barely-justified cost"). The barely-justified cost is inversely proportional to the residual reproductive value. When the level of reproductive investment has not reached the point at which this balance holds, more positive decisions about reproductive effort will be made; thus, as the residual reproductive value decreases, more positive decisions are needed before the balance is reached.
In animals
In animals, most tests of the terminal investment hypothesis are correlations of age and reproductive effort, immune challenges on all age stages, and immune challenges on older ages versus younger ages. The last type of test is considered to be a more reliable measure of senescence's effect on reproductive effort, as younger individuals should reduce reproductive effort to reduce their chance of death because of their high future reproductive prospects, while older animals should increase effort because of their low future prospects. Overall, the terminal investment hypothesis is generally supported in a variety of animals.
In birds
A study on blue tits published in 2000 found that individuals injected with a human diphtheria–tetanus vaccine fed their nestlings less than those injected with a control solution. In a study published in 2004, house sparrows that were injected with a Newcastle disease vaccine were more likely to lay a replacement clutch after their first clutch had been artificially removed than those that were injected with a control solution. In a study published in 2006, old blue-footed boobies injected with lipopolysaccharides (to challenge the immune system) before laying fledged more young than normal, whereas young individuals fledged less than normal. An increase in maternal effort in immune challenged birds may be mediated by the hormone corticosterone; a study published in 2015 found that house wrens injected with lipopolysaccharides increased foraging, and that measurements of corticosterone from eggs laid after injection found a positive correlation of this hormone with maternal foraging rates.
In insects
A study published in 2009 supported the cost of reproduction and terminal investment hypotheses in the burying beetle. It found that beetles manipulated to overproduce young (by replacing a mouse carcass with a carcass) had shorter lifespans than those that bred on just carcasses, followed by those that had a carcass. In turn, non-breeding beetles had a significantly longer lifespan than those that bred. This supports the cost of reproduction hypothesis. Another experiment from the same study found beetles that first bred at 65 days had a larger brood size before dispersal (before the larvae start to pupate in the soil) than those that initially bred at 28 days. This supports the terminal investment hypothesis, and prevents the effect of an increased average brood size in older animals due to differential survival of quality individuals.
In flatworms
A study published in 2004 on the flatworm Diplostomum spathaceum found that as its intermediate host, a snail, aged, production of cercariae (which are passed on to the final host, a fish) decreased. This is in line with the bet hedging hypothesis, which, in this case, says that the flatworm should attempt to keep its host alive longer so that more young can be produced; it does not support the terminal investment hypothesis.
In mammals
A study published in 2002 found results contrary to the terminal investment hypothesis in reindeer. Calf weight peaked at the mother's seventh year of age, and declined thereafter. However, this would only be opposed to the hypothesis if reproductive costs did not increase with age. An alternative hypothesis, the senescence hypothesis, positing that reproductive output declines with age-related loss of function, was supported by the study. These two hypotheses are not necessarily mutually exclusive; a study on rhesus macaques published in 2010 strongly supported the senescence hypothesis and weakly supported the terminal investment hypothesis. It found that older mothers were lighter, less active, and had lighter infants with reduced survival rates compared to younger mothers (supporting the senescence hypothesis), but that older individuals spent more time in contact with their young (supporting the terminal investment hypothesis). Additionally, a study published in 1982 on red deer on the island of Rhum found that while older mothers produced less offspring (and lighter offspring, when they did) than expected for a given body weight, they had longer suckling bouts (which had previously been correlated with milk yield, calf body condition in early winter, and calf survival to spring) compared to younger mothers.
In reptiles
A study on spotted turtles published in 2008 found that individuals in very poor condition sometimes did not breed. This is consistent with the bet hedging hypothesis, and indicates decision making on a large temporal scale (as spotted turtles may live for 65 to 110 years). However, individuals in poor condition generally produced a relatively large amount of small eggs; consistent with the terminal investment hypothesis.
In plants
Although the terminal investment hypothesis has been relatively widely studied in animals, there have been few studies of the hypothesis' application to plants. One study on members of the long-lived oak genus Quercus found that trees declined in condition towards the end of their lifespan, and did not invest an increasing proportion of their decreasing resources in reproduction.
References
Game theory
Behavioral ecology | Terminal investment hypothesis | [
"Mathematics",
"Biology"
] | 1,335 | [
"Behavior",
"Evolutionary game theory",
"Behavioral ecology",
"Behavioural sciences",
"Game theory",
"Ethology"
] |
68,535,891 | https://en.wikipedia.org/wiki/Polymer%20devolatilization | Polymer devolatilization, also known as polymer degassing, is the process of removing low-molecular-weight components such as residual monomers, solvents, reaction by-products and water from polymers.
Motivation
When exiting a reactor after a polymerization reaction, many polymers still contain undesired low-molecular-weight components. These components may make the product unusable for further processing (for example, a polymer solution cannot be used directly for plastics processing), may be toxic, may cause poor sensory properties such as an unpleasant smell, or may worsen the properties of the polymer. It may also be desirable to recycle monomers and solvents back into the process. Plastic recycling can also involve removal of water and volatile degradation products.
Basic process types
Devolatilization can be carried out when a polymer is in the solid or liquid phase, with the volatile components going into a liquid or gas phase. Examples are:
Solid polymer, liquid phase: Extraction of caprolactam from polyamides with water.
Solid polymer, gas phase: Removal of ethylene from polyethylene via air or nitrogen in silos.
Liquid polymer, gas phase: Removal of styrene from polystyrene via vacuum.
It is usual for different types of devolatilization steps to be combined to overcome limitations in the individual steps.
Physical and chemical aspects
Thermodynamics
The thermodynamic activity of the volatiles needs to be higher in the polymer than in the other phase for them to leave the polymer. In order to design such a process, the activity needs to be calculated; this is usually done via the Flory–Huggins solution theory. The driving force can be increased by raising the temperature or by lowering the partial pressure of the volatile component in the surrounding phase, for example by applying an inert gas or vacuum.
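As a rough illustration of the thermodynamic side, the sketch below evaluates the Flory–Huggins expression for the activity of the volatile (solvent) component in the limit of very long polymer chains, ln a1 = ln(1 − φ2) + φ2 + χφ2², where φ2 is the polymer volume fraction and χ the interaction parameter; the numerical values are placeholders rather than data for any particular polymer–solvent pair.

import math

def solvent_activity(phi_polymer, chi):
    # Flory-Huggins activity of the volatile component for an effectively infinite chain length
    phi_solvent = 1.0 - phi_polymer
    ln_a1 = math.log(phi_solvent) + phi_polymer + chi * phi_polymer ** 2
    return math.exp(ln_a1)

# Residual volatile activity at increasing degrees of devolatilization (illustrative values)
for phi2 in (0.90, 0.99, 0.999):
    print(f"polymer fraction {phi2:5.3f}: volatile activity {solvent_activity(phi2, chi=0.4):.2e}")

The equilibrium partial pressure of the volatile then follows as this activity multiplied by its pure-component vapour pressure, which is why lowering the pressure or adding an inert gas shifts the equilibrium towards further removal.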
Diffusion
In order to be removed from the polymer, the volatile components need to travel to a phase boundary by diffusion. Because of the low diffusion coefficients of volatiles in polymers, this can be the rate-determining step. Diffusion can be accelerated by higher temperatures or by shortening the diffusion path, which raises the Fourier number reached within a given residence time.
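A back-of-the-envelope sketch shows why short diffusion paths matter: the time needed to reach a given Fourier number Fo = D·t/L² grows with the square of the characteristic length L. The diffusivity and film thicknesses below are illustrative placeholders, not measured values.

def time_for_fourier(D, L, Fo=1.0):
    # residence time needed for diffusion over length scale L to reach Fourier number Fo
    return Fo * L ** 2 / D

D = 1e-10  # m^2/s, an assumed diffusivity of a volatile in a polymer melt
for L in (1e-3, 1e-4, 1e-5):  # characteristic film half-thickness in metres
    print(f"L = {L:.0e} m  ->  t(Fo = 1) = {time_for_fourier(D, L):10.1f} s")

Reducing the film thickness by a factor of ten thus shortens the required residence time by a factor of one hundred, which is the rationale behind strand, film, and wiped-film devolatilizers.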
Heat transfer
Because polymers and polymer solutions often have a very high viscosity, the flow in devolatilizers is laminar, leading to low heat transfer coefficients, which can also be a limiting factor.
Chemical stability
Higher temperatures can also affect the chemical stability of the polymer and thus its use properties. If a polymer's ceiling temperature is exceeded, it will partially revert to its monomers, destroying its usability. More generally, polymer degradation also occurs during devolatilization, limiting the temperature and residence time available for the process.
Foam vs. film devolatilization
There are two basic forms of devolatilization to a vacuum. In foam devolatilization, bubbles inside the polymer solution nucleate and grow, finally bursting and releasing their volatile content to the surroundings. This requires sufficient vapor pressure. If possible, this is a very efficient method because the volatiles only need to diffuse a short way.
Film devolatilization occurs when there is no longer sufficient vapor pressure to generate bubbles, and relies on sufficient surface area and good mixing. In this case, a stripping agent such as nitrogen may be added to the polymer to improve mass transfer through bubble formation.
Types of devolatilizers for polymer melt
Devolatilizers for polymer melts are classified as static or moving, also called "still" and "rotating" in the literature.
Static devolatilizers
Static devolatilizers include:
Falling strand devolatilizers: Polymer is partitioned into many individual strands which fall down in a vacuum chamber. Diffusion moves volatiles into the gas phase, where they are collected via a vacuum system. This is usually the last stage of a devolatilizing process, when the vapor pressure is low.
Falling film evaporator: Polymer falls down vertical walls, volatiles diffusing on the side that is not in contact with the walls.
Tube evaporators: A boiling polymer solution flows downward in a vertical shell and tube heat exchanger into a separator. Polymer is collected at the bottom, vapor is collected via a vacuum system and condensers.
Flash evaporators: A polymer solution is preheated and brought into a separator, where pressure below the vapor pressure of the solution leads to a part of the volatiles evaporating.
Moving devolatilizers
Co-rotating twin screw extruders: The polymer solution is fed into a co-rotating twin screw extruder, where it is subjected to shear and mechanical energy input and where vapors are drawn off. This type of machine allows different pressures in different zones. An advantage is the self-cleaning action of these extruders.
Single-screw extruders: In principle similar to co-rotating twin screw extruders, without the self-cleaning action.
Wiped-film evaporators: Polymer solution is brought into a single large vessel, where a rotor agitates the product and creates surface renewal. Only a single pressure level is possible in these machines.
Large-volume kneaders: A polymer solution is brought into a large-volume kneader and subjected to shear at longer residence times than in an extruder.
Devolatilizers for suspensions and latexes
Removal of monomers and solvents from latex and suspensions, for example in the production of synthetic rubber, is usually done via stirred vessels.
References
Chemical engineering
Process engineering
Polymers | Polymer devolatilization | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,130 | [
"Process engineering",
"Chemical engineering",
"Mechanical engineering by discipline",
"nan",
"Polymer chemistry",
"Polymers"
] |
68,536,741 | https://en.wikipedia.org/wiki/Feebly%20interacting%20particle | Feebly interacting particles (FIPs) are subatomic particles defined by having extremely suppressed interactions with the Standard Model (SM) bosons and / or fermions. These particles are potential thermal dark matter candidates, extending the model of weakly interacting massive particles (WIMPs) to include weakly interacting sub-eV particles (WISPs) and others. FIP physics is also known as dark-sector physics.
Candidates
FIP candidates could be massive (FIMP / WIMP) or massless and coupled to the SM particles through some minimal coupling strength.
The light FIPs are theorized to be dark matter candidates, and they provide an explanation for the origin of neutrino masses and CP symmetry in strong interactions.
Neutrinos technically qualify as FIPs, but usually when the acronym "FIP" is used, it is intended to refer to some other, as-yet unknown particle.
Cai, Cacciapaglia, and Lee (2022) proposed massive gravitons as feebly interacting particle candidates.
See also
WIMP – weakly interacting massive particle
WISP – weakly interacting sub-eV / slight / slender particle
References
Dark matter
Hypothetical particles
Physics beyond the Standard Model
Astroparticle physics
Exotic matter | Feebly interacting particle | [
"Physics",
"Astronomy"
] | 251 | [
"Dark matter",
"Hypothetical particles",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Theoretical physics",
"Astroparticle physics",
"Unsolved problems in physics",
"Astrophysics",
"Subatomic particles",
"Particle physics",
"Exotic matter",
"Particle physics stubs",
"Theoretic... |
68,542,438 | https://en.wikipedia.org/wiki/Phosphate%20phosphite | A phosphate phosphite is a chemical compound or salt that contains phosphate and phosphite anions (PO43- and PO33-). These are mixed anion compounds or mixed valence compounds. Some have third anions.
Phosphate phosphites frequently occur as metal organic framework (MOF) compounds which are of research interest for gas storage, detection or catalysis. In these phosphate and phosphite form bridging ligands to hard metal ions. Protonated amines are templates.
Naming
A phosphate phosphite compound may also be called a phosphite phosphate.
Production
Phosphate phosphite compounds are frequently produced by hydrothermal synthesis, in which a water solution of ingredients is enclosed in a sealed container and heated. Phosphate may be reduced to phosphite or phosphite oxidised to phosphate in this process.
Properties
On heating,
Related
Related to these are the nitrite nitrates and arsenate arsenites.
List
References
Phosphites
Phosphates
Mixed anion compounds | Phosphate phosphite | [
"Physics",
"Chemistry"
] | 218 | [
"Matter",
"Mixed anion compounds",
"Salts",
"Phosphates",
"Ions"
] |
68,543,797 | https://en.wikipedia.org/wiki/Arsenate%20arsenite | An arsenate arsenite is a chemical compound or salt that contains arsenate and arsenite anions (AsO43- and AsO33-). These are mixed anion compounds or mixed valence compounds. Some have third anions. Most known substances are minerals, but a few artificial arsenate arsenite compounds have been made. Many of the minerals are in the Hematolite Group.
An arsenate arsenite compound may also be called an arsenite arsenate.
Properties
Some members of this group of materials, such as mcgovernite, have an extremely large unit-cell dimension of 204 Å.
Related
Mixed valence pnictide compounds related to the arsenate arsenites include the nitrite nitrates, and phosphate phosphites.
List
References
Arsenates
Arsenites
Mixed anion compounds | Arsenate arsenite | [
"Physics",
"Chemistry"
] | 178 | [
"Ions",
"Matter",
"Mixed anion compounds"
] |
64,198,260 | https://en.wikipedia.org/wiki/Ginsenoside%20Rb1 | Ginsenoside Rb1 (also abbreviated GRb1) is a chemical compound belonging to the ginsenoside family.
Like other ginsenosides, it is found in the plant genus Panax (ginseng), and has a variety of potential health effects including anticarcinogenic, immunomodulatory, anti‐inflammatory, antiallergic, antiatherosclerotic, antihypertensive, and antidiabetic effects as well as antistress activity and effects on the central nervous system.
Pharmacological effects
A 1998 study by Seoul National University reported that GRb1 and GRg3 (ginsenosides Rb1 and Rg3) significantly attenuated glutamate-induced neurotoxicity by inhibiting the overproduction of nitric oxide synthase among some other findings regarding their neuroprotective properties.
In 2002, the Laboratory for Cancer Research in Rutgers University showed that GRb1 and GRg1 have neuroprotective effect for spinal cord neurons, while ginsenoside Re did not exhibit any activity. GRb1 and GRg1 are proposed to represent potentially effective therapeutic agents for spinal cord injuries.
The protection that GRg1 (ginsenoside Rg1) and GRb1 offer against Alzheimer’s disease symptoms in mice was first published by researchers in 2015. The GRg1 affected three metabolic pathways: the metabolism of lecithin, amino acids and sphingolipids, while GRb1 treatment affected lecithin and amino acid metabolism.
It was reported in 2017 that GRb1 improved cardiac function and remodelling in heart failure in mice. The treatment of H-ginsenoside Rb1 potentially attenuated cardiac hypertrophy and myocardial fibrosis.
Proposed biosynthesis
The biosynthesis of GRb1 in Panax ginseng starts from farnesyl diphosphate (FPP), which is converted to squalene with squalene synthase (SQS), then to 2,3-oxidosqualene with squalene epoxidase (SE).
The 2,3-oxidosqualene is then converted to dammarenediol-II by cyclization, with dammarenediol-II synthase (DS) as the catalyst. The dammarenediol-II is converted to protopanaxadiol and then to ginsenoside Rd.
Finally, GRb1 is synthesized from ginsenoside Rd, catalysed by UDPG:ginsenoside Rd glucosyltransferase (UGRdGT), a biosynthetic enzyme of GRb1 first discovered in 2005.
References
Biosynthesis
Triterpene glycosides | Ginsenoside Rb1 | [
"Chemistry"
] | 595 | [
"Biosynthesis",
"Metabolism",
"Chemical synthesis"
] |
64,198,506 | https://en.wikipedia.org/wiki/Peroxydiphosphoric%20acid | Peroxydiphosphoric acid (H4P2O8) is an oxyacid of phosphorus. Its salts are known as peroxydiphosphates. It is one of two peroxyphosphoric acids, along with peroxymonophosphoric acid.
History
Both peroxyphosphoric acids were first synthesized and characterized in 1910 by Julius Schmidlin and Paul Massini, where peroxydiphosphoric acid was obtained in poor yields from the reaction between diphosphoric acid and highly-concentrated hydrogen peroxide.
H4P2O7 + H2O2 -> H4P2O8 + H2O
Preparation
Peroxydiphosphoric acid can be prepared by the reaction between phosphoric acid and fluorine, with peroxymonophosphoric acid being a by-product.
2H3PO4 + F2 -> H4P2O8 + 2HF
The compound is not commercially available and must be prepared as needed. Peroxodiphosphates can be obtained by electrolysis of phosphate solutions.
Properties
Peroxydiphosphoric acid is a tetraprotic acid, with acid dissociation constants given by pKa1 ≈ −0.3, pKa2 ≈ 0.5, pKa3 = 5.2 and pKa4 = 7.6. In aqueous solution, it disproportionates upon heating to peroxymonophosphoric acid and phosphoric acid.
H4P2O8 + H2O <=> H3PO5 + H3PO4
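For illustration, the distribution of the five protonation states of a tetraprotic acid at a given pH follows from the stepwise dissociation constants; the sketch below applies standard speciation algebra to the approximate pKa values quoted above, neglecting activity corrections.

def speciation(pH, pKas):
    # fractions of H4A, H3A-, H2A2-, HA3-, A4- for a tetraprotic acid at a given pH
    h = 10.0 ** (-pH)
    Kas = [10.0 ** (-pKa) for pKa in pKas]
    terms = [h ** len(Kas)]          # fully protonated form
    running_product = 1.0
    for i, Ka in enumerate(Kas):
        running_product *= Ka
        terms.append(running_product * h ** (len(Kas) - 1 - i))
    total = sum(terms)
    return [t / total for t in terms]

pKas = [-0.3, 0.5, 5.2, 7.6]  # approximate pKa1..pKa4 given above
for pH in (1, 4, 7, 9):
    print(pH, [f"{fraction:.2f}" for fraction in speciation(pH, pKas)])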
References
Phosphorus oxoacids
Mineral acids | Peroxydiphosphoric acid | [
"Chemistry"
] | 359 | [
"Acids",
"Inorganic compounds",
"Mineral acids"
] |
41,378,045 | https://en.wikipedia.org/wiki/Reproductive%20Toxicology%20%28journal%29 | Reproductive Toxicology is a peer-reviewed journal published bimonthly by Elsevier which focuses on the effects of toxic substances on the reproductive system. The journal was established in 1987 and is affiliated with the European Teratology Society. According to the Journal Citation Reports, the journal has a 2023 impact factor of 3.3.
References
Toxicology journals
Elsevier academic journals
Academic journals established in 1987
Bimonthly journals
English-language journals | Reproductive Toxicology (journal) | [
"Environmental_science"
] | 89 | [
"Toxicology journals",
"Toxicology"
] |
49,088,255 | https://en.wikipedia.org/wiki/Arnold%E2%80%93Beltrami%E2%80%93Childress%20flow | The Arnold–Beltrami–Childress (ABC) flow or Gromeka–Arnold–Beltrami–Childress (GABC) flow is a three-dimensional incompressible velocity field which is an exact solution of Euler's equation. Its representation in Cartesian coordinates is the following:
$$\dot{x} = A \sin z + C \cos y,$$
$$\dot{y} = B \sin x + A \cos z,$$
$$\dot{z} = C \sin y + B \cos x,$$
where $(\dot{x}, \dot{y}, \dot{z})$ is the material derivative of the Lagrangian motion of a fluid parcel located at $(x(t), y(t), z(t))$.
This ABC flow was analyzed by Dombre et al. 1986, who gave it the name A-B-C because this example was independently introduced by Arnold (1965) and Childress (1970) as an interesting class of Beltrami flows. For some values of the parameters, e.g., A = B = 0, this flow is very simple because particle trajectories are helical screw lines. For some other values of the parameters, however, these flows are ergodic and particle trajectories are everywhere dense. The latter result is a counterexample to statements in some traditional textbooks on fluid mechanics that vortex lines are either closed or cannot end in the fluid: because the ABC flow satisfies $\nabla \times \mathbf{v} = \mathbf{v}$, vortex lines coincide with the particle trajectories and are therefore also everywhere dense for some values of the parameters A, B, and C.
It is notable as a simple example of a fluid flow that can have chaotic trajectories.
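The chaotic behaviour is easy to reproduce numerically. The sketch below integrates a single fluid-parcel trajectory of the ABC flow with a fixed-step fourth-order Runge–Kutta scheme; the parameter values A = B = C = 1, the step size, and the initial condition are illustrative choices rather than canonical ones.

import math

def abc_velocity(p, A=1.0, B=1.0, C=1.0):
    x, y, z = p
    return (A * math.sin(z) + C * math.cos(y),
            B * math.sin(x) + A * math.cos(z),
            C * math.sin(y) + B * math.cos(x))

def rk4_step(p, dt):
    def shift(q, k, s):  # q + s * k, componentwise
        return tuple(qi + s * ki for qi, ki in zip(q, k))
    k1 = abc_velocity(p)
    k2 = abc_velocity(shift(p, k1, dt / 2))
    k3 = abc_velocity(shift(p, k2, dt / 2))
    k4 = abc_velocity(shift(p, k3, dt))
    return tuple(pi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for pi, a, b, c, d in zip(p, k1, k2, k3, k4))

p = (0.1, 0.0, 0.0)       # illustrative initial parcel position
for _ in range(10_000):   # integrate up to t = 100 with dt = 0.01
    p = rk4_step(p, 0.01)
print(p)

Integrating two nearby initial positions and watching their separation grow gives a quick, informal check of the sensitivity to initial conditions described above.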
It is named after Vladimir Arnold, Eugenio Beltrami, and Stephen Childress. The name of Ippolit S. Gromeka (1881) has historically been neglected, although much of the analysis was first carried out by him.
See also
Beltrami flow
References
V. I. Arnold. "Sur la topologie des ecoulements stationnaires des fluides parfaits". C. R. Acad. Sci. Paris, 261:17–20, 1965.
Chaos theory
Fluid dynamics
Differential equations | Arnold–Beltrami–Childress flow | [
"Chemistry",
"Mathematics",
"Engineering"
] | 397 | [
"Chemical engineering",
"Mathematical objects",
"Differential equations",
"Equations",
"Piping",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
49,093,735 | https://en.wikipedia.org/wiki/Wyman-Gordon%2050%2C000%20ton%20forging%20press | The Wyman-Gordon 50,000-ton forging press is a forging press located at the Wyman-Gordon Grafton Plant that was built as part of the Heavy Press Program by the United States Air Force. It was manufactured by Loewy Hydropress of Pittsburgh, Pennsylvania and began operation in October, 1955.
References
External links
Photographs of the press
Wyman-Gordon
Historic American Engineering Record in Massachusetts
Industrial machinery | Wyman-Gordon 50,000 ton forging press | [
"Engineering"
] | 86 | [
"Industrial machinery"
] |
51,425,999 | https://en.wikipedia.org/wiki/DNA%20replication%20stress | DNA replication stress refers to the state of a cell whose genome is exposed to various stresses. The events that contribute to replication stress occur during DNA replication, and can result in a stalled replication fork.
There are many events that contribute to replication stress, including:
Misincorporation of ribonucleotides
Unusual DNA structures
Conflicts between replication and transcription
Insufficiency of essential replication factors
Common fragile sites
Overexpression or constitutive activation of oncogenes
Chromatin inaccessibility
ATM and ATR are proteins that help to alleviate replication stress. Specifically, they are kinases that are recruited and activated by DNA damage. The stalled replication fork can collapse if these regulatory proteins fail to stabilize it. When this occurs, reassembly of the fork is initiated in order to repair the damaged DNA end.
Replication fork
The replication fork consists of a group of proteins that influence the activity of DNA replication. In order for the replication fork to stall, the cell must possess a certain number of stalled forks and arrest length. The replication fork is specifically paused due to the stalling of helicase and polymerase activity, which are linked together. In this situation, the fork protection complex (FPC) is recruited to help maintain this linkage.
In addition to stalling and maintaining the fork structure, protein phosphorylation can also create a signal cascade for replication restart. The protein Mrc1, which is part of the FPC, transmits the checkpoint signal by interacting with kinases throughout the cascade. When there is a loss of these kinases (from replication stress), an excess of ssDNA is produced, which is necessary for the restarting of replication.
Replication block removal
DNA interstrand cross-links (ICLs) cause replication stress by blocking replication fork progression. This blockage leads to failure of DNA strand separation and a stalled replication fork. Repair of ICLs can be accomplished by sequential incisions, and homologous recombination. In vertebrate cells, replication of an ICL-containing chromatin template triggers recruitment of more than 90 DNA repair and genome maintenance factors. Analysis of the proteins recruited to stalled replication forks revealed a specific set of DNA repair factors involved in the replication stress response. Among these proteins, SLF1 and SLF2 were found to physically link the SMC5/6 DNA repair protein complex to RAD18. The SMC5/6 complex is employed in homologous recombination, and its linkage to RAD18 likely allows recruitment of SMC5/6 to ubiquitination products at sites of DNA damage.
Replication-coupled repair
Mechanisms that process damaged DNA in coordination with the replisome in order to maintain replication fork progression are considered to be examples of replication-coupled repair. In addition to the repair of DNA interstrand crosslinks, indicated above, multiple DNA repair processes operating in overlapping layers can be recruited to faulty sites depending on the nature and location of the damage. These repair processes include (1) removal of misincorporated bases; (2) removal of misincorporated ribonucleotides; (3) removal of damaged bases (e.g. oxidized or methylated bases) that block the replication polymerase; (4) removal of DNA-protein crosslinks; and (5) removal of double-strand breaks. Such repair pathways can function to protect stalled replication forks from degradation and allow restart of broken forks, but when deficient can cause replication stress.
Single-strand break repair
Single-strand breaks are one of the most common forms of endogenous DNA damage. Replication fork collapse at leading-strand nicks generates resected single-ended double-strand breaks that can be repaired by homologous recombination.
Causation
Replication stress is induced from various endogenous and exogenous stresses, which are regularly introduced to the genome. These stresses include, but are not limited to, DNA damage, excessive compacting of chromatin (preventing replisome access), over-expression of oncogenes, or difficult-to-replicate genome structures. Replication stress can lead to genome instability, cancer, and ageing. Uncoordinated replication–transcription conflicts and unscheduled R-loop accumulation are significant contributors.
Specific events
The events that lead to genome instability occur in the cell cycle prior to mitosis, specifically in the S phase. Disturbance to this phase can generate negative effects, such as inaccurate chromosomal segregation, for the upcoming mitotic phase. The two processes that are responsible for damage to the S phase are oncogenic activation and tumor suppressor inactivation. They have both been shown to speed up the transition from the G1 phase to the S phase, leading to inadequate amounts of DNA replication components. These losses can contribute to the DNA damage response (DDR). Replication stress can be an indicative characteristic for carcinogenesis, which typically lacks DNA repair systems. A physiologically short duration of the G1 phase is also typical of fast replicating progenitors during early embryonic development.
Applications in cancer
Normal replication stress occurs at low to mild levels and induces genomic instability, which can lead to tumorigenesis and cancer progression. However, high levels of replication stress have been shown to kill cancer cells.
In one study, researchers sought to determine the effects of inducing high levels of replication stress on cancer cells. The results showed that with further loss of checkpoints, replication stress is increased to a higher level. With this change, the DNA replication of cancer cells may be incomplete or incorrect when entering into the mitotic phase, which can eventually result in cell death through mitotic catastrophe.
Another study examined how replication stress affected APOBEC3B activity. APOBEC3 (apolipoprotein B mRNA editing enzyme, catalytic polypeptide-like 3) has been seen to mutate the cancer genome in various cancer types. Results from this study show that weakening oncogenic signaling or intensifying DNA replication stress can alter carcinogenic potential, and can be manipulated therapeutically.
References
DNA replication
Molecular genetics | DNA replication stress | [
"Chemistry",
"Biology"
] | 1,261 | [
"DNA replication",
"Molecular genetics",
"Genetics techniques",
"Molecular biology"
] |
47,349,294 | https://en.wikipedia.org/wiki/Subjective%20expected%20relative%20similarity | Subjective expected relative similarity (SERS) is a normative and descriptive theory that predicts and explains cooperation levels in a family of games termed Similarity Sensitive Games (SSG), among them the well-known Prisoner's Dilemma game (PD). SERS was originally developed in order to (i) provide a new rational solution to the PD game and (ii) to predict human behavior in single-step PD games. It was further developed to account for: (i) repeated PD games, (ii) evolutionary perspectives and, as mentioned above, (iii) the SSG subgroup of 2×2 games.
SERS predicts that individuals cooperate whenever their subjectively perceived similarity with their opponent exceeds a situational index derived from the game's payoffs, termed the similarity threshold of the game. SERS proposes a solution to the rational paradox associated with the single step PD and provides accurate behavioral predictions. The theory was developed by Prof. Ilan Fischer at the University of Haifa.
The Prisoner's Dilemma
The dilemma is described by a 2 × 2 payoff matrix that allows each player to choose between a cooperative and a competitive (or defective) move. If both players cooperate, each player obtains the reward (R) payoff. If both defect, each player obtains the punishment (P) payoff. However, if one player defects while the other cooperates, the defector obtains the temptation (T) payoff and the cooperator obtains the sucker's (S) payoff, where $T > R > P > S$ (and $2R > T + S$, assuring that sharing the payoffs awarded for uncoordinated choices does not exceed the payoff obtained by mutual cooperation).
Given the payoff structure of the game (see Table 1), each individual player has a dominant strategy of defection. This dominant strategy yields a better payoff regardless of the opponent's choice. By choosing to defect, players protect themselves from exploitation and retain the option to exploit a trusting opponent. Because this is the case for both players, mutual defection is the only Nash equilibrium of the game. However, this is a deficient equilibrium (since mutual cooperation results in a better payoff for both players).
The PD game payoff matrix (row player's payoff listed first):

                 Cooperate    Defect
    Cooperate    R, R         S, T
    Defect       T, S         P, P
The repeated prisoner's dilemma
Players that knowingly interact for several games (where the end point of the game is unknown), thus playing a repeated Prisoner's Dilemma game, may still be motivated to cooperate with their opponent while attempting to maximise their payoffs along the entire set of their repeated games. Such players face a different challenge of choosing an efficient and lucrative strategy for the repeated play. This challenge may become more complex when individuals are embedded in an ecology, having to face many opponents with various and unknown strategies.
The SERS theory
SERS assumes that the similarity between the players is subjectively and individually perceived (denoted as $p_s$, where $0 \le p_s \le 1$). Two players confronting each other may have either identical or different perceptions of their similarity to their opponent. In other words, similarity perceptions need neither be symmetric nor correspond to formal logic constraints. After perceiving $p_s$, each player chooses between cooperation and defection, attempting to maximize the expected outcome. This means that each player estimates his or her expected payoffs under each of two possible courses of action. The expected value of cooperation is given by $p_s R + (1 - p_s) S$ and the expected payoff of defection is given by $p_s P + (1 - p_s) T$. Hence, cooperation provides a higher expected payoff whenever $p_s R + (1 - p_s) S > p_s P + (1 - p_s) T$, which may also be expressed further as:
Cooperate if $p_s > \frac{T - S}{(T - S) + (R - P)}$. Defining $p_s^* = \frac{T - S}{(T - S) + (R - P)}$, we obtain a simple decision rule: cooperate whenever $p_s > p_s^*$, where $p_s$ denotes the level of perceived similarity with the opponent, and $p_s^*$ denotes the similarity threshold derived from the payoff matrix.
To illustrate, consider a PD payoff matrix whose payoffs yield a similarity threshold of $p_s^* = 0.71$. A player perceiving the similarity with the opponent, $p_s$, as exceeding 0.71 should then cooperate in order to maximise his or her expected payoff.
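The resulting decision rule is a one-line comparison. The sketch below computes the similarity threshold from the four payoffs and returns the payoff-maximising choice for a given level of perceived similarity; the payoff values in the example are illustrative and are not taken from any particular experiment.

def similarity_threshold(T, R, P, S):
    # similarity threshold p_s* of a Prisoner's Dilemma with T > R > P > S
    return (T - S) / ((T - S) + (R - P))

def sers_choice(perceived_similarity, T, R, P, S):
    # cooperate when the expected payoff of cooperation exceeds that of defection
    return "cooperate" if perceived_similarity > similarity_threshold(T, R, P, S) else "defect"

T, R, P, S = 10, 7, 3, 0                           # illustrative payoffs
print(round(similarity_threshold(T, R, P, S), 2))  # 0.71
print(sers_choice(0.80, T, R, P, S))               # cooperate
print(sers_choice(0.50, T, R, P, S))               # defect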
Empirical evidence
Several experiments were conducted to test whether SERS provides not only a normative theory but also a descriptive theory of human behaviour.
For example, an experiment involving 215 university undergraduates revealed average cooperation rates of 30% and 46% for two payoff matrices with different similarity thresholds.
Participants cooperated 47% under high level of induced similarity and only 29% under low level of induced similarity.
Manipulating the perceived similarity of the opponent increased cooperation from 67% to 80% for the lower similarity threshold and from 40% to 70% for the higher similarity threshold.
Other experiments with various similarity induction methods and payoff matrices further confirmed SERS's status as a descriptive theory of human behaviour.
The SERS theory for Repeated PD Games
Experiments on the impact of SERS on repeated games are presently being conducted and analysed at the University of Haifa and the Max Planck Institute for Research on Collective Goods in Bonn.
Similarity sensitive games
The PD game is not the only similarity sensitive game. Games for which the choice of the action with the higher expected value depends on the value of $p_s$ are defined as Similarity Sensitive Games (SSGs), whereas others are non-similarity sensitive. Focusing only on the 24 completely rank-ordered and symmetric games, we can mark 12 SSGs. After eliminating games that reflect permutations of other games generated either by switching rows, columns, or both rows and columns, we are left with six basic (completely rank-ordered and symmetric) SSGs.
These are games for which SERS provides a rational and payoff-maximizing strategy that recommends which alternative to choose for any given perception of similarity with the opponent.
Mimicry and Relative Similarity (MaRS)
Developing the SERS theory into an evolutionary strategy yields the Mimicry and Relative Similarity (MaRS) algorithm.
Fusing enacted and expected mimicry generates a powerful and cooperative mechanism that enhances fitness and reduces the risks associated with trust and cooperation. When conflicts take the form of repeated PD games, individuals get the opportunity to learn and monitor the extent of similarity with their opponents. They can then react by choosing whether to enact, expect, or exclude mimicry. This rather simple behavior has the capacity to protect individuals from exploitation and drive the evolution of cooperation within entire populations. MaRS paves the way for the induction of cooperation and supports the survival of other cooperative strategies. The existence of MaRS in heterogeneous populations helps those cooperative strategies that do not have the capacity of MaRS to combat hostile and random opponents. Despite the fact that MaRS cannot prevail in a duel with an unconditional defector, interacting within heterogeneous populations allows MaRS to fight unpredictable and hostile strategies and cooperate with cooperative ones, including itself. The operation of MaRS promotes cooperation, minimizes the extent of exploitation, and accounts for high fitness levels.
Testing the model in computer simulations of behavioral niches, populated with agents that enact various strategies and learning algorithms, shows how mimicry and relative similarity outperforms all the opponent strategies it was tested against, pushes noncooperative opponents toward extinction, and promotes the development of cooperative populations.
See also
Game theory
Nash Equilibrium
Tit for Tat
Win stay lose shift
References
Game theory | Subjective expected relative similarity | [
"Mathematics"
] | 1,449 | [
"Game theory"
] |
47,349,607 | https://en.wikipedia.org/wiki/Chlorine%20gas%20poisoning | Chlorine gas poisoning is an illness resulting from the effects of exposure to chlorine beyond the threshold limit value. Acute chlorine gas poisoning primarily affects the respiratory system, causing difficulty breathing, cough, irritation of the eyes, nose, and throat, and sometimes skin irritation. Higher exposures can lead to severe lung damage, such as toxic pneumonitis or pulmonary edema, with concentrations around 400ppm and beyond potentially fatal. Chronic exposure to low levels can result in respiratory issues like asthma and chronic cough. Common exposure sources include occupational settings, accidental chemical mixing, and industrial accidents. Diagnosis involves tests like pulse oximetry, chest radiography, and pulmonary function tests. Treatment is supportive, with no antidote, and involves oxygen and bronchodilators for lung damage. Most individuals with mild exposure recover within a few days, though some may develop long-term respiratory issues.
Signs and symptoms
The signs of acute chlorine gas poisoning are primarily respiratory, and include difficulty breathing and cough; listening to the lungs will generally reveal crackles. There will generally be sneezing, nose irritation, burning sensations, and throat irritations. There may also be skin irritations or chemical burns and eye irritation or conjunctivitis. A person with chlorine gas poisoning may also have nausea, vomiting, or a headache.
Chronic exposure to relatively low levels of chlorine gas may cause pulmonary problems like acute wheezing attacks, chronic cough with phlegm, and asthma.
Causes
Occupational exposures constitute the highest risk of toxicity and common domestic exposures result from the mixing of chlorine bleach with acidic washing agents such as acetic, nitric or phosphoric acid. They also occur as a result of the chlorination of table water. Other exposure risks occur during industrial or transportation accidents. Wartime exposure is rare.
Dose toxicity
Humans can smell chlorine gas at ranges from 0.1–0.3 ppm. According to a review from 2010: "At 1–3 ppm, there is mild mucous membrane irritation that can usually be tolerated for about an hour. At 5–15 ppm, there is moderate mucous membrane irritation. At 30 ppm and beyond, there is immediate chest pain, shortness of breath, and cough. At approximately 40–60 ppm, a toxic pneumonitis and/or acute pulmonary edema can develop. Concentrations of about 400 ppm and beyond are generally fatal over 30 minutes, and at 1,000 ppm and above, fatality ensues within only a few minutes."
Mechanism
The concentration of the inhaled gas and duration of exposure and water contents of the tissues exposed are the key determinants of toxicity; moist tissues like the eyes, throat, and lungs are the most susceptible to damage.
Once inhaled, chlorine gas diffuses into the epithelial lining fluid (ELF) of the respiratory epithelium and may directly interact with small molecules, proteins and lipids there and damage them, or may hydrolyze to hypochlorous acid and hydrochloric acid which in turn generate chloride ions and reactive oxygen species; the dominant theory is that most damage is via the acids.
Diagnosis
Tests performed to confirm chlorine gas poisoning and monitor patients for supportive care include pulse oximetry, testing serum electrolyte, blood urea nitrogen (BUN), and creatinine levels, measuring arterial blood gases, chest radiography, electrocardiogram (ECG), pulmonary function testing, and laryngoscopy or bronchoscopy.
Treatment
There is no antidote for chlorine poisoning; management is supportive after evacuating people from the site of exposure and flushing exposed tissues. For lung damage caused by inhalation, oxygen and bronchodilators may be administered.
Outcomes
There is no way to predict outcomes. Most people with mild to moderate exposure generally recover fully in three to five days, but some develop chronic problems such as reactive airway disease. Smoking or pre-existing lung conditions like asthma appear to increase the risk of long term complications.
Epidemiology
In 2014, the American Association of Poison Control Centers reported about 6,000 exposures to chlorine gas in the US in 2013, compared with 13,600 exposures to carbon monoxide, which was the most common poison gas exposure; the year before they reported about 5,500 cases of chlorine gas poisoning compared with around 14,300 cases of carbon monoxide poisoning.
Mass poisoning incidents
Wartime
In 1915, the German Army used chlorine against Allied soldiers in the 2nd Battle of Ypres.
In 2007, chlorine was used by insurgents in the Iraqi insurgency (2003–11).
In 2014 chlorine was allegedly used in Kafr Zita, Syria.
Industrial accidents
United States
There have been many instances of mass chlorine gas poisonings in industrial accidents.
In 2002 in Missouri, a flex hose ruptured during unloading a train car at a chemical plant, releasing approximately of chlorine gas. 67 persons were injured.
In 2004 in Macdona, Texas, a freight train accident released of chlorine gas and other toxic chemicals. At least 40 people were injured and three died, including two residents and the train conductor.
In 2005 in South Carolina a freight train derailed, releasing an estimated of chlorine. Nine people died, and at least 529 persons sought medical care.
Globally
In 2015, in Nigeria, the explosion of a chlorine gas storage tank at a water treatment plant in Jos killed eight people.
In 2017, chlorine gas was released in Fort McMurray, Alberta, Canada, after chemicals were mixed improperly at a water treatment plant. In 2020 the Regional Municipality of Wood Buffalo was fined $150,000 (CAD) for the incident.
In 2017, in Iran, at least 475 people, including nine firemen, suffered respiratory and other symptoms after a chlorine gas leak in the southwestern Iranian province of Khuzestan.
In 2020, on March 6, an incident occurred at EPCL (Engro Polymer and Chemicals Limited) Port Qasim, Karachi, where over 50 people were hospitalized as a result of chlorine gas leakage. No fatalities were reported.
In 2022, on June 27, a tank holding chlorine gas in the port of Aqaba, Jordan, fell and ruptured. 14 people were killed and more than 260 were injured.
References
Further reading
External links
Toxic effects of substances chiefly nonmedicinal as to source
Gases
Medical emergencies
Industrial hygiene | Chlorine gas poisoning | [
"Physics",
"Chemistry",
"Environmental_science"
] | 1,353 | [
"Matter",
"Toxicology",
"Phases of matter",
"Toxic effects of substances chiefly nonmedicinal as to source",
"Statistical mechanics",
"Gases"
] |
47,350,023 | https://en.wikipedia.org/wiki/RDH13 | Retinol dehydrogenase 13 (all-trans/9-cis) is a protein that in humans is encoded by the RDH13 gene. This gene encodes a mitochondrial short-chain dehydrogenase/reductase, which catalyzes the reduction and oxidation of retinoids. The encoded enzyme may function in retinoic acid production and may also protect the mitochondria against oxidative stress. Alternatively spliced transcript variants have been described.
Gene
The human RDH13 gene is on the 19th chromosome, with its specific localization being 19q13.42. The gene contains 12 exons in total.
Structure
The analysis of the submitochondrial localization of RDH13 indicates its association with the inner mitochondrial membrane. The primary structure of RDH13 contains two hydrophobic segments, 2–21 and 242–261, which are sufficiently long to serve as transmembrane segments; however, alkaline extraction completely removes the protein from the membrane, indicating that RDH13 is a peripheral membrane protein. The peripheral association of RDH13 with the membrane further distinguishes this protein from the microsomal retinaldehyde reductases, which are integral membrane proteins that appear to be anchored in the membrane via their N-terminal hydrophobic segments.
Function
RDH13 is most closely related to the NADP+-dependent microsomal enzymes RDH11, RDH12 and RDH14. Purified RDH13 acts on retinoids in an oxidative-reductive manner, and strongly prefers the cofactor NADPH over NADH. Moreover, RDH13 has much more efficient reductase activity than dehydrogenase activity. As a retinaldehyde reductase, RDH13 is significantly less active than the related protein RDH11, primarily because of its much higher Km value for retinaldehyde. However, the kcat value of RDH13 for retinaldehyde reduction is comparable with that of RDH11, and the Km values of the two enzymes for NADPH are also very similar. Thus, consistent with its sequence similarity to RDH11, RDH12 and RDH14, RDH13 acts as an NADP+-dependent retinaldehyde reductase.
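The comparison above rests on standard Michaelis–Menten quantities. As a reminder of how Km and kcat combine, the sketch below evaluates the initial rate and the catalytic efficiency kcat/Km for two hypothetical enzymes that share the same kcat but differ tenfold in Km; the numbers are placeholders, not measured constants for RDH11 or RDH13.

def michaelis_menten_rate(kcat, Km, enzyme_conc, substrate_conc):
    # initial rate v = kcat * [E] * [S] / (Km + [S])
    return kcat * enzyme_conc * substrate_conc / (Km + substrate_conc)

def catalytic_efficiency(kcat, Km):
    return kcat / Km

enzymes = {"low-Km enzyme": (0.5, 1.0), "high-Km enzyme": (0.5, 10.0)}  # (kcat, Km), arbitrary units
for name, (kcat, Km) in enzymes.items():
    v = michaelis_menten_rate(kcat, Km, enzyme_conc=0.01, substrate_conc=2.0)
    print(name, round(v, 4), round(catalytic_efficiency(kcat, Km), 3))

At sub-saturating substrate concentrations the higher-Km enzyme turns over substrate much more slowly despite the identical kcat, which mirrors the relationship described for RDH13 relative to RDH11.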
RDH13 is localized in the mitochondria, which is different from the other members of this family, as they localize to the endoplasmic reticulum. The exact sequence targeting RDH13 to the mitochondria remains to be established.
Clinical significance
RDH13 is part of a subfamily of four retinol dehydrogenases, RDH11, RDH12, RDH13, and RDH14, that display dual-substrate specificity, uniquely metabolizing all-trans- and cis-retinols with C(15) pro-R specificity. The metabolites involved in these reactions are known as retinoids, which are chromophores involved in vision, transcriptional regulation, and cellular differentiation. RDH11-14 could be involved in the first step of all-trans- and 9-cis-retinoic acid production in many tissues. RDH11-14 fill the gap in our understanding of 11-cis-retinal and all-trans-retinal transformations in photoreceptor and retinal pigment epithelial cells. The dual-substrate specificity of this subfamily explains the minor phenotype associated with mutations in 11-cis-retinol dehydrogenase (RDH5) causing fundus albipunctatus in humans.
References
Further reading
Proteins | RDH13 | [
"Chemistry"
] | 773 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
55,865,352 | https://en.wikipedia.org/wiki/Seismic%20intensity%20scales | Seismic intensity scales categorize the intensity or severity of ground shaking (quaking) at a given location, such as resulting from an earthquake. They are distinguished from seismic magnitude scales, which measure the magnitude or overall strength of an earthquake, which may, or perhaps may not, cause perceptible shaking.
Intensity scales are based on the observed effects of the shaking, such as the degree to which people or animals were alarmed, and the extent and severity of damage to different kinds of structures or natural features. The maximal intensity observed, and the extent of the area where shaking was felt (see isoseismal map, below), can be used to estimate the location and magnitude of the source earthquake; this is especially useful for historical earthquakes where there is no instrumental record.
Ground shaking
Ground shaking can be caused in various ways (volcanic tremors, avalanches, large explosions, etc.), but shaking intense enough to cause damage is usually due to rupturing of the Earth's crust known as earthquakes. The intensity of shaking depends on several factors:
The "size" or strength of the source event, such as measured by various seismic magnitude scales.
The type of seismic wave generated, and its orientation.
The depth of the event.
The distance from the source event.
Site response due to local geology
Site response is especially important as certain conditions, such as unconsolidated sediments in a basin, can amplify ground motions as much as ten times.
Where an earthquake is not recorded on seismographs an isoseismal map showing the intensities felt at different areas can be used to estimate the location and magnitude of the quake. Such maps are also useful for estimating the shaking intensity, and thereby the likely level of damage, to be expected from a future earthquake of similar magnitude. In Japan this kind of information is used when an earthquake occurs to anticipate the severity of damage to be expected in different areas.
The intensity of local ground-shaking depends on several factors besides the magnitude of the earthquake, one of the most important being soil conditions. For instance, thick layers of soft soil (such as fill) can amplify seismic waves, often at a considerable distance from the source. At the same time, sedimentary basins will often resonate, increasing the duration of shaking. This is why, in the 1989 Loma Prieta earthquake, the Marina district of San Francisco was one of the most damaged areas, though it was nearly from the epicenter. Geological structures were also significant, such as where seismic waves passing under the south end of San Francisco Bay reflected off the base of the Earth's crust towards San Francisco and Oakland. A similar effect channeled seismic waves between the other major faults in the area.
History
The first simple classification of earthquake intensity was devised by Domenico Pignataro in the 1780s. The first recognizable intensity scale in the modern sense of the word was drawn up by the German mathematician Peter Caspar Nikolaus Egen in 1828. However, the first modern mapping of earthquake intensity was made by Robert Mallet, an Irish engineer who was sent by Imperial College, London, to research the December 1857 Basilicata earthquake, also known as The Great Neapolitan Earthquake of 1857. The first widely adopted intensity scale, the 10-grade Rossi–Forel scale, was introduced in the late 19th century. In 1902, the Italian seismologist Giuseppe Mercalli created the Mercalli Scale, a new 12-grade scale. Significant improvements were achieved, mainly by Charles Francis Richter during the 1950s, when (1) a correlation was found between seismic intensity and peak ground acceleration (PGA; see the equation that Richter found for California), and (2) buildings were classified by strength and subdivided into groups (called types of buildings), so that the seismic intensity could be evaluated from the degree of damage to a given type of structure. That gave the Mercalli Scale, as well as the European MSK-64 scale that followed, a quantitative element representing the vulnerability of the building type. Since then, that scale has been called the Modified Mercalli intensity scale (MMS) and the evaluations of seismic intensities are more reliable.
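One commonly quoted form of the California correlation mentioned above relates Modified Mercalli intensity I to peak ground acceleration a (in cm/s²) by log10 a = I/3 − 1/2. The sketch below applies this empirical relation in both directions; it should be read as approximate and region-dependent rather than as a universal conversion.

import math

def pga_from_intensity(mmi):
    # peak ground acceleration in cm/s^2 from Modified Mercalli intensity (empirical, California)
    return 10.0 ** (mmi / 3.0 - 0.5)

def intensity_from_pga(pga_cm_s2):
    # invert the same empirical relation
    return 3.0 * (math.log10(pga_cm_s2) + 0.5)

for mmi in (4, 6, 8, 10):
    a = pga_from_intensity(mmi)
    print(f"MMI {mmi:2d}: ~{a:6.0f} cm/s^2 (~{a / 981:.2f} g)")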
In addition, more intensity scales have been developed and are used in different parts of the world.
See also
Earthquake engineering
Peak ground acceleration
Seismic performance
Spectral acceleration
Notes
Sources
.
.
.
.
. Also available here (sections renumbered).
.
.
.
Further reading
External links
USGS ShakeMap Providing near-real-time maps of ground motion and shaking intensity following significant earthquakes.
Seismology measurement
Seismology
Earthquake engineering | Seismic intensity scales | [
"Engineering"
] | 949 | [
"Earthquake engineering",
"Civil engineering",
"Structural engineering"
] |
55,866,076 | https://en.wikipedia.org/wiki/Acylindrically%20hyperbolic%20group | In the mathematical subject of geometric group theory, an acylindrically hyperbolic group is a group admitting a non-elementary 'acylindrical' isometric action on some geodesic hyperbolic metric space. This notion generalizes the notions of a hyperbolic group and of a relatively hyperbolic group and includes a significantly wider class of examples, such as mapping class groups and Out(Fn).
Formal definition
Acylindrical action
Let G be a group with an isometric action on some geodesic hyperbolic metric space X. This action is called acylindrical if for every ε > 0 there exist R > 0 and N > 0 such that for every pair of points x, y ∈ X with d(x, y) ≥ R one has
|{g ∈ G : d(x, gx) ≤ ε and d(y, gy) ≤ ε}| ≤ N.
If the above property holds for a specific R, the action of G on X is called R-acylindrical. The notion of acylindricity provides a suitable substitute for being a proper action in the more general context where non-proper actions are allowed.
An acylindrical isometric action of a group G on a geodesic hyperbolic metric space X is non-elementary if G admits two independent hyperbolic isometries of X, that is, two loxodromic elements such that their fixed point sets on the Gromov boundary of X are disjoint.
It is known (Theorem 1.1 in ) that an acylindrical action of a group G on a geodesic hyperbolic metric space X is non-elementary if and only if this action has unbounded orbits in X and the group G is not a finite extension of a cyclic group generated by a loxodromic isometry of X.
Acylindrically hyperbolic group
A group G is called acylindrically hyperbolic if G admits a non-elementary acylindrical isometric action on some geodesic hyperbolic metric space X.
Equivalent characterizations
It is known (Theorem 1.2 in ) that for a group G the following conditions are equivalent:
The group G is acylindrically hyperbolic.
There exists a (possibly infinite) generating set S for G such that the Cayley graph Γ(G, S) is hyperbolic, and the natural translation action of G on Γ(G, S) is a non-elementary acylindrical action.
The group G is not virtually cyclic, and there exists an isometric action of G on a geodesic hyperbolic metric space X such that at least one element of G acts on X with the WPD ('Weakly Properly Discontinuous') property.
The group G contains a proper infinite 'hyperbolically embedded' subgroup.
History
Properties
Every acylindrically hyperbolic group G is SQ-universal, that is, every countable group embeds as a subgroup in some quotient group of G.
The class of acylindrically hyperbolic groups is closed under taking infinite normal subgroups, and, more generally, under taking 's-normal' subgroups. Here a subgroup H of a group G is called s-normal in G if for every g ∈ G one has |H ∩ g⁻¹Hg| = ∞.
If G is an acylindrically hyperbolic group and V = R or V = ℓ^p(G) with 1 ≤ p < ∞, then the second bounded cohomology H²_b(G, V) is infinite-dimensional.
Every acylindrically hyperbolic group G admits a unique maximal normal finite subgroup denoted K(G).
If G is an acylindrically hyperbolic group with K(G)={1} then G has infinite conjugacy classes of nontrivial elements, G is not inner amenable, and the reduced C*-algebra of G is simple with unique trace.
There is a version of small cancellation theory over acylindrically hyperbolic groups, allowing one to produce many quotients of such groups with prescribed properties.
Every finitely generated acylindrically hyperbolic group has cut points in all of its asymptotic cones.
For a finitely generated acylindrically hyperbolic group G, the probability that the simple random walk on G of length n produces a 'generalized loxodromic element' in G converges to 1 exponentially fast as n → ∞.
Every finitely generated acylindrically hyperbolic group G has exponential conjugacy growth, meaning that the number of distinct conjugacy classes of elements of G coming from the ball of radius n in the Cayley graph of G grows exponentially in n.
Examples and non-examples
Finite groups, virtually nilpotent groups and virtually solvable groups are not acylindrically hyperbolic.
Every non-elementary subgroup of a word-hyperbolic group is acylindrically hyperbolic.
Every non-elementary relatively hyperbolic group is acylindrically hyperbolic.
The mapping class group of a connected oriented surface of genus g with p punctures is acylindrically hyperbolic, except for the cases where g = 0 and p ≤ 3 (in those exceptional cases the mapping class group is finite).
For n ≥ 2 the group Out(Fn) is acylindrically hyperbolic.
By a result of Osin, every non-virtually-cyclic group G that admits a proper isometric action on a proper CAT(0) space with G having at least one rank-1 element is acylindrically hyperbolic. Caprace and Sageev proved that if G is a finitely generated group acting isometrically properly discontinuously and cocompactly on a geodesically complete CAT(0) cubical complex X, then either X splits as a direct product of two unbounded convex subcomplexes, or G contains a rank-1 element.
Every right-angled Artin group G, which is not cyclic and which is directly indecomposable, is acylindrically hyperbolic.
For n ≥ 3 the special linear group SL(n, Z) is not acylindrically hyperbolic (Example 7.5 in ).
For the Baumslag–Solitar group is not acylindrically hyperbolic. (Example 7.4 in )
Many groups admitting nontrivial actions on simplicial trees (that is, admitting nontrivial splittings as fundamental groups of graphs of groups in the sense of Bass–Serre theory) are acylindrically hyperbolic. For example, all one-relator groups on at least three generators are acylindrically hyperbolic.
Most 3-manifold groups are acylindrically hyperbolic.
References
Further reading
Group theory
Geometric group theory
Geometric topology
Geometry | Acylindrically hyperbolic group | [
"Physics",
"Mathematics"
] | 1,311 | [
"Geometric group theory",
"Group actions",
"Geometric topology",
"Group theory",
"Fields of abstract algebra",
"Topology",
"Geometry",
"Symmetry"
] |
55,872,661 | https://en.wikipedia.org/wiki/Category%20of%20representations | In representation theory, the category of representations of some algebraic structure A has the representations of A as objects and equivariant maps as morphisms between them. One of the basic thrusts of representation theory is to understand the conditions under which this category is semisimple; i.e., whether an object decomposes into simple objects (see Maschke's theorem for the case of finite groups).
The Tannakian formalism gives conditions under which a group G may be recovered from the category of representations of it together with the forgetful functor to the category of vector spaces.
The Grothendieck ring of the category of finite-dimensional representations of a group G is called the representation ring of G.
Definitions
Depending on the types of the representations one wants to consider, it is typical to use slightly different definitions.
For a finite group G and a field k, the category of representations of G over k has
Objects: Pairs (V, ρ) of a vector space V over k and a representation ρ of G on that vector space
Morphisms: Equivariant maps
Composition: The composition of equivariant maps
Identities: The identity function (which is an equivariant map).
The category is denoted by Rep(G) or Rep_k(G).
For a Lie group, one typically requires the representations to be smooth or admissible. For the case of a Lie algebra, see Lie algebra representation. See also: category O.
The category of modules over the group ring
There is an isomorphism of categories between the category of representations of a group G over a field k (described above) and the category of modules over the group ring k[G], denoted k[G]-Mod.
Category-theoretic definition
Every group G can be viewed as a category with a single object, where morphisms in this category are the elements of G and composition is given by the group operation; so G is the automorphism group of the unique object. Given an arbitrary category C, a representation of G in C is a functor from G to C. Such a functor sends the unique object to an object, say X, in C and induces a group homomorphism G → Aut(X); see Automorphism group#In category theory for more. For example, a G-set is equivalent to a functor from G to Set, the category of sets, and a linear representation is equivalent to a functor to Vect_k, the category of vector spaces over a field k.
In this setting, the category of linear representations of G over k is the functor category G → Vect_k, which has natural transformations as its morphisms.
Properties
The category of linear representations of a group has a monoidal structure given by the tensor product of representations, which is an important ingredient in Tannaka-Krein duality (see below).
Maschke's theorem states that when the characteristic of k does not divide the order of G, the category of representations of G over k is semisimple.
Restriction and induction
Given a group G with a subgroup H, there are two fundamental functors between the categories of representations of G and H (over a fixed field): one is a forgetful functor called the restriction functor
Res^G_H : Rep(G) → Rep(H),
and the other, the induction functor
Ind^G_H : Rep(H) → Rep(G).
When G and H are finite groups, they are adjoint to each other,
Hom_G(Ind^G_H(W), V) ≅ Hom_H(W, Res^G_H(V)),
a theorem called Frobenius reciprocity.
The basic question is whether the decomposition into irreducible representations (simple objects of the category) behaves under restriction or induction. The question may be attacked for instance by the Mackey theory.
Tannaka-Krein duality
Tannaka–Krein duality concerns the interaction of a compact topological group and its category of linear representations. Tannaka's theorem describes the converse passage from the category of finite-dimensional representations of a group G back to the group G, allowing one to recover the group from its category of representations. Krein's theorem in effect completely characterizes all categories that can arise from a group in this fashion. These concepts can be applied to representations of several different structures, see the main article for details.
Notes
References
External links
https://ncatlab.org/nlab/show/category+of+representations
Representation theory
Category theory | Category of representations | [
"Mathematics"
] | 834 | [
"Mathematical structures",
"Fields of abstract algebra",
"Category theory",
"Categories in category theory",
"Representation theory"
] |
60,682,233 | https://en.wikipedia.org/wiki/Nikiel%27s%20conjecture | In mathematics, Nikiel's conjecture in general topology was a conjectural characterization of the continuous image of a compact total order. The conjecture was first formulated by Jacek Nikiel in 1986. The conjecture was proven by Mary Ellen Rudin in 1999.
The conjecture states that a compact topological space is the continuous image of a compact linearly ordered space if and only if it is a monotonically normal space.
Notes
Topology
Conjectures that have been proved | Nikiel's conjecture | [
"Physics",
"Mathematics"
] | 87 | [
"Mathematical theorems",
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Conjectures that have been proved",
"Spacetime",
"Mathematical problems"
] |
60,696,171 | https://en.wikipedia.org/wiki/Custirsen | Custirsen, with aliases including custirsen sodium, OGX-011, and CC-8490, is an investigational drug that is under clinical testing for the treatment of cancer. It is an antisense oligonucleotide (ASO) targeting clusterin expression. In metastatic prostate cancer, custirsen showed no benefit in improving overall survival.
Custirsen was developed through a collaboration between OncoGenex Pharmaceuticals Inc. and Isis Pharmaceuticals. In 2009, OncoGenex Pharmaceuticals Inc. and Teva Pharmaceutical Industries Ltd. agreed to develop and commercialise Custirsen.
Mechanism of action
An antisense oligonucleotide (ASO) is a single-strand DNA sequence complementary to a desired messenger RNA (mRNA) sequence. Antisense therapy targets gene sequences using antisense oligonucleotides by binding the ASO to the mRNA strand. This creates an inhibitory complex that reduces plasma protein levels by preventing translation.
Custirsen is a second-generation phosphorothioate antisense oligonucleotide. Phosphorothioates are oligonucleotides with a sulfur ion replacing an oxygen molecule in the chain. They have high antisense activity due to their increased chirality, nuclease stability, and solubility. Second-generation oligonucleotides are highly specific to the target mRNA sequence, increasing the affinity of the compound. Custirsen acts as an anti-cancer drug by binding to the mRNA initiation site of the clusterin gene, reducing clusterin protein plasma concentrations. The synthetic addition of a 2’-methoxyethyl on each nucleotide bookending the phosphorothioate backbone causes:
An increased affinity for the RNA54 targeted gene sequence
An increased resistance to digestive nucleases
Decreased toxicity
Increased tissue half-life by approximately seven days
Decreased adverse side effects, enabling more potent concentrations
Clusterin is upregulated in many tumours including prostate, breast, non-small cell lung, ovary and colorectal. It has been linked with the development of aggressive tumours by protecting the cells from apoptosis. It is also upregulated in response to standard cancer treatments including chemotherapy, androgen deprivation therapy, and radiation therapy. This resistance is caused by the inhibition of the pro-apoptotic BCL2 gene, prevention of protein aggregation, and increased NF-κB.
The anti-apoptotic activity of clusterin in aiding tumour growth is due to interactions with protein complexes. These include:
The binding of clusterin to a structurally altered Ku70–Bax complex
Regulation of Nuclear Factor (NF)-κB activity and signalling
Kinase ERK and AKT Kinase
Promoted epithelial-mesenchymal transition
Pharmacokinetics
A meta-analysis study evaluated 5588 Custirsen plasma concentrations from 631 subjects over seven clinical studies. Subjects with cancer received multiple doses between 40 mg and 640 mg intravenously over two hours, whilst healthy subjects received either a single or double dose at 320 mg-640 mg.
The pharmacokinetics of Custirsen was described by a three-compartment model with first-order elimination where:
The three-compartment model refers to the distribution of the drug: the body is divided into a central compartment and two peripheral compartments, with the rate of drug distribution highest in the central compartment and lowest in the second peripheral compartment.
First-order elimination describes the elimination rate of the drug. In first-order kinetics, elimination is proportional to concentration of drug present in the body.
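As a generic illustration of first-order elimination (these are standard textbook pharmacokinetic relationships, not custirsen-specific parameters), the plasma concentration C(t) satisfies
\frac{dC}{dt} = -k_{el}\,C, \qquad C(t) = C_0\, e^{-k_{el} t}, \qquad t_{1/2} = \frac{\ln 2}{k_{el}},
so a constant fraction of the drug present in the body is eliminated per unit time, and the concentration falls by half every elimination half-life t_{1/2}.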
For a representative 66-year-old subject with a body mass of 82 kg and a blood Custirsen level of 0.933 mg/dL, the estimated parameter values were:
Clearance (CL) = 2.36
Central Volume of Distribution (V1) = 6.08
Peripheral Volume of Distribution (V2) = 1.13
Volume of the Second Peripheral Compartment (V3) = 15.8
Side effects
A phase I study investigated the maximum tolerated dose of Custirsen in patients with recurrent or refractory high-grade gliomas.
The recommended and safe dosing of Custirsen was determined in a Phase I dose-escalation scheme involving forty patients with tumours known to upregulate clusterin with metastatic or locally recurrent disease (prostate, ovary, breast). Custirsen was infused intravenously on days 1, 3, and 5, with weekly dosing starting on day 8 for four weeks. The drug was administered in six escalating dose cohorts of 40 mg, 80 mg, 160 mg, 320 mg, 480 mg, and 640 mg.
The results found that the recommended dose of Custirsen was 640 mg, with the maximum decrease in clusterin blood plasma levels occurring at this dose. Researchers found a statistically significant increase in the apoptotic index in prostatectomy specimens.
This study provided the dosing framework for further studies into Custirsen, determining 640 mg as a tolerable and biologically active dose.
In this study, no dose-limiting toxicities were reported in doses up to and including 480 mg. For patients who received the 640 mg dose, the following adverse reactions were present:
Thrombocytopenia
Anaemia
Leukopenia
Fever
Fatigue
Rigors
Alopecia
Anorexia
In patients who received combined treatment with Docetaxel, four of sixteen patients experienced dose-limiting toxicities at 640 mg:
Dyspnea
Pleural effusion
Neutropenia
Fatigue
Mucositis
Phase III Studies
Phase III studies into the effectiveness of the combinational treatment of Custirsen and chemotherapy as a treatment of metastatic-castration-resistant prostate cancer and also Custirsen as a biomarker for clusterin are currently under evaluation.
The Phase III SYNERGY trial looked into the addition of Custirsen to first-line Docetaxel and Prednisone chemotherapy, concluding that there was no marked increase in survival compared to the Custirsen-independent treatment group. Researchers also found no difference in the rate of cancer progression. In contrast to previous research, subjects in the Custirsen group had more adverse reactions to the treatment than the Custirsen-independent group.
The findings of this clinical study contradict the results found in previous studies. However, researchers suggested further studies into patients with metastatic-castration-resistant prostate cancer who have poor prognostic features, believing there is a therapeutic effect of Custirsen in individuals with this feature of the disease.
References
Antineoplastic and immunomodulating drugs
Experimental cancer drugs
Antisense RNA
Therapeutic gene modulation | Custirsen | [
"Biology"
] | 1,397 | [
"Therapeutic gene modulation"
] |
44,244,186 | https://en.wikipedia.org/wiki/Four-point%20flexural%20test | The four-point flexural test provides values for the modulus of elasticity in bending, the flexural stress, the flexural strain, and the flexural stress–strain response of the material. This test is very similar to the three-point bending flexural test. The major difference is that with the addition of a fourth bearing the portion of the beam between the two loading points is put under maximum stress, as opposed to only the material right under the central bearing in the case of three-point bending.
This difference is of prime importance when studying brittle materials, where the number and severity of flaws exposed to the maximum stress is directly related to the flexural strength and crack initiation. Compared to the three-point bending flexural test, there are no shear forces in the four-point bending flexural test in the area between the two loading pins. The four-point bending test is therefore particularly suitable for brittle materials that cannot withstand shear stresses very well.
It is one of the most widely used apparatus to characterize fatigue and flexural stiffness of asphalt mixtures.
Testing method
The test method for conducting the test usually involves a specified test fixture on a universal testing machine. Details of the test preparation, conditioning, and conduct affect the test results. The sample is placed on two supporting pins a set distance apart and two loading pins placed at an equal distance around the center. These two loadings are lowered from above at a constant rate until sample failure.
Calculation of the flexural stress
σ = 3FL/(4bd²) for four-point bending test where the loading span is 1/2 of the support span (rectangular cross section)
σ = FL/(bd²) for four-point bending test where the loading span is 1/3 of the support span (rectangular cross section)
σ = 3FL/(2bd²) for three-point bending test (rectangular cross section)
in these formulas the following parameters are used:
σ = Stress in outer fibers at midpoint, (MPa)
F = load at a given point on the load deflection curve, (N)
L = Support span, (mm)
b = Width of test beam, (mm)
d = Depth or thickness of tested beam, (mm)
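As a worked illustration of the stress formulas above, the following short Python sketch evaluates them for a hypothetical rectangular specimen (the load and the dimensions are invented purely for illustration):

def stress_4pt_half_span(F, L, b, d):
    # Four-point bending, loading span = 1/2 of the support span: sigma = 3FL / (4 b d^2)
    return 3 * F * L / (4 * b * d ** 2)

def stress_4pt_third_span(F, L, b, d):
    # Four-point bending, loading span = 1/3 of the support span: sigma = FL / (b d^2)
    return F * L / (b * d ** 2)

def stress_3pt(F, L, b, d):
    # Three-point bending: sigma = 3FL / (2 b d^2)
    return 3 * F * L / (2 * b * d ** 2)

# Hypothetical specimen: F = 500 N, support span L = 80 mm, width b = 10 mm, depth d = 4 mm
print(stress_4pt_half_span(500, 80, 10, 4))   # 187.5 MPa
print(stress_4pt_third_span(500, 80, 10, 4))  # 250.0 MPa
print(stress_3pt(500, 80, 10, 4))             # 375.0 MPa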
Calculation of the Elastic modulus
In the 4-point bending test, the specimen is placed on two supports and loaded in the middle by a test punch with two loading points. This results in a constant bending moment between the two loading points. Consequently, a shear-free zone is created, where the specimen is subjected only to bending. This has the advantage that no additional shear force acts on the specimen, unlike in the 3-point bending test.
The bending modulus for a flat specimen is calculated as follows:
b: Specimen width in mm
a: Specimen thickness in mm
lA: Span length (distance between support point and the nearest loading point of the test punch) in mm
lB: Length of the reference beam (between the loading points, symmetrically placed relative to the loading points) in mm
DL: Distance between the reference beam and the main beam (centered between the loading points) in mm
E: Bending modulus in kN/mm²
lv: Span length in mm
XH: End of bending modulus determination in kN
XL: Start of bending modulus determination in kN
DL: Deflection in mm between XH and XL
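The original expression for the bending modulus is not reproduced above. For orientation, commonly used ASTM-style forms, written in terms of the slope m of the initial straight-line portion of the load–deflection curve measured at midspan (standard textbook results that may differ from the reference-beam formulation implied by the parameters listed above), are
E_B = \frac{0.21\, L^3 m}{b d^3} for a loading span of one-third of the support span, and
E_B = \frac{0.17\, L^3 m}{b d^3} for a loading span of one-half of the support span,
where L is the support span, b the specimen width and d the specimen depth.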
Advantages and disadvantages
Advantages of three-point and four-point bending tests over uniaxial tensile tests include:
simpler sample geometries
minimum sample machining is required
simple test fixture
possibility to use as-fabricated materials
Disadvantages include:
more complex integral stress distributions through the sample
Application with different materials
Ceramics
Ceramics are usually very brittle, and their flexural strength depends on both their inherent toughness and the size and severity of flaws. Exposing a large volume of material to the maximum stress will reduce the measured flexural strength because it increases the likelihood of having cracks reaching critical length at a given applied load. Values for the flexural strength measured with four-point bending will be significantly lower than with three-point bending. Compared with the three-point bending test, this method is more suitable for strength evaluation of butt-joint specimens. The advantage of the four-point bending test is that a larger portion of the specimen between the two inner loading pins is subjected to a constant bending moment, and therefore positioning of the joint region is more repeatable.
Composite materials
Plastics
Standards
ASTM C1161: Standard Test Method for Flexural Strength of Advanced Ceramics at Ambient Temperature
ASTM D6272: Standard Test Method for Flexural Properties of Unreinforced and Reinforced Plastics and Electrical Insulating Materials by Four-Point Bending
ASTM C393: Standard Test Method for Core Shear Properties of Sandwich Constructions by Beam Flexure
ASTM D7249: Standard Test Method for Facing Properties of Sandwich Constructions by Long Beam Flexure
ASTM D7250: Standard Practice for Determining Sandwich Beam Flexural and Shear Stiffness
See also
Bending
Euler–Bernoulli beam equation
Flexural strength
Three-point flexural test
List of area moments of inertia
Second moment of area
References
External links
ASTM C1161: Standard Test Method for Flexural Strength of Advanced Ceramics at Ambient Temperature
ASTM D6272: Standard Test Method for Flexural Properties of Unreinforced and Reinforced Plastics and Electrical Insulating Materials by Four-Point Bending
ASTM C393: Standard Test Method for Core Shear Properties of Sandwich Constructions by Beam Flexure
ASTM D7249: Standard Test Method for Facing Properties of Sandwich Constructions by Long Beam Flexure
ASTM D7250: Standard Practice for Determining Sandwich Beam Flexural and Shear Stiffness
ASTM C78: Standard Test Method for Flexural Strength of Concrete (Using Simple Beam with Third-Point Loading)
Materials testing
Mechanics | Four-point flexural test | [
"Physics",
"Materials_science",
"Engineering"
] | 1,151 | [
"Materials testing",
"Mechanics",
"Materials science",
"Mechanical engineering"
] |
44,245,768 | https://en.wikipedia.org/wiki/Medrylamine | Medrylamine is an antihistamine related to diphenhydramine.
References
Dimethylamino compounds
Ethers
H1 receptor antagonists
Muscarinic antagonists
Muscle relaxants | Medrylamine | [
"Chemistry"
] | 41 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
44,248,016 | https://en.wikipedia.org/wiki/Interface%20%28journal%29 | Interface (also known as The Electrochemical Society Interface) is a quarterly open access scientific journal published by the Electrochemical Society covering developments in electrochemistry and solid-state chemistry, as well as news and information about and for members of the society.
History
The journal was established in 1992, because the Journal of the Electrochemical Society became a purely technical publication. The new publication was intended to provide members with information on matters affecting their society interests. The first issue was published in the Winter of 1992, with a cover that featured Nobel Laureate Rudolph Marcus, who learned of his winning the prize while at the ECS fall meeting in Toronto.
Indexing and abstracting
The journal is indexed and abstracted in the following bibliographic databases:
References
External links
Electrochemistry journals
Academic journals published by learned and professional societies
Quarterly journals
English-language journals
Academic journals established in 1992
Electrochemical Society academic journals | Interface (journal) | [
"Chemistry"
] | 181 | [
"Electrochemistry journals",
"Electrochemistry",
"Electrochemistry stubs",
"Physical chemistry journals",
"Physical chemistry stubs"
] |
44,252,795 | https://en.wikipedia.org/wiki/Thomas%27%20cyclically%20symmetric%20attractor | In the dynamical systems theory, Thomas' cyclically symmetric attractor is a 3D strange attractor originally proposed by René Thomas. It has a simple form which is cyclically symmetric in the x, y, and z variables and can be viewed as the trajectory of a frictionally dampened particle moving in a 3D lattice of forces. The simple form has made it a popular example.
It is described by the differential equations
dx/dt = sin(y) − bx,
dy/dt = sin(z) − by,
dz/dt = sin(x) − bz,
where b is a constant.
b corresponds to how dissipative the system is, and acts as a bifurcation parameter. For b > 1 the origin is the single stable equilibrium. At b = 1 it undergoes a pitchfork bifurcation, splitting into two attractive fixed points. As the parameter is decreased further they undergo a Hopf bifurcation at b ≈ 0.32899, creating a stable limit cycle. The limit cycle then undergoes a period doubling cascade and becomes chaotic at b ≈ 0.208186. Beyond this the attractor expands, undergoing a series of crises (up to six separate attractors can coexist for certain values of b). The fractal dimension of the attractor increases towards 3.
In the limit b = 0 the system lacks dissipation and the trajectory ergodically wanders the entire space (with an exception for 1.67% of initial conditions, where it drifts parallel to one of the coordinate axes: this corresponds to quasiperiodic tori). The dynamics has been described as deterministic fractional Brownian motion, and exhibits anomalous diffusion.
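A minimal numerical sketch of the system is given below (using NumPy and SciPy; the parameter value and the integration settings are illustrative choices, not canonical ones):

import numpy as np
from scipy.integrate import solve_ivp

b = 0.208186  # dissipation parameter, chosen near the onset of chaos

def thomas(t, state):
    # Cyclically symmetric right-hand side of Thomas' system
    x, y, z = state
    return [np.sin(y) - b * x,
            np.sin(z) - b * y,
            np.sin(x) - b * z]

# Integrate a single trajectory from an arbitrary initial condition
sol = solve_ivp(thomas, (0.0, 500.0), [0.1, 0.0, 0.0], max_step=0.05)
print(sol.y[:, -1])  # a point on (or near) the attractor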
References
Nonlinear systems
Dynamical systems
Chaotic maps | Thomas' cyclically symmetric attractor | [
"Physics",
"Mathematics"
] | 298 | [
"Functions and mappings",
"Mathematical objects",
"Nonlinear systems",
"Mechanics",
"Mathematical relations",
"Chaotic maps",
"Dynamical systems"
] |
67,151,652 | https://en.wikipedia.org/wiki/Piezoelectrochemical%20transducer%20effect | The piezoelectrochemical transducer effect (PECT) is a coupling between the electrochemical potential and the mechanical strain in ion-insertion-based electrode materials. It is similar to the piezoelectric effect – with both exhibiting a voltage-strain coupling - although the PECT effect relies on movement of ions within a material microstructure, rather than charge accumulation from the polarization of electric dipole moments.
Many different materials have been shown to exhibit a PECT effect, including lithiated graphite; carbon fibers inserted with lithium, sodium, and potassium; sodiated black phosphorus; lithiated aluminium; lithium cobalt oxide; vanadium oxide nanofibers inserted with lithium and sodium; and lithiated silicon.
These materials all exhibit a voltage-strain coupling, whereby the material expands when it is charged with ions, and contracts when it is discharged. The reverse is also true: when applying a mechanical strain the electrical potential changes.
This has led to various proposals of applications for the PECT effect with research focusing on actuators, strain-sensors, and energy harvesters.
Origins
The PECT effect was first reported by Dr. F Lincoln Vogel in 1981 when studying how intercalation voltages could be used to provide an actuation force in graphitized carbon fibres. The research used sulphate (SO4) ions from sulfuric acid to intercalate into the microstructure of carbon fibers, forming graphite intercalation compounds (GICs). It was hypothesized that an axial strain of up to 2% should be possible; however, only 0.2% was observed due to experimental limitations.
The effect is often explained by the theories of Larché and Cahn who derived mathematical formulations for the equilibrium relationships between the electric potential, chemical potential, and mechanical stress in solid materials. In summary the theory states that solid materials under mechanical stress undergo a change in chemical potential, which in turn affects their electrical potential.
Applications
Actuation
Since PECT materials expand and contract upon ion-insertion it is possible to use this effect for actuation. Several different materials have been proposed for this, including: carbon fibers inserted with lithium, sodium, and potassium; lithium cobalt oxide; and vanadium oxide nanofibers inserted with lithium and sodium. Applications for PECT-based actuation range from microelectromechanical systems (MEMS), to large morphing structures.
Different materials exhibit different amounts of expansion/contraction, with a response that is dependent on the type of ion, as well as the amount of charge. For example, silicon expands by more than 300% when inserted with lithium, whereas graphite expands by around 13%. Carbon fibres expand by up to 1% when inserted with lithium, but only around 0.2% when inserted with potassium.
Strain-sensing
As PECT materials exhibit a change in voltage upon application of strain, it is possible to calibrate this change in voltage to the level of strain in a material. This has been proposed for applications in battery health monitoring, as well as structural health monitoring.
Electricity production
When mechanical strain is applied to a PECT material it changes the chemical potential, and therefore the electric potential of that material. Since current flows from more negative materials to more positive materials, it is possible to induce a current flow between two ionically connected materials by simply applying a mechanical strain. It is therefore possible to harness and convert mechanical energy into electrical energy.
A number of materials have been demonstrated to be capable of PECT-based energy harvesting, including: carbon fibers inserted with lithium, sodiated black phosphorus; lithiated aluminium; and lithiated silicon. A structural carbon fibre composite has also been shown to be capable of harvesting energy using the PECT effect. Conventional lithium-ion batteries have also been shown to be capable of PECT-based energy harvesting.
This effect has most often been demonstrated using a two-electrode bending setup:
Two electrodes of the same material are connected ionically through an electrolyte, and electrically via an outer circuit.
A bending deformation is applied causing tension in one electrode and compression in the other.
The resulting change in chemical potential results in current flow in the outer circuit, which can be used to power an external device.
PECT energy harvesting is limited by the rate of ionic diffusion, and therefore is only efficient at low frequency (typically below around 1 Hz).
Figures of merit for comparing different PECT-based energy harvesters were formulated by Preimesberger et al.
Implications for batteries
The PECT effect is also present in typical ion-insertion-based battery electrodes (e.g. Li-ion). The electrodes expand and contract when inserted with ions, which is one of the issues that leads to battery ageing and capacity loss over time. The PECT effect in battery electrodes could be an issue in situations where battery electrodes are mechanically stressed (e.g. in structural batteries), causing a change in electrical potential when the stress-state changes.
It has been proposed that the PECT effect in Li-ion batteries could be exploited to measure battery health and to harvest mechanical energy.
References
Electrochemistry
Piezoelectric materials | Piezoelectrochemical transducer effect | [
"Physics",
"Chemistry"
] | 1,070 | [
"Physical phenomena",
"Materials",
"Electrical phenomena",
"Electrochemistry",
"Piezoelectric materials",
"Matter"
] |
67,153,627 | https://en.wikipedia.org/wiki/Tissue%20clearing | Tissue clearing refers to a group of chemical techniques used to turn tissues transparent. By turning tissues transparent to certain wavelengths of light, it allows one to gain optical access to a tissue. That is, light can pass into and out of the cleared tissue freely, allowing one to see the structures deep within the tissue without physically cutting it open. Many tissue clearing methods exist, each with different strengths and weaknesses. Some are generally applicable, while others are designed for specific applications. Tissue clearing is usually useful only combined with one or more fluorescent labeling techniques such as immunolabeling and subsequently imaged, most often by optical sectioning microscopy techniques. Tissue clearing has been applied to many areas in biological research. It is one of the more efficient ways to perform three-dimensional histology.
History
In the early 1900s, Werner Spalteholz developed a technique that allowed the clarification of large tissues, using Wintergrünöl (methyl salicylate) and benzyl benzoate. Various scientists then introduced their own variations on Spalteholz's technique. Tuchin et al. introduced tissue optical clearing (TOC) in 1997, adding a new branch of tissue clearing that was hydrophilic instead of hydrophobic like Spalteholz's technique. In the 1980s, Andrew Murray & Marc Kirschner developed a two-step process, wherein tissues were first dehydrated with alcohol and subsequently made transparent by immersion in a mixture of benzyl alcohol and benzyl benzoate (BABB), a technique later coupled with light sheet fluorescence microscopy, which remains the method with the highest clearing efficacy to date, regardless of any tissue pre-processing step. In the most extreme case, it allows the clearing of a whole mouse or even a whole human brain.
Principles
Tissue opacity is thought to be the result of light scattering due to heterogeneous refractive indices. Tissue clearing methods chemically homogenize refractive indices, resulting in almost completely transparent tissue.
Classifications
While there are multiple class names for tissue-clearing methods, they are all classified based on the final state of the tissue by the end of the clearing method. These include hydrophobic clearing methods, which may also be known as organic, solvent-based, organic solvent-based, or dehydration clearing methods; hydrophilic clearing methods, which may also be known as aqueous-based or water-based methods, and hydrogel-based clearing methods.
Labeling
Tissue clearing methods have varying compatibility with different methods of fluorescent labeling. Some are better suited to genetic labelling by endogenously expressed fluorescent proteins, while others are better suited to externally delivered probes, as in immunolabeling and chemical dye labeling. The latter approach is more general and applicable to all tissues, notably human tissues, but the penetration of the probes becomes a critical problem.
Imaging
After clearing and labeling, tissues are typically imaged using confocal microscopy, two-photon microscopy, or one of the many variants of light-sheet fluorescence microscopy. Other less commonly used methods include optical projection tomography and stimulated Raman scattering. As long as the tissue allows for the unobstructed passing of light, the optical resolution is fundamentally limited by Abbe diffraction limit. The compatibility of any tissue clearing method with any microscopy system is, therefore, configurational rather than optical.
Data
Tissue clearing is one of the more efficient ways to facilitate 3D imaging of tissues, and hence generates massive volumes of complex data, which requires powerful computational hardware and software to store, process, analyze, and visualize. A single mouse brain can generate terabytes of data. Both commercial and open-source software exists to address this need, some of it adapted from solutions for two-dimensional images and some of it designed specifically for the three-dimensional images produced by imaging of cleared tissues.
Applications
Tissue clearing has been applied to the nervous system, bones (including teeth), skeletal muscles, hearts and vasculature, gastrointestinal organs, urogenital organs, skin, lymph nodes, mammary glands, lungs, eyes, tumors, and adipose tissues. Whole-body clearing is less common, but has been done in smaller animals, including rodents. Tissue clearing has also been applied to human cancer tissues. For some techniques, bone tissue must be decalcified to remove light-scattering hydroxyapatite crystals, leaving behind a protein matrix suitable for clearing.
References
Tissue engineering | Tissue clearing | [
"Chemistry",
"Engineering",
"Biology"
] | 906 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
42,801,222 | https://en.wikipedia.org/wiki/Breit%E2%80%93Wheeler%20process | The Breit–Wheeler process or Breit–Wheeler pair production is a proposed physical process in which a positron–electron pair is created from the collision of two photons. It is the simplest mechanism by which pure light can be potentially transformed into matter. The process can take the form γ γ′ → e+ e− where γ and γ′ are two light quanta (for example, gamma photons).
The multiphoton Breit–Wheeler process, also referred to as nonlinear Breit–Wheeler or strong field Breit–Wheeler in the literature, occurs when a high-energy probe photon decays into pairs propagating through a strong electromagnetic field (for example, a laser pulse). In contrast with the linear process, this can take the form of γ + n ω → e+ e−, where n represents the number of photons, and ω represents the coherent laser field.
The inverse process, e+ e− → γ γ′, in which an electron and a positron collide and annihilate to generate a pair of gamma photons, is known as electron–positron annihilation or the Dirac process for the name of the physicist who first described it theoretically and anticipated the Breit–Wheeler process.
This mechanism is theoretically characterized by a very weak probability, so producing a significant number of pairs requires two extremely bright, collimated sources of photons having photon energy close to or above the electron and positron rest mass energy. Manufacturing such a source, for instance, a gamma-ray laser, is still a technological challenge. In many experimental configurations, pure Breit–Wheeler is dominated by other more efficient pair creation processes that screen pairs produced via this mechanism. The Dirac process (pair annihilation) has, on the other hand, been extensively verified. This is also the case for the multi-photon Breit–Wheeler, which was observed at the Stanford Linear Accelerator Center in 1997 by colliding high-energy electrons with a counter-propagating terawatt laser pulse.
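As an illustrative statement of this energy requirement (standard two-photon pair-production kinematics, independent of any particular experimental proposal), the reaction γ γ′ → e+ e− is kinematically allowed only when the photon energies E_1 and E_2 and the collision angle θ satisfy
2 E_1 E_2 (1 - \cos\theta) \geq (2 m_e c^2)^2,
which for a head-on collision reduces to E_1 E_2 \geq (m_e c^2)^2 \approx (0.511\ \mathrm{MeV})^2.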
Although this mechanism is still one of the most difficult to be observed experimentally on Earth, it is of considerable importance for the absorption of high-energy photons travelling cosmic distances.
The photon–photon and the multiphoton Breit–Wheeler processes are described theoretically by the theory of quantum electrodynamics.
History
The photon–photon Breit–Wheeler process was described theoretically by Gregory Breit and John A. Wheeler in 1934 in Physical Review. It followed previous theoretical work by Paul Dirac on antimatter and pair annihilation. In 1928, Paul Dirac proposed that electrons could have positive- and negative-energy states within the framework of relativistic quantum theory, but he did not explicitly predict the existence of a new particle.
Experimental observations
Photon–photon Breit–Wheeler possible experimental configurations
Although the process is one of the manifestations of the mass–energy equivalence, as of 2017 the pure Breit–Wheeler process had never been observed in practice because of the difficulty in preparing colliding gamma-ray beams and the very weak probability of this mechanism. Recently, different teams have proposed novel theoretical studies on possible experimental configurations to finally observe it on Earth.
In 2014, physicists at Imperial College London proposed a relatively simple way to physically demonstrate the Breit–Wheeler process. The collider experiment that the physicists proposed involves two key steps. First, they would use an extremely powerful high-intensity laser to accelerate electrons to nearly the speed of light. They would then fire these electrons into a slab of gold to create a beam of photons a billion times more energetic than those of visible light. The next stage of the experiment involves a tiny gold can called a hohlraum (German for 'empty room' or 'cavity'). Scientists would fire a high-energy laser at the inner surface of this hohlraum to create a thermal radiation field. They would then direct the photon beam from the first stage of the experiment through the centre of the hohlraum, causing the photons from the two sources to collide and form electrons and positrons. It would then be possible to detect the formation of the electrons and positrons when they exited the can. Monte Carlo simulations suggest that this technique is capable of producing of the order of 10⁵ Breit–Wheeler pairs in a single shot.
In 2016, a second novel experimental setup was proposed theoretically to demonstrate and study the Breit–Wheeler process by colliding two high-energy photon sources (composed of non-coherent hard x-ray and gamma-ray photons) generated from the interaction of two extremely intense lasers on solid thin foils or gas jets. With the forthcoming short-pulse, extremely intense lasers, laser interaction with a solid target will be the site of strong radiative effects driven by nonlinear inverse Compton scattering. This effect, negligible so far, will become a dominant cooling mechanism for the extremely relativistic electrons accelerated above the 100 MeV level at the laser–solid interface via different mechanisms.
Multiphoton Breit–Wheeler experiments
The multiphoton Breit–Wheeler process has already been observed and studied experimentally. One of the most efficient configurations to maximize multiphoton Breit–Wheeler pair production consists of colliding a bunch of gamma photons head-on with a counter-propagating ultra-high-intensity laser pulse (a slight collision angle is also possible; the co-propagating configuration is the least efficient). To first create the photons and then have the pair production in an all-in-one setup, a similar configuration can be used by colliding GeV electrons with the laser pulse. Depending on the laser intensity, these electrons will first radiate gamma photons via the so-called non-linear inverse Compton scattering mechanism when interacting with the laser pulse. Still interacting with the laser, the photons then turn into multiphoton Breit–Wheeler electron–positron pairs.
This method was used in 1997 at the Stanford Linear Accelerator Center. Researchers were able to conduct the multi-photon Breit–Wheeler process using electrons to first create high-energy photons, which then underwent multiple collisions to produce electrons and positrons, all within the same chamber. Electrons were accelerated in the linear accelerator to an energy of 46.6 GeV before being sent head-on into a neodymium glass (Nd:glass) linearly polarized laser of intensity 10¹⁸ W/cm² (maximal electric field amplitude of around 6×10⁹ V/m), of wavelength 527 nanometers and duration 1.6 picoseconds. In this configuration, it has been estimated that photons of energy up to 29 GeV were generated. This led to the yield of 106 ± 14 positrons with a broad energy spectrum in the GeV level (peak around 13 GeV).
The aforementioned experiment may be reproduced in the future at SLAC with more powerful laser technologies. The use of higher laser intensities (10²⁰ W/cm²) is now easily achievable with short-pulse titanium-sapphire laser solutions that would significantly enhance process efficiencies (inverse nonlinear Compton and nonlinear Breit–Wheeler pair creation) leading to several orders of magnitude higher antimatter production, enabling higher-resolution measurements, additional mass-shift, as well as nonlinear and spin effects.
The extreme intensities expected to be available in future multi-petawatt laser systems will allow all-optical, laser–electron collision experiments where the electron beam is generated from direct laser interaction with a gas jet in a so-called laser wakefield acceleration regime. The resulting electron bunch is then made to interact with a second high-power laser in order to study QED processes. The feasibility of an all-optical multi-photon Breit–Wheeler pair production scheme was first proposed theoretically. Implementation of this scheme is restricted to multi-beam short-pulse extreme-intensity laser facilities such as the CILEX-Apollon and ELI systems (CPA titanium-sapphire technology at 0.8 micrometer, duration of 15–30 femtoseconds). The generation of electron beams of a few GeV and a few nanocoulombs is possible with a first laser of 1 petawatt combined with the use of tuned and optimized gas-jet density profiles such as two-step profiles. Strong pair generation can be achieved by colliding this electron beam head-on with a second laser of intensity above 10²² W/cm². In this configuration at this level of intensity, theoretical studies predict that several hundreds of picocoulombs of antimatter could be produced. This experimental setup could even be one of the most prolific positron factories. This all-optical scenario may be tested preliminarily with lower laser intensities of the order of 10²¹ W/cm².
In July 2021, evidence consistent with the process was reported by the STAR detector, one of the four experiments at the Relativistic Heavy Ion Collider, although it was unclear whether it was due to massless (real) photons or massive virtual photons. Vacuum birefringence was also studied, and the evidence obtained was judged sufficient to claim the first known observation of the process.
See also
Two-photon physics
References
Photonics
Hypothetical processes
Quantum electrodynamics | Breit–Wheeler process | [
"Physics"
] | 1,903 | [
"Theoretical physics",
"Hypotheses in physics"
] |
42,804,273 | https://en.wikipedia.org/wiki/Ribosomally%20synthesized%20and%20post-translationally%20modified%20peptides | Ribosomally synthesized and post-translationally modified peptides (RiPPs), also known as ribosomal natural products, are a diverse class of natural products of ribosomal origin. Consisting of more than 20 sub-classes, RiPPs are produced by a variety of organisms, including prokaryotes, eukaryotes, and archaea, and they possess a wide range of biological functions.
As a consequence of the falling cost of genome sequencing and the accompanying rise in available genomic data, scientific interest in RiPPs has increased in the last few decades. Because the chemical structures of RiPPs are more closely predictable from genomic data than are other natural products (e.g. alkaloids, terpenoids), their presence in sequenced organisms can, in theory, be identified rapidly. This makes RiPPs an attractive target of modern natural product discovery efforts.
Definition
RiPPs consist of any peptides (i.e. molecular weight below 10 kDa) that are ribosomally-produced and undergo some degree of enzymatic post-translational modification. This combination of peptide translation and modification is referred to as "post-ribosomal peptide synthesis" (PRPS) in analogy with nonribosomal peptide synthesis (NRPS).
Historically, the current sub-classes of RiPPs were studied individually, and common practices in nomenclature varied accordingly in the literature. More recently, with the advent of broad genome sequencing, it has been realized that these natural products share a common biosynthetic origin. In 2013, a set of uniform nomenclature guidelines were agreed upon and published by a large group of researchers in the field. Prior to this report, RiPPs were referred to by a variety of designations, including post-ribosomal peptides, ribosomal natural products, and ribosomal peptides.
The acronym "RiPP" stands for "ribosomally synthesized and post-translationally modified peptide".
Prevalence and applications
RiPPs constitute one of the major superfamilies of natural products, like alkaloids, terpenoids, and nonribosomal peptides, although they tend to be large, with molecular weights commonly in excess of 1000 Da. The advent of next-generation sequencing methods has made genome mining of RiPPs a common strategy. In part due to their increased discovery and hypothesized ease of engineering, the use of RiPPs as drugs is increasing. Although they are ribosomal peptides in origin, RiPPs are typically categorized as small molecules rather than biologics due to their chemical properties, such as moderate molecular weight and relatively high hydrophobicity.
The uses and biological activities of RiPPs are diverse.
RiPPs in commercial use include nisin, a food preservative, thiostrepton, a veterinary topical antibiotic, and nosiheptide and duramycin, which are animal feed additives. Phalloidin functionalized with a fluorophore is used in microscopy as a stain due to its high affinity for actin. Anantin is a RiPP used in cell biology as an atrial natriuretic peptide receptor inhibitor.
In 2012–2013, a derivatized RiPP in clinical trials was LFF571. Phase II clinical trials of LFF571, a derivative of the thiopeptide GE2270-A, for the treatment of Clostridioides difficile infections, with comparable safety and efficacy to vancomycin, were terminated early as the results were unfavorable. Also recently in clinical trials was NVB302 (a derivative of the lantibiotic actagardine) for the treatment of Clostridioides difficile infection. Duramycin has completed phase II clinical trials for the treatment of cystic fibrosis.
Other bioactive RiPPs include the antibiotics cyclothiazomycin and bottromycin, the ultra-narrow spectrum antibiotic plantazolicin, and the cytotoxin patellamide A. Streptolysin S, the toxic virulence factor of Streptococcus pyogenes, is also a RiPP. Additionally, human thyroid hormone itself is a RiPP due to its biosynthetic origin as thyroglobulin.
Classifications
Amatoxins and phallotoxins
Amatoxins and phallotoxins are 8- and 7-membered natural products, respectively, characterized by N-to-C cyclization in addition to a tryptathionine motif derived from the crosslinking of Cys and Trp. The amatoxins and phallotoxins also differ from other RiPPs based on the presence of a C-terminal recognition sequence in addition to the N-terminal leader peptide. α-Amanitin, an amatoxin, has a number of posttranslational modifications in addition to macrocyclization and formation of the tryptathionine bridge: oxidation of the tryptathionine leads to the presence of a sulfoxide, and numerous hydroxylations decorate the natural product. As an amatoxin, α-amanitin is an inhibitor of RNA polymerase II.
Bottromycins
Bottromycins contain a C-terminal decarboxylated thiazole in addition to a macrocyclic amidine.
There are currently six known bottromycin compounds, which differ in the extent of side chain methylation, an additional characteristic of the bottromycin class. The total synthesis of bottromycin A2 was required to definitively determine the structure of the first bottromycin.
Thus far, gene clusters predicted to produce bottromycins have been identified in the genus Streptomyces. Bottromycins differ from other RiPPs in that there is no N-terminal leader peptide. Rather, the precursor peptide has a C-terminal extension of 35-37 amino acids, hypothesized to act as a recognition sequence for posttranslational machinery.
Cyanobactins
Cyanobactins are diverse metabolites from cyanobacteria with N-to-C macrocylization of a 6–20 amino acid chain. Cyanobactins are natural products isolated from cyanobacteria, and close to 30% of all cyanobacterial strains are thought to contain cyanobacterial gene clusters. However, while thus far all cyanobactins are credited to cyanobacteria, there exists the possibility that other organisms could produce similar natural products.
The precursor peptide of the cyanobactin family is traditionally designated the "E" gene, whereas precursor peptides are designated gene "A" in most RiPP gene clusters. "A" is a serine protease involved in cleavage of the leader peptide and subsequent macrocyclization of the peptide natural product, in combination with an additional serine protease homologue, the one encoded by gene "G". Members of the cyanobactin family may bear thiazolines/oxazolines, thiazoles/oxazoles, and methylations depending on additional modification enzymes. For example, perhaps the most famous cyanobactin is patellamide A, which contains two thiazoles, a methyloxazoline, and an oxazoline in its final state, a macrocycle derived from 8 amino acids.
Lanthipeptides
Lanthipeptides are one of the most well-studied families of RiPPs. The family is characterized by the presence of lanthionine (Lan) and 3-methyllanthionine (MeLan) residues in the final natural product. There are four major classes of lanthipeptides, delineated by the enzymes responsible for installation of Lan and MeLan. The dehydratase and cyclase can be two separate proteins or one multifunctional enzyme. Previously, lanthipeptides were known as "lantipeptides" before a consensus was reached in the field.
Lantibiotics are lanthipeptides that have known antimicrobial activity. The founding member of the lanthipeptide family, nisin, is a lantibiotic that has been used to prevent the growth of food-born pathogens for over 40 years.
Lasso peptides
Lasso peptides are short peptides containing an N-terminal macrolactam macrocycle "ring" through which a linear C-terminal "tail" is threaded. Because of this threaded-loop topology, these peptides resemble lassos, giving rise to their name. They are a member of a larger class of amino-acid-based lasso structures. Additionally, lasso peptides are formally rotaxanes.
The N-terminal "ring" can be from 7 to 9 amino acids long and is formed by an isopeptide bond between the N-terminal amine of the first amino acid of the peptide and the carboxylate side chain of an aspartate or glutamate residue. The C-terminal "tail" ranges from 7 to 15 amino acids in length.
The first amino acid of lasso peptides is almost invariably glycine or cysteine, with mutations at this site not being tolerated by known enzymes. Thus, bioinformatics-based approaches to lasso peptide discovery have thus used this as a constraint. However, some lasso peptides were recently discovered that also contain serine or alanine as their first residue.
The threading of the lasso tail is trapped either by disulfide bonds between ring and tail cysteine residues (class I lasso peptides), by steric effects due to bulky residues on the tail (class II lasso peptides), or both (class III lasso peptides). The compact structure makes lasso peptides frequently resistant to proteases or thermal unfolding.
Linear azol(in)e-containing peptides
Linear azole(in)e-containing peptides (LAPs) contain thiazoles and oxazoles, or their reduced thiazoline and oxazoline forms. Thiazol(in)es are the result of cyclization of Cys residues in the precursor peptide, while (methyl)oxazol(in)es are formed from Thr and Ser. Azole and azoline formation also modifies the residue at the −1 position, i.e., the residue directly N-terminal to the Cys, Ser, or Thr. A dehydrogenase in the LAP gene cluster is required for oxidation of azolines to azoles.
Plantazolicin is a LAP with extensive cyclization. Two sets of five heterocycles endow the natural product with structural rigidity and unusually selective antibacterial activity. Streptolysin S (SLS) is perhaps the most well-studied and most famous LAP, in part because the structure is still unknown since the discovery of SLS in 1901. Thus, while the biosynthetic gene cluster suggests SLS is a LAP, structural confirmation is lacking.
Microcins
Microcins are all RiPPs produced by Enterobacteriaceae with a molecular weight <10 kDa. Many members of other RiPP families, such as microcin E492, microcin B17 (a LAP) and microcin J25 (a lasso peptide), are also considered microcins. Rather than being classified based on posttranslational modifications or modifying enzymes, microcins are identified by molecular weight, native producer, and antibacterial activity. Microcins are either plasmid- or chromosome-encoded, and specifically have activity against Enterobacteriaceae. Because these organisms are also often producers of microcins, the gene cluster contains not only a precursor peptide and modification enzymes, but also a self-immunity gene to protect the producing strain, and genes encoding export of the natural product.
Microcins have bioactivity against Gram-negative bacteria but usually display narrow-spectrum activity due to hijacking of specific receptors involved in the transport of essential nutrients.
Thiopeptides
Most of the characterized thiopeptides have been isolated from Actinobacteria. General structural features of thiopeptide macrocycles are dehydrated amino acids and thiazole rings, formed from dehydrated serine/threonine and cyclized cysteine residues, respectively.
The thiopeptide macrocycle is closed with a six-membered nitrogen-bearing ring. Oxidation state and substitution pattern of the nitrogenous ring determines the series of the thiopeptide natural product. While the mechanism of macrocyclization is not known, the nitrogenous ring can exist in thiopeptides as a piperidine, dehydropiperidine, or a fully oxidized pyridine. Additionally, some thiopeptides bear a second macrocycle, which bears a quinaldic acid or indolic acid residue derived from tryptophan. Perhaps the most well-characterized thiopeptide, thiostrepton A, contains a dehydropiperidine ring and a second, quinaldic acid-containing macrocycle. Four residues are dehydrated during posttranslational modification, and the final natural product also bears four thiazoles and one azoline.
Other RiPPs
Autoinducing Peptides (AIPs) and quorum sensing peptides are used as signaling molecules in the process called quorum sensing. AIPs are characterized by the presence of a cyclic ester or thioester, unlike other regulatory peptides that are linear. In pathogens, exported AIPs bind to extracellular receptors that trigger the production of virulence factors. In Staphylococcus aureus, AIPs are biosynthesized from a precursor peptide composed of a C-terminal leader region, the core region, and negatively charged tail region that is, along with the leader peptide, cleaved before AIP export.
Bacterial Head-to-Tail Cyclized Peptides refers exclusively to ribosomally synthesized peptides with 35-70 residues and a peptide bond between the N- and C-termini, sometimes referred to as bacteriocins, although this term is used more broadly. The distinctive nature of this class is not only the relatively large size of the natural products but also the modifying enzymes responsible for macrocyclization. Other N-to-C cyclized RiPPs, such as the cyanobactins and orbitides, have specialized biosynthetic machinery for macrocyclization of much smaller core peptides. Thus far, these bacteriocins have been identified only in Gram-positive bacteria. Enterocin AS-48 was isolated from Enterococcus and, like other bacteriocins, is relatively resistant to high temperature, pH changes, and many proteases as a result of macrocyclization. Based on solution structures and sequence alignments, bacteriocins appear to take on similar 3D structures despite little sequence homology, contributing to stability and resistance to degradation.
Conopeptides and other toxoglossan peptides are the components of the venom of predatory marine snails, such as the cone snails or Conus. Venom peptides from cone snails are generally smaller than those found in other animal venoms (10-30 amino acids vs. 30-90 amino acids) and have more disulfide crosslinks. A single species may have 50-200 conopeptides encoded in its genome, recognizable by a well-conserved signal sequence.
Cyclotides are RiPPs with a head-to-tail cyclization and three conserved disulfide bonds that form a knotted structure called a cyclic cysteine knot motif. No other posttranslational modifications have been observed on the characterized cyclotides, which are between 28 - 37 amino acids in size. Cyclotides are plant natural products and the different cyclotides appear to be species-specific. While many activities have been reported for cyclotides, it has been hypothesized that all are united by a common mechanism of binding to and disrupting the cell membrane.
Glycocins are RiPPs that are glycosylated antimicrobial peptides. Only two members have been fully characterized, making this a small RiPP class. Sublancin 168 and glycocin F are both Cys-glycosylated and, in addition, have disulfide bonds between non-glycosylated Cys residues. While both members bear S-glycosyl groups, RiPPs bearing O- or N-linked carbohydrates will also be included in this family as they are discovered.
Linaridins are characterized by C-terminal aminovinyl cysteine residues. While this posttranslational modification is also seen in the lanthipeptides epidermin and mersacidin, linaridins do not have Lan or MeLan residues. In addition, the linaridin moiety is formed from modification of two Cys residues, whereas lanthipeptide aminovinyl cysteines are formed from Cys and dehydroalanine (Dha). The first linaridin to be characterized was cypemycin.
Microviridins are cyclic N-acetylated trideca- and tetradecapeptides with ω-ester and/or ω-amide bonds. Lactone formation through glutamate or aspartate ω-carboxy groups and the lysine ε-amino group forms macrocycles in the final natural product. This class of RiPPs function as protease inhibitors and were originally isolated from Microcystis viridis. Gene clusters encoding microviridins have also been identified in genomes across the Bacteroidetes and Proteobacteria phyla.
Orbitides are plant-derived N-to-C cyclized peptides with no disulfide bonds. Also referred to as Caryophyllaceae-like homomonocyclopeptides, orbitides are 5-12 amino acids in length and are composed of mainly hydrophobic residues. Similar to the amatoxins and phallotoxins, the gene sequences of orbitides suggest the presence of a C-terminal recognition sequence. In flaxseed (Linum usitatissimum), a precursor peptide was found using BLAST searching that potentially contains five core peptides separated by putative recognition sequences.
Proteusins are named after Proteus, a shape-shifting Greek sea god. To date, the only known members of the proteusin family are the polytheonamides. They were originally presumed to be nonribosomal natural products due to the presence of many D-amino acids and other non-proteinogenic amino acids. However, a metagenomic study revealed the natural products to be the most extensively modified class of RiPPs known to date. Six enzymes are responsible for installing a total of 48 posttranslational modifications onto the polytheonamide A and B precursor peptides, including 18 epimerizations. Polytheonamides are exceptionally large, as a single molecule is able to span a cell membrane and form an ion channel.
Sactipeptides contain intramolecular linkages between the sulfur of Cys residues and the α-carbon of another residue in the peptide. A number of nonribosomal peptides bear the same modification. In 2003, the first RiPP with a sulfur-to-α-carbon linkage was reported when the structure of subtilosin A was determined using isotopically enriched media and NMR spectroscopy. In the case of subtilosin A, isolated from Bacillus subtilis 168, the Cα crosslinks between Cys4 and Phe31, Cys7 and Thr28, and Cys13 and Phe22 are not the only posttranslational modifications; the C- and N-termini form an amide bond, resulting in a circular structure that is conformationally restricted by the Cα bonds. Sactipeptides with antimicrobial activity are commonly referred to as sactibiotics (sulfur to alpha-carbon antibiotic).
Biosynthesis
RiPPs are characterized by a common biosynthetic strategy wherein genetically-encoded peptides undergo translation and subsequent chemical modification by biosynthetic enzymes.
Common features
All RiPPs are synthesized first at the ribosome as a precursor peptide. This peptide consists of a core peptide segment, which is typically preceded (and occasionally followed) by a leader peptide segment that is usually ~20-110 residues long. The leader peptide is usually important for enabling enzymatic processing of the precursor peptide via aiding in recognition of the core peptide by biosynthetic enzymes and for cellular export. Some RiPPs also contain a recognition sequence C-terminal to the core peptide; these are involved in excision and cyclization. Additionally, eukaryotic RiPPs may contain a signal segment of the precursor peptide which helps direct the peptide to cellular compartments.
During RiPP biosynthesis, the unmodified precursor peptide (containing an unmodified core peptide, UCP) is recognized and chemically modified sequentially by biosynthetic enzymes (PRPS). Examples of modifications include dehydration (i.e. lanthipeptides, thiopeptides), cyclodehydration (i.e. thiopeptides), prenylation (i.e. cyanobactins), and cyclization (i.e. lasso peptides), among others. The resulting modified precursor peptide (containing a modified core peptide, MCP) then undergoes proteolysis, wherein the non-core regions of the precursor peptide are removed. This results in the mature RiPP.
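The overall logic — translate a leader–core precursor, modify the core while the leader is still attached, then cleave away the leader — can be pictured with a toy script. The sketch below is purely illustrative: the sequences, the single "modification" applied, and the hard-coded cleavage point are assumptions, not features of any real biosynthetic pathway.

```python
# Toy illustration of the generic RiPP logic described above: a ribosomally
# produced precursor (leader + core) is modified while the leader is attached,
# then proteolysis releases the mature core.

LEADER = "MSKEQLLELLDLQEVSLMEEA"   # hypothetical leader peptide
CORE   = "ITSISSSCTTCICTCSCSS"     # hypothetical Cys/Ser/Thr-rich core peptide
precursor = LEADER + CORE

def modify(core):
    """Pretend tailoring step: mark every Cys as a thiazoline ('c')."""
    return ["c" if res == "C" else res for res in core]

def mature_ripp(precursor, leader_len):
    """Modify the core while still fused to the leader, then 'cleave' the leader."""
    core = precursor[leader_len:]
    return "".join(modify(core))   # proteolysis removes the leader segment

print(mature_ripp(precursor, len(LEADER)))
```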
Nomenclature
Papers published prior to a recent community consensus employ differing sets of nomenclature. The precursor peptide has been referred to previously as prepeptide, prepropeptide, or structural peptide. The leader peptide has been referred to as a propeptide, pro-region, or intervening region. Historical alternate terms for core peptide included propeptide, structural peptide, and toxin region (for conopeptides, specifically).
Family-specific features
Lanthipeptides
Lanthipeptides are characterized by the presence of lanthionine (Lan) and 3-methyllanthionine (MeLan) residues. Lan residues are formed from a thioether bridge between Cys and Ser, while MeLan residues are formed from the linkage of Cys to a Thr residue. The biosynthetic enzymes responsible for Lan and MeLan installation first dehydrate Ser and Thr to dehydroalanine (Dha) and dehydrobutyrine (Dhb), respectively. Subsequent thioether crosslinking occurs through a Michael-type addition by Cys onto Dha or Dhb.
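The two-step chemistry (dehydration, then Michael-type addition) can be caricatured in a few lines of code. In the sketch below the nisin-like core fragment, the "nearest partner" pairing rule, and the lowercase markers for Dha/Dhb are all illustrative assumptions; real ring topologies are set by the enzymes, not by proximity.

```python
# Schematic sketch of the lanthipeptide chemistry described above, on an
# example core peptide: Ser/Thr are "dehydrated" to Dha/Dhb (lowercase), and
# each Cys is then naively paired with the nearest dehydrated residue to mark
# a putative Lan or MeLan bridge.

CORE = "ITSISLCTPGCKTGALMGCNMK"   # nisin-like fragment, used only as an example

dehydrated = CORE.replace("S", "s").replace("T", "t")   # s = Dha, t = Dhb
cys_positions = [i for i, r in enumerate(dehydrated) if r == "C"]
free_dehydro = [i for i, r in enumerate(dehydrated) if r in "st"]

bridges = []
for c in cys_positions:
    if free_dehydro:
        partner = min(free_dehydro, key=lambda p: abs(p - c))
        free_dehydro.remove(partner)
        kind = "Lan" if dehydrated[partner] == "s" else "MeLan"
        bridges.append((kind, partner + 1, c + 1))   # 1-based residue numbers

print(dehydrated)
print(bridges)
```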
Four classes of lanthipeptide biosynthetic enzymes have been designated. Class I lanthipeptides have dedicated lanthipeptide dehydratases, called LanB enzymes, though more specific designations are used for particular lanthipeptides (e.g. NisB is the nisin dehydratase). A separate cyclase, LanC, is responsible for the second step in Lan and MeLan biosynthesis. However, class II, III, and IV lanthipeptides have bifunctional lanthionine synthetases in their gene clusters, meaning a single enzyme carries out both dehydration and cyclization steps. Class II synthetases, designated LanM synthetases, have N-terminal dehydration domains with no sequence homology to other lanthipeptide biosynthetic enzymes; the cyclase domain has homology to LanC. Class III (LanKC) and IV (LanL) enzymes have similar N-terminal lyase and central kinase domains, but diverge in C-terminal cyclization domains: the LanL cyclase domain is homologous to LanC, but the class III enzymes lack Zn-ligand binding domains.
Linear azol(in)e-containing peptides
The hallmark of linear azol(in)e-containing peptide (LAP) biosynthesis is the formation of azol(in)e heterocycles from the nucleophilic amino acids serine, threonine, or cysteine. This is accomplished by three enzymes referred to as the B, C, and D proteins; the precursor peptide is referred to as the A protein, as in other classes.
The C protein is mainly involved in leader peptide recognition and binding and is sometimes called a scaffolding protein. The D protein is an ATP-dependent cyclodehydratase that catalyzes the cyclodehydration reaction, resulting in formation of an azoline ring. This occurs by direct activation of the amide backbone carbonyl with ATP, resulting in stoichiometric ATP consumption. The C and D proteins are occasionally present as a single, fused protein, as is the case for trunkamide biosynthesis. The B protein is a flavin mononucleotide (FMN)-dependent dehydrogenase which oxidizes certain azoline rings into azoles.
The B protein is typically referred to as the dehydrogenase; the C and D proteins together form the cyclodehydratase, although the D protein alone performs the cyclodehydration reaction. Early work on microcin B17 adopted a different nomenclature for these proteins, but a recent consensus has been adopted by the field as described above.
Cyanobactins
Cyanobactin biosynthesis requires proteolytic cleavage of both N-terminal and C-terminal portions of the precursor peptide. The defining proteins are thus an N-terminal protease, referred to as the A protein, and a C-terminal protease, referred to as the G protein. The G protein is also responsible for macrocyclization.
For cyanobactins, the precursor peptide is referred to as the E peptide. Minimally, the E peptide requires a leader peptide region, a core (structural) region, and both N-terminal and C-terminal protease recognition sequences. In contrast to most RiPPs, for which a single precursor peptide encodes a single natural product via a lone core peptide, cyanobactin E peptides can contain multiple core regions; multiple E peptides can even be present in a single gene cluster.
Many cyanobactins also undergo heterocyclization by a heterocyclase (referred to as the D protein), installing oxazoline or thiazoline moieties from Ser/Thr/Cys residues prior to the action of the A and G proteases. The heterocyclase is an ATP-dependent YcaO homologue that behaves biochemically in the same manner as YcaO-domain cyclodehydratases in thiopeptide and linear azol(in)e-containing peptide (LAP) biosynthesis (described above).
A common modification is prenylation of hydroxyl groups by an F protein prenyltransferase. Oxidation of azoline heterocycles to azoles can also be accomplished by an oxidase domain located on the G protein. Unusual for ribosomal peptides, cyanobactins can include D-amino acids; these can occur adjacent to azole or azoline residues. The functions of some proteins found commonly in cyanobactin biosynthetic gene clusters, the B and C proteins, are unknown.
Thiopeptides
Thiopeptide biosynthesis involves particularly extensive modification of the core peptide scaffold. Indeed, due to the highly complex structures of thiopeptides, it was commonly thought that these natural products were nonribosomal peptides. Recognition of the ribosomal origin of these molecules came in 2009 with the independent discovery of the gene clusters for several thiopeptides.
The standard nomenclature for thiopeptide biosynthetic proteins follows that of the thiomuracin gene cluster. In addition to the precursor peptide, referred to as the A peptide, thiopeptide biosynthesis requires at least six genes. These include lanthipeptide-like dehydratases, designated the B and C proteins, which install dehydroalanine and dehydrobutyrine moieties by dehydrating Ser/Thr precursor residues. Azole and azoline synthesis is effected by the E protein, the dehydrogenase, and the G protein, the cyclodehydratase. The nitrogen-containing heterocycle is installed by the D protein cyclase via a putative [4+2] cycloaddition of dehydroalanine moieties to form the characteristic macrocycle. The F protein is responsible for binding of the leader peptide.
Thiopeptide biosynthesis is biochemically similar to that of cyanobactins, lanthipeptides, and linear azol(in)e-containing peptides (LAPs). As with cyanobactins and LAPs, azole and azoline synthesis occurs via the action of an ATP-dependent YcaO-domain cyclodehydratase. In contrast to LAPs, where cyclodehydration occurs via the action of two distinct proteins responsible for leader peptide binding and cyclodehydrative catalysis, these are fused into a single protein (G protein) in cyanobactin and thiopeptide biosynthesis. However, in thiopeptides, an additional protein, designated the Ocin-ThiF-like protein (F protein) is necessary for leader peptide recognition and potentially recruiting other biosynthetic enzymes.
Lasso peptides
Lasso peptide biosynthesis requires at least three genes, referred to as the A, B, and C proteins. The A gene encodes the precursor peptide, which is modified by the B and C proteins into the mature natural product. The B protein is an adenosine triphosphate-dependent cysteine protease that cleaves the leader region from the precursor peptide. The C protein displays homology to asparagine synthetase and is thought to activate the carboxylic acid side chain of a glutamate or aspartate residue via adenylylation. The N-terminal amine formed by the B protein (protease) then reacts with this activated side chain to form the macrocycle-forming isopeptide bond. The exact steps and reaction intermediates in lasso peptide biosynthesis remain unknown due to experimental difficulties associated with the proteins. Commonly, the B protein is referred to as the lasso protease, and the C protein is referred to as the lasso cyclase.
Some lasso peptide biosynthetic gene clusters also require an additional protein of unknown function for biosynthesis. Additionally, lasso peptide gene clusters usually include an ABC transporter (D protein) or an isopeptidase, although these are not strictly required for lasso peptide biosynthesis and are sometimes absent. No X-ray crystal structure is yet known for any lasso peptide biosynthetic protein.
The biosynthesis of lasso peptides is particularly interesting due to the inaccessibility of the threaded-lasso topology to chemical peptide synthesis.
See also
Nonribosomal peptide
References
Biosynthesis
Molecular biology
Enzymes
Peptides | Ribosomally synthesized and post-translationally modified peptides | [
"Chemistry",
"Biology"
] | 6,664 | [
"Biomolecules by chemical classification",
"Peptides",
"Biosynthesis",
"Chemical synthesis",
"Molecular biology",
"Biochemistry",
"Metabolism"
] |
42,806,211 | https://en.wikipedia.org/wiki/Conway%20criterion | In the mathematical theory of tessellations, the Conway criterion, named for the English mathematician John Horton Conway, is a sufficient rule for when a prototile will tile the plane. It consists of the following requirements: The tile must be a closed topological disk with six consecutive points A, B, C, D, E, and F on the boundary such that:
the boundary part from A to B is congruent to the boundary part from E to D by a translation T where T(A) = E and T(B) = D.
each of the boundary parts BC, CD, EF, and FA is centrosymmetric—that is, each one is congruent to itself when rotated by 180 degrees around its midpoint.
some of the six points may coincide but at least three of them must be distinct.
Any prototile satisfying Conway's criterion admits a periodic tiling of the plane—and does so using only 180-degree rotations. The Conway criterion is a sufficient condition to prove that a prototile tiles the plane but not a necessary one. There are tiles that fail the criterion and still tile the plane.
Every Conway tile is foldable into either an isotetrahedron or a rectangle dihedron and conversely, every net of an isotetrahedron or rectangle dihedron is a Conway tile.
History
The Conway criterion applies to any shape that is a closed disk—if the boundary of such a shape satisfies the criterion, then it will tile the plane. Although the graphic artist M.C. Escher never articulated the criterion, he discovered it in the mid 1920s. One of his earliest tessellations, later numbered 1 by him, illustrates his understanding of the conditions in the criterion. Six of his earliest tessellations all satisfy the criterion. In 1963 the German mathematician Heinrich Heesch described the five types of tiles that satisfy the criterion. He shows each type with notation that identifies the edges of a tile as one travels around the boundary: CCC, CCCC, TCTC, TCTCC, TCCTCC, where C means a centrosymmetric edge, and T means a translated edge.
Conway was likely inspired by Martin Gardner's July 1975 column in Scientific American that discussed which convex polygons can tile the plane. In August 1975, Gardner revealed that Conway had discovered his criterion while trying to find an efficient way to determine which of the 108 heptominoes tile the plane.
Examples
In its simplest form, the criterion simply states that any hexagon with a pair of opposite sides that are parallel and congruent will tessellate the plane. In Gardner's article, this is called a type 1 hexagon. This is also true of parallelograms. But the translations that match the opposite edges of these tiles are the composition of two 180° rotations—about the midpoints of two adjacent edges in the case of a hexagonal parallelogon, and about the midpoint of an edge and one of its vertices in the case of a parallelogram. When a tile that satisfies the Conway Criterion is rotated 180° about the midpoint of a centrosymmetric edge, it creates either a generalized parallelogram or a generalized hexagonal parallelogon (these have opposite edges congruent and parallel), so the doubled tile can tile the plane by translations. The translations are the composition of 180° rotations just as in the case of the straight-edge hexagonal parallelogon or parallelograms.
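In the hexagon case just described, the criterion reduces to a purely geometric test on the side vectors. The following sketch checks, for a hexagon given by its six vertices, whether some pair of opposite sides is congruent and parallel (a "type 1 hexagon"); the example coordinates are made up for illustration.

```python
# Check the simplest case of the Conway criterion mentioned above: a hexagon
# with one pair of opposite sides that are parallel and congruent. Vertices are
# listed in boundary order; side i runs from vertex i to vertex i+1.

def has_parallel_congruent_opposite_sides(vertices):
    """Return True if some side and its opposite side are congruent and parallel
    (their vectors cancel when the boundary is traversed in one direction)."""
    n = len(vertices)                      # expected: 6
    def side(i):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        return (x2 - x1, y2 - y1)
    for i in range(n // 2):
        v, w = side(i), side(i + n // 2)
        if abs(v[0] + w[0]) < 1e-9 and abs(v[1] + w[1]) < 1e-9:
            return True
    return False

hexagon = [(0, 0), (3, 0), (4, 2), (3, 4), (0, 4), (-1, 2)]   # illustrative hexagon
print(has_parallel_congruent_opposite_sides(hexagon))          # True: it tiles the plane
```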
The Conway criterion is surprisingly powerful—especially when applied to polyforms. With the exception of four heptominoes, all polyominoes up through order 7 either satisfy the Conway criterion or two copies can form a patch which satisfies the criterion.
References
External links
Conway’s Magical Pen An online app where you can create your own original Conway criterion tiles and their tessellations.
Tessellation
John Horton Conway | Conway criterion | [
"Physics",
"Mathematics"
] | 812 | [
"Tessellation",
"Planes (geometry)",
"Euclidean plane geometry",
"Symmetry"
] |
42,809,646 | https://en.wikipedia.org/wiki/Radiophysical%20Research%20Institute | The Radiophysical Research Institute (NIRFI), based in Nizhny Novgorod, Russia, is a research institute that conducts basic and applied research in the field of radiophysics, radio astronomy, cosmology and radio engineering. It is also known for its work in solar physics, sun-earth physics as well as the related geophysics. It also does outreach for the Russian education system. It was formed in 1956 as the Radiophysical Research Institute of the (Soviet) Ministry of Education and Science.
NIRFI projects
Sura Ionospheric Heating Facility
Zimenkovsky radio-astronomical observatory
RT-14 radio telescope at the NIRFI Staraya Pustyn facility, plus two RT-7 telescopes
References
Astrophysics
Radio astronomy
Astronomy in Russia
History of science and technology in Russia
Physics research institutes
Research institutes in Russia
Research institutes in the Soviet Union
1956 establishments in the Soviet Union
Astronomy in the Soviet Union
Research institutes established in 1956 | Radiophysical Research Institute | [
"Physics",
"Astronomy"
] | 193 | [
"Radio astronomy",
"Astronomical sub-disciplines",
"Astrophysics"
] |
42,810,674 | https://en.wikipedia.org/wiki/In%20vitro%20to%20in%20vivo%20extrapolation | In vitro to in vivo extrapolation (IVIVE) refers to the qualitative or quantitative transposition of experimental results or observations made in vitro to predict phenomena in vivo, biological organisms.
The problem of transposing in vitro results is particularly acute in areas such as toxicology where animal experiments are being phased out and are increasingly being replaced by alternative tests.
Results obtained from in vitro experiments cannot often be directly applied to predict biological responses of organisms to chemical exposure in vivo. Therefore, it is extremely important to build a consistent and reliable in vitro to in vivo extrapolation method.
Two solutions are now commonly accepted:
(1) Increasing the complexity of in vitro systems where multiple cells can interact with each other in order to recapitulate cell-cell interactions present in tissues (as in "human on chip" systems).
(2) Using mathematical modeling to numerically simulate the behavior of a complex system, whereby in vitro data provides the parameter values for developing a model.
The two approaches can be applied simultaneously, allowing in vitro systems to provide adequate data for the development of mathematical models. To comply with the push for the development of alternative testing methods, increasingly sophisticated in vitro experiments now collect numerous, complex, and challenging data that can be integrated into mathematical models.
Pharmacology
IVIVE in pharmacology can be used to assess pharmacokinetics (PK) or pharmacodynamics (PD).
Since a biological perturbation depends on the concentration of the toxicant or candidate drug (parent molecule or metabolites) at the target site as well as on the exposure duration, tissue and organ effects observed in vivo can be either very different from, or similar to, those observed in vitro.
Therefore, extrapolation of adverse effects observed in vitro is incorporated into a quantitative in vivo PK model. It is generally accepted that physiologically based PK (PBPK) models, which describe the absorption, distribution, metabolism, and excretion of a given chemical, are central to in vitro - in vivo extrapolations.
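As a concrete, drastically simplified example of the kind of kinetic model involved, the sketch below implements a classic one-compartment model with first-order absorption and elimination; all parameter values are invented for illustration, and a real PBPK model would contain many linked tissue compartments.

```python
# Minimal sketch of a one-compartment PK model with first-order oral absorption
# and elimination (Bateman equation). Parameter values are illustrative only.

import math

def plasma_concentration(t_h, dose_mg=100.0, F=0.8, Vd_L=40.0,
                         ka_per_h=1.0, ke_per_h=0.2):
    """Plasma concentration (mg/L) at time t_h hours after a single oral dose."""
    if abs(ka_per_h - ke_per_h) < 1e-12:
        raise ValueError("ka and ke must differ in this closed-form solution")
    prefactor = F * dose_mg * ka_per_h / (Vd_L * (ka_per_h - ke_per_h))
    return prefactor * (math.exp(-ke_per_h * t_h) - math.exp(-ka_per_h * t_h))

for t in (1, 2, 4, 8, 24):
    print(f"t = {t:2d} h  C = {plasma_concentration(t):.3f} mg/L")
```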
In the case of early effects or those without inter-cellular communications, it is assumed that the same cellular exposure concentration cause the same effects, both experimentally and quantitatively, in vitro and in vivo. In these conditions, it is enough to (1) develop a simple pharmacodynamics model of the dose–response relationship observed in vitro and (2) transpose it without changes to predict in vivo effects.
However, cells in culture do not perfectly mimic cells in a complete organism. To solve that extrapolation problem, statistical models enriched with mechanistic information are needed, or mechanistic systems-biology models of the cell response can be used.
Those models are characterized by a hierarchical structure, spanning molecular pathways, whole-cell responses, cell-to-cell communication, tissue responses, inter-tissue communication, and organ function.
References
Quignot N., Hamon J., Bois F., 2014, Extrapolating in vitro results to predict human toxicity, in In Vitro Toxicology Systems, Bal-Price A., Jennings P., Eds, Methods in Pharmacology and Toxicology series, Springer Science, New York, USA, p. 531-550
Latin biological phrases
Alternatives to animal testing | In vitro to in vivo extrapolation | [
"Chemistry",
"Biology"
] | 669 | [
"Latin biological phrases",
"Animal testing",
"Alternatives to animal testing"
] |
62,817,045 | https://en.wikipedia.org/wiki/Zinc%20oxide%20nanostructure | Zinc oxide (ZnO) nanostructures are structures with at least one dimension on the nanometre scale, composed predominantly of zinc oxide. They may be combined with other composite substances to change the chemistry, structure or function of the nanostructures in order to be used in various technologies. Many different nanostructures can be synthesised from ZnO using relatively inexpensive and simple procedures. ZnO is a semiconductor material with a wide band gap energy of 3.3eV and has the potential to be widely used on the nanoscale. ZnO nanostructures have found uses in environmental, technological and biomedical purposes including ultrafast optical functions, dye-sensitised solar cells, lithium-ion batteries, biosensors, nanolasers and supercapacitors. Research is ongoing to synthesise more productive and successful nanostructures from ZnO and other composites. ZnO nanostructures is a rapidly growing research field, with over 5000 papers published during 2014-2019.
Synthesis
ZnO creates one of the most diverse range of nanostructures, and there is a great amount of research on different synthesis routes of various ZnO nanostructures. The most common methods to synthesise ZnO structures is using chemical vapor deposition (CVD), which is best used to form nanowires and comb or tree-like structures.
Chemical vapor deposition (CVD)
In vapor deposition processes, zinc and oxygen are transported in gaseous form and react with each other, creating ZnO nanostructures. Other vapor molecules or solid and liquid catalysts can also be involved in the reaction, which affect the properties of the resultant nanostructure. To directly create ZnO nanostructures, one can decompose zinc oxide at high temperatures where it splits into zinc and oxygen ions and when cooled it forms various nanostructures, including complex structures such as nanobelts and nanorings. Alternatively, zinc powder can be transported in oxygen vapor, with which it reacts to form nanostructures. Other vapours such as nitrous oxide or carbon oxides can be used by themselves or in combination. These methods are known as vapor-solid (VS) processes due to the states of their reactants. VS processes can create a variety of ZnO nanostructures but their morphology and properties are highly dependent on the reactants and reaction conditions such as the temperature and vapor partial pressures.
Vapor deposition processes can also use catalysts to assist the growth of nanostructures. These are known as vapor-liquid-solid (VLS) processes, and use a catalytic liquid alloy phase as an extra step in nanostructure synthesis to accelerate growth. The liquid alloy, which includes zinc, is attached to nucleated seeds made usually of gold or silica. The alloy absorbs the oxygen vapor and saturates, facilitating a chemical reaction between zinc and oxygen. The nanostructure develops as the ZnO solidifies and grows outwards from the gold seed. This reaction can be highly controlled to produce more complex nanostructures by modifying the size and arrangement of gold seeds, and of the alloys and vapor constituents.
Aqueous solution growth
A large variety of ZnO nanostructures can also be synthesised by growth in an aqueous solution, which is desirable due to its simplicity and low processing temperature. A ZnO seed layer is used to begin uniform growth and to ensure nanowires are oriented. A solution of catalysts and molecules containing zinc and oxygen are reacted and nanostructures grow from the seed layer. An example of such a reaction involves hydrolysing Zn(NO3)2 (zinc nitrate) and the decomposition of hexamethylenetetramine (HMT) to form ZnO. Altering the growth solution, its concentration and temperature, and the structure of the seed layer can change the morphology of the synthesised nanostructures. Nanorods, aligned nanowire arrays, flower-like and disc like nanowires and nanobelt arrays, along with other nanostructures, can all be created in aqueous solutions by varying the growth solution.
Electrodeposition
Another method to synthesise ZnO nanostructures is electrodeposition, which uses electric current to facilitate chemical reactions and deposition on electrodes. Its low temperature and ability to create precise thickness structures makes it a cost-effective and environmentally friendly method. Structured nanocolumnar crystals, porous films, thin films and aligned wires have been synthesised in this way. The quality and size of these structures depends on substrates, current density, deposition time and temperature. The bandgap energy is also dependent on these parameters, since it is dependent not only on the material but also its size due to the nanoscale effect on the band structure.
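The size dependence of the band gap mentioned above is often estimated with the Brus effective-mass model. The sketch below applies it to ZnO; the effective masses and dielectric constant are typical literature values rather than definitive ones, so the numbers should be read as rough trends only.

```python
# Rough sketch of why the band gap becomes size-dependent at the nanoscale,
# using the Brus effective-mass approximation. Material parameters below are
# assumed typical values for ZnO, not definitive ones.

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M0   = 9.1093837015e-31  # electron rest mass, kg
E    = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def zno_gap_ev(radius_nm, eg_bulk=3.3, me=0.24, mh=0.59, eps_r=8.5):
    """Approximate band gap (eV) of a ZnO particle of radius radius_nm (Brus model)."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) * (1 / (me * M0) + 1 / (mh * M0))
    coulomb = 1.8 * E**2 / (4 * math.pi * EPS0 * eps_r * r)
    return eg_bulk + (confinement - coulomb) / E

for r in (1.5, 3.0, 10.0):
    print(f"R = {r:4.1f} nm  Eg ~ {zno_gap_ev(r):.2f} eV")
```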
Defects and Doping
ZnO has a rich defect and dopant chemistry that can significantly alter properties and behaviour of the material. Doping ZnO nanostructures with other elements and molecules leads to a variety of material characteristics, because the addition or vacancy of atoms changes the energy levels in the band gap. Native defects due to oxygen and zinc vacancies or zinc interstitials create its n-type semiconductor properties, but the behaviour is not fully understood. Carriers created by doping have been found to exhibit a strong dominance over native defects. Nanostructures contain small length scales, and this results in a large surface to volume ratio. Surface defects have hence been the primary focus of research into defects of ZnO nanostructures. Deep level emissions also occur, affecting material characteristics.
ZnO can occupy multiple types of lattices, but is often found in a hexagonal wurtzite structure. In this lattice all of the octahedral sites are empty, hence there is space for intrinsic defects, Zn interstitials, and also external dopants to occupy gaps in the lattice, even when the lattice is at a nanoscale. Zn interstitials occur when extra zinc atoms are located inside the crystal ZnO lattice. They occur naturally but their concentration can be increased by using Zn vapor rich synthesis conditions. Oxygen vacancies are common defects in metal oxides where an oxygen atom is left out of the crystal structure. Both oxygen vacancies and Zn interstitials increase the number of electron charge carriers, thus becoming an n-type semiconductor. Since these defects occur naturally as a by-product of the synthesis process, it is difficult to make p-type ZnO nanostructures.
Defects and dopants are usually introduced during the synthesis of the ZnO nanostructure, either by controlling their formation or accidentally obtained during the growing process through contamination. Since it is difficult to control these processes, defects occur naturally. Dopants can diffuse into the nanostructure during synthesis. Alternatively, the nanostructures can be treated after synthesis such as through plasma injection or exposure to gases. Unwanted dopants and defects can also be manipulated so that they are removed or passivated. Crudely, the region of the nanostructure can be fully removed, such as cutting off the surface layer of a nanowire. Oxygen vacancies can be filled using plasma treatment, where an oxygen containing plasma inserts oxygen back into the lattice. At temperatures where the lattice is mobile, oxygen molecules and gaps can be moved using electric fields to change the nature of the material.
Defects and dopants are used in most ZnO nanostructure applications. Indeed, the defects in ZnO enable a variety of semiconductor properties with different band gaps. By combining ZnO with dopants, a variety of electrical and material characteristics can be achieved. For example, optical properties of ZnO can change through defects and dopants. Ferromagnetic properties can be introduced into ZnO nanostructures through doping with transition metal elements. This creates magnetic semiconductors, which is a focus of spintronics.
Application
ZnO nanostructures can be used for many different applications. Here are a few examples.
Dye Sensitised Solar Cells
Dye sensitised solar cells (DSSCs) are a type of thin film solar cell that uses a liquid dye to absorb sunlight. Currently, TiO2 (titanium dioxide) is the most widely used photoanode material for DSSCs. However, ZnO is considered a good candidate photoanode material because its nanostructure synthesis is easy to control, it has better electron transport properties, and it allows organic materials to be used as hole transporters, unlike TiO2. Researchers have found that the morphology of the ZnO nanostructure affects solar cell performance. There are also disadvantages to using ZnO nanostructures, such as a so-called voltage leakage, which needs more investigation.
Batteries and supercapacitors
Rechargeable lithium-ion batteries (LIBs) are currently the most common power source since they produce high power and have a high energy density. The use of metal oxides as anodes has largely mitigated the limitations of these batteries, and ZnO is particularly seen as an up-and-coming potential anode. This is due to its low toxicity and cost, and its high theoretical capacity (978 mAh g−1).
ZnO experiences volume expansion during processes resulting in a loss of electrical disconnection, decreasing capacity. A solution may be to dope with different materials and to develop on the nanoscale with nanostructures, such as porous surfaces, that allow for volume changes during the chemical process. Alternatively, lithium storage components can be mixed in with the ZnO nanostructures to create a more stable capacity. Research has been successful in synthesising such composite ZnO nanostructures with carbon, graphite, and other metal oxides.
Another commonly used energy storage appliance is the supercapacitor (SC). SCs are mostly used in electric vehicles and as backup power systems. They are known for being environmentally friendly and may replace currently used energy storage devices, owing to their greater stability, power density and overall performance. Because of its remarkable energy density of 650 A h kg−1 and electrical conductivity of 230 S cm−1, ZnO is recognized as a great potential electrode material. Nonetheless it has poor electrical conductivity, and its small surface area makes for a restricted capacity. Just as for the batteries, combinations of carbon structures, graphene and metal oxides with ZnO nanostructures have improved the capacitance of these materials. A composite with a ZnO base not only has better power density and energy density, but is also more cost-effective and eco-friendly.
Biosensors and biomedical
It has already been discovered that ZnO nanostructures are able to bind biological substances. Recent research shows that because of this trait and because of its surface selectivity, ZnO is a good candidate for a biosensor. It can naturally form anisotropic nanostructures that are used to deliver drugs. ZnO based biosensors can also help in diagnosing the early stages of cancer. There is ongoing research to see if ZnO nanostructures can be used for bioimaging. It has so far only been tested on mice and shows positive results. In addition, ZnO nanomaterials are already used in cosmetic products, like face creams and sun cream.
It is, however, not yet clear what the effect of ZnO nanostructures is on human cells and the environment. Since used ZnO biosensors will eventually dissolve and release Zn ions, they may be absorbed by the cells and the local effect of this is not yet known. Nanomaterials in cosmetics will eventually be washed off and released in the environment. Due to these unknown risks, there needs to be a lot more research before ZnO can be safely applied in the biomedical field.
References
Zinc oxide
Nanomaterials | Zinc oxide nanostructure | [
"Materials_science"
] | 2,490 | [
"Nanotechnology",
"Nanomaterials"
] |
62,817,424 | https://en.wikipedia.org/wiki/Hemispherical%20electron%20energy%20analyzer | A hemispherical electron energy analyzer or hemispherical deflection analyzer is a type of electron energy spectrometer generally used for applications where high energy resolution is needed—different varieties of electron spectroscopy such as angle-resolved photoemission spectroscopy (ARPES), X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES) or in imaging applications such as photoemission electron microscopy (PEEM) and low-energy electron microscopy (LEEM).
It consists of two concentric conductive hemispheres that serve as electrodes that bend the trajectories of the electrons entering a narrow slit at one end so that their final radii depend on their kinetic energy. The analyzer, therefore, provides a mapping from kinetic energies to positions on a detector.
Function
An ideal hemispherical analyzer consists of two concentric hemispherical electrodes (inner and outer hemispheres) of radii $R_1$ and $R_2$ held at proper voltages. In such a system, the electrons are linearly dispersed, depending on their kinetic energy, along the direction connecting the entrance and the exit slit, while the electrons with the same energy are first-order focused.
When two voltages, $V_1$ and $V_2$, are applied to the inner and outer hemispheres, respectively, the electric potential in the region between the two electrodes follows from the Laplace equation:
$$V(r) = \frac{V_1 R_1\,(r - R_2) + V_2 R_2\,(R_1 - r)}{r\,(R_1 - R_2)}$$
The electric field, pointing radially from the center of the hemispheres out, has the familiar planetary motion ($\propto 1/r^2$) form
$$E(r) = \frac{R_1 R_2}{r^2}\,\frac{V_1 - V_2}{R_2 - R_1}$$
The voltages are set in such a way that the electrons with kinetic energy equal to the so-called pass energy $E_p$ follow a circular trajectory of radius $R_0 = \tfrac{1}{2}(R_1 + R_2)$. The centripetal force along the path is imposed by the electric field $E(R_0)$. With this in mind,
$$e\,E(R_0) = \frac{m v^2}{R_0} = \frac{2 E_p}{R_0}$$
The potential difference between the two hemispheres needs to be
$$V_1 - V_2 = \frac{E_p}{e}\left(\frac{R_2}{R_1} - \frac{R_1}{R_2}\right).$$
A single pointlike detector at radius on the other side of the hemispheres will register only the electrons of a single kinetic energy. The detection can, however, be parallelized because of nearly linear dependence of the final radii on the kinetic energy. In the past, several discrete electron detectors (channeltrons) were used, but now microchannel plates with phosphorescent screens and camera detection prevail.
In general, these trajectories are described in polar coordinates $r(\varphi)$ in the plane of the great circle, for electrons impinging at an angle $\alpha$ with respect to the normal to the entrance, and for initial radii $r_0 = R_0 + \delta r_0$ that account for the finite aperture and slit widths (typically 0.1 to 5 mm). Because the field is of the attractive inverse-square type, the trajectories are the familiar conic-section orbits of planetary motion.
As can be seen in the pictures of calculated electron trajectories, the finite slit width maps directly into energy detection channels (thus confusing the real energy spread with the beam width). The angular spread, while also worsening the energy resolution, shows some focusing as the equal negative and positive deviations map to the same final spot.
When these deviations from the central trajectory are expressed in terms of the small relative parameters $\delta r_0/R_0$ and $(E_k - E_p)/E_p$, and having in mind that $\alpha$ itself is small (of the order of 1°), the final radius of the electron's trajectory after the 180° deflection, $r_\pi$, can be expanded to lowest order: it depends linearly on the entrance offset and on the relative energy deviation, and quadratically on the entrance angle $\alpha$.
If electrons of one fixed energy were entering the analyzer through a slit that is $w$ wide, they would be imaged on the other end of the analyzer as a spot $w$ wide. If their maximal angular spread at the entrance is $\alpha$, an additional width proportional to $R_0\alpha^2$ is acquired, and a single energy channel is smeared over a correspondingly larger region at the detector side. But there, this additional width is interpreted as an energy spread, because to first order the radial coordinate maps linearly onto kinetic energy. It follows that the instrumental energy resolution, given as a function of the width of the slit, $w$, and the maximal incidence angle, $\alpha$, of the incoming photoelectrons, which is itself dependent on the width of the aperture and slit, is
$$\Delta E = E_p\left(\frac{w}{2R_0} + \frac{\alpha^2}{4}\right).$$
The analyzer resolution improves with increasing $R_0$. However, technical problems related to the size of the analyzer put a limit on its actual value, and most analyzers have it in the range of 100–200 mm. Lower pass energies also improve the resolution, but then the electron transmission probability is reduced, and the signal-to-noise ratio deteriorates accordingly.
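To make the scaling concrete, the short script below evaluates the hemisphere potential difference and the base energy resolution from the expressions above for one illustrative set of parameters; the chosen radii, pass energy, slit width and acceptance angle are not those of any specific commercial analyzer.

```python
# Numerical illustration of the analyzer expressions above for one example
# geometry; all parameter values are illustrative assumptions.

import math

R1, R2 = 0.075, 0.125            # inner/outer hemisphere radii in metres
R0 = 0.5 * (R1 + R2)             # mean (pass) radius
E_pass = 10.0                    # pass energy in eV
w = 0.5e-3                       # entrance slit width in metres
alpha = math.radians(2.0)        # maximal entrance angle

delta_V = E_pass * (R2 / R1 - R1 / R2)            # hemisphere potential difference in volts
delta_E = E_pass * (w / (2 * R0) + alpha**2 / 4)  # base energy resolution in eV

print(f"R0 = {R0 * 1e3:.0f} mm, V1 - V2 = {delta_V:.2f} V, dE = {delta_E * 1e3:.1f} meV")
```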
The electrostatic lenses in front of the analyzer have two main purposes: they collect and focus the incoming photoelectrons into the entrance slit of the analyzer, and they decelerate the electrons to the range of kinetic energies around the pass energy $E_p$, in order to increase the resolution.
When acquiring spectra in swept (or scanning) mode, the voltages of the two hemispheres – and hence the pass energy – are held fixed; at the same time, the voltages applied to the electrostatic lenses are swept in such a way that each channel counts electrons with the selected kinetic energy for the selected amount of time. In order to reduce the acquisition time per spectrum, the so-called snapshot (or fixed) mode can be used. This mode exploits the relation between the kinetic energy of a photoelectron and its position inside the detector. If the detector energy range is wide enough, and if the photoemission signal collected from all the channels is sufficiently strong, the photoemission spectrum can be obtained in one single shot from the image of the detector.
See also
Mass spectrometry
References
Electron spectroscopy | Hemispherical electron energy analyzer | [
"Physics",
"Chemistry"
] | 1,079 | [
"Electron spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
54,286,399 | https://en.wikipedia.org/wiki/Curie%27s%20principle | Curie's principle, or Curie's symmetry principle, is a maxim about cause and effect formulated by Pierre Curie in 1894:
It built on ideas of Franz Ernst Neumann and Bernhard Minnigerode, and is thus sometimes known as the Neumann–Minnigerode–Curie principle.
References
Group theory
Concepts in physics
Symmetry | Curie's principle | [
"Physics",
"Mathematics"
] | 76 | [
"Group theory",
"Fields of abstract algebra",
"nan",
"Geometry",
"Symmetry"
] |
54,287,671 | https://en.wikipedia.org/wiki/Plasmon%20coupling | Plasmon coupling is a phenomenon that occurs when two or more plasmonic particles approach each other to a distance below approximately one diameter's length. Upon the occurrence of plasmon coupling, the resonance of individual particles start to hybridize, and their resonance spectrum peak wavelength will shift (either blueshift or redshift), depending on how surface charge density distributes over the coupled particles. At a single particle's resonance wavelength, the surface charge densities of close particles can either be out of phase or in phase, causing repulsion or attraction and thus leading to increase (blueshift) or decrease (redshift) of hybridized mode energy. The magnitude of the shift, which can be the measure of plasmon coupling, is dependent on the interparticle gap as well as particles geometry and plasmonic resonances supported by individual particles. A larger redshift is usually associated with smaller interparticle gap and larger cluster size.
Plasmon coupling can also cause the electric field in the interparticle gap to be boosted by several orders of magnitude, far exceeding the field enhancement for a single plasmonic nanoparticle. Many sensing applications such as surface enhanced Raman spectroscopy (SERS) utilize the plasmon coupling between nanoparticles to achieve ultralow detection limits.
Plasmon ruler
Plasmon ruler refers to a dimer of two identical plasmonic nanospheres linked together through a polymer, typically DNA or RNA. Based on the universal scaling law relating the spectral shift to the interparticle separation, nanometer-scale distances can be monitored through the color shift of the dimer resonance peak. Plasmon rulers are typically used to monitor distance fluctuations below the diffraction limit, between tens of nanometers and a few nanometers.
Plasmon coupling microscopy
Plasmon coupling microscopy is a ratiometric widefield imaging approach that allows monitoring of multiple plasmon rulers with high temporal resolution. The entire field of view is imaged simultaneously on two wavelength channels, which corresponds to the red and blue flank of the plasmon ruler resonance. The spectral information of an individual plasmon ruler is expressed in the intensity distribution on the two monitored channels, quantified as R=(I1-I2)/(I1+I2). Each R value corresponds to a certain nanometer scale distance which can be calculated using computer simulation or generated from experiments.
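The ratiometric readout and the ruler calibration can be summarised in a few lines. In the sketch below, the conversion of channel intensities into R follows the definition given above, while the exponential scaling-law constants, the reference wavelength and the particle diameter are illustrative assumptions; in practice the calibration curve is obtained from electromagnetic simulations or control experiments.

```python
# Sketch of the ratiometric readout described above, plus a conversion of a
# spectral shift to an interparticle gap via an assumed exponential
# "plasmon ruler" law, d_lambda/lambda0 = A * exp(-(s/D)/tau).

import math

def ratiometric_signal(i1, i2):
    """R = (I1 - I2) / (I1 + I2) from the two wavelength channels."""
    return (i1 - i2) / (i1 + i2)

def gap_from_shift(delta_lambda_nm, lambda0_nm=540.0, diameter_nm=40.0,
                   A=0.18, tau=0.23):
    """Invert the assumed scaling law for the gap s (nm), given the red-shift."""
    rel_shift = delta_lambda_nm / lambda0_nm
    return -tau * math.log(rel_shift / A) * diameter_nm

print(ratiometric_signal(1200.0, 800.0))                        # example channel intensities -> 0.2
print(f"{gap_from_shift(10.0):.1f} nm gap for a 10 nm red-shift")
```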
References
Plasmonics | Plasmon coupling | [
"Physics",
"Chemistry",
"Materials_science"
] | 517 | [
"Plasmonics",
"Surface science",
"Condensed matter physics",
"Nanotechnology",
"Solid state engineering"
] |
54,288,469 | https://en.wikipedia.org/wiki/Template-guided%20self-assembly | Template-guided self-assembly is a versatile fabrication process that can arrange various micrometer to nanometer sized particles into lithographically created template with defined patterns. The process contain the following four steps.
Create Template
The "template" can be created by either photolithography or e-beam lithography to define binding sites for various building blocks. The binding sites should reflect the footprint of the building blocks or clusters to be bound.
Surface Treatment
After film development, the created pattern is treated with charged polymers in order to “stick” the particles. Taking poly-lysine as an example, it coats the negatively charged glass surface and reverses the surface charge to positive; the surface can thus non-specifically bind negatively charged metallic nanoparticles.
Particle Assembly
To do particle assembly, the treated pattern is submerged in a small amount of an aqueous solution of particles. A few approaches can be used to improve the binding efficiency. One of them is to use the capillary force at the edge of the aqueous droplet to “push” the particles into the binding sites. If assembling multiple types of particles, the particles should be assembled in order of decreasing size. For example, if assembling both 60 nm gold nanoparticles and 40 nm silver nanoparticles, the 60 nm gold nanoparticles should be applied first because they are too big to enter binding sites tailored for 40 nm particles. Rationally designing the binding sequence as well as the binding site sizes minimizes binding errors (see the sketch below).
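The size-ordering rule can be expressed as a trivial sorting step. The toy sketch below, with invented particle and site names and sizes, shows why applying the largest particles first keeps the larger binding sites from being blocked by smaller particles.

```python
# Tiny sketch of the ordering rule described above: when several particle sizes
# share one template, applying them from largest to smallest keeps small
# particles out of the large binding sites. All names and sizes are invented.

particles = {"Au_60nm": 60, "Ag_40nm": 40, "SiO2_20nm": 20}   # diameters in nm
sites = {"site_A": 60, "site_B": 40, "site_C": 20}            # binding-site sizes in nm

assembly_order = sorted(particles, key=particles.get, reverse=True)   # largest first

occupied = {}
for particle in assembly_order:
    for site, size in sorted(sites.items(), key=lambda kv: kv[1], reverse=True):
        # a particle can only enter a free site at least as large as itself
        if site not in occupied and particles[particle] <= size:
            occupied[site] = particle
            break

print(assembly_order)   # ['Au_60nm', 'Ag_40nm', 'SiO2_20nm']
print(occupied)         # each site ends up filled by the matching particle size
```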
Remove Template
After binding of all building blocks, the template can be removed either by dissolving it in an organic solvent or by stripping it off with scotch tape.
References
Microtechnology | Template-guided self-assembly | [
"Materials_science",
"Engineering"
] | 360 | [
"Materials science",
"Microtechnology"
] |
54,290,686 | https://en.wikipedia.org/wiki/ARC%20Centre%20of%20Excellence%20in%20Future%20Low-Energy%20Electronics%20Technologies | The ARC Centre of Excellence in Future Low-Energy Electronics Technologies (or FLEET) is a collaboration of physicists, electrical engineers, chemists and material scientists from seven Australian universities developing ultra-low energy electronics aimed at reducing energy use in information technology (IT). The Centre was funded in the 2017 ARC funding round.
Aims
FLEET aims to develop a new generation of ultra-low resistance electronic devices, capitalising on Australian research in atomically thin materials, topological materials, exciton superfluids and nanofabrication.
Programmes
FLEET is pursuing three broad research themes to develop devices in which electrical current can flow without resistance:
Topological insulators: a relatively new class of materials, recognised by the 2016 Nobel Prize in Physics, that conduct electricity only along their edges, and strictly in one direction. This one-way path conducts electricity without loss of energy due to resistance. Approaches being used within FLEET to study topological materials include magnetic topological insulators and the quantum anomalous Hall effect (QAHE), topological Dirac semimetals (including oxide ‘antiperovskites’) and artificial topological systems (artificial graphene and 2D topological insulators).
Exciton superfluids: a quantum state known to achieve electrical current flow with minimal wasted dissipation of energy. FLEET aims to develop superfluid devices that operate at room temperature, without the need for expensive, energy-intensive cooling. Approaches being used within FLEET include exciton–polariton bosonic condensation in atomically thin materials, topologically-protected exciton–polariton flow, and exciton superfluids in twin-layer materials.
Light-transformed materials: a material can be temporarily forced into a new state by applying an intense light beam. FLEET aims to study the fundamental physics behind this temporary state change. Approaches being pursued in FLEET include optically-induced Floquet topological states (topological states that change with time), nonequilibrium superfluidity and creation of topological states in multi-dimensional extensions of the kicked quantum rotor.
These approaches are enabled by the following two technologies:
Atomically thin materials: FLEET seeks to find new ways of controlling the properties of two-dimensional materials via synthesis, substrates, and tuning electric and magnetic ordering.
Nanodevice fabrication: FLEET aims to work on new techniques to integrate novel atomically thin materials into high-quality device structures with suitable performance.
Participants
FLEET is an Australian initiative, headquartered at Monash University, and in conjunction with the Australian National University, the University of New South Wales, the University of Queensland, RMIT University, the University of Wollongong and Swinburne University of Technology, complemented by a group of Australian and international partners.
It is funded by the Australian Research Council and by the member universities.
FLEET's Director is Michael Fuhrer, who is an ARC Laureate Fellow in the School of Physics and Astronomy at Monash University studying two-dimensional materials (of which graphene is the most well known example), and topological insulators. Deputy Director is Alexander Hamilton at the University of New South Wales.
FLEET partners include Australian Nuclear Science and Technology Organisation, the Australian Synchrotron, California Institute of Technology, Columbia University in the City of New York, Johannes Gutenberg University at Mainz, University of Maryland Joint Quantum Institute & National Institute of Standards and Technology, Max Planck Institute of Quantum Optics, the National University of Singapore, the University of Colorado Boulder, University of Maryland Center for Nanophysics and Advanced Materials, the University of Texas at Austin, Tsinghua University at Beijing, and the University of Würzburg in Germany.
References
External links
FLEET Official Website
Research organisations in Australia
Physics organizations
Electrical engineering organizations
Chemistry organizations
Materials science organizations | ARC Centre of Excellence in Future Low-Energy Electronics Technologies | [
"Chemistry",
"Materials_science",
"Engineering"
] | 768 | [
"Materials science",
"Materials science organizations",
"nan",
"Electrical engineering organizations",
"Electrical engineering"
] |
54,294,051 | https://en.wikipedia.org/wiki/Soci%C3%A9t%C3%A9%20de%20Chimie%20Industrielle%20%28American%20Section%29 | The Société de Chimie Industrielle (American Section) is an independent learned society inspired by the creation of the Société de Chimie Industrielle in Paris in 1917. The American Section was formed on January 18, 1918, and held its first meeting on April 4, 1918.
The Société de Chimie Industrielle (American Section) hosts speakers, grants scholarships, and gives awards. It has given the International Palladium Medal roughly every second year since 1961, and helps to award the Othmer Gold Medal and the Winthrop-Sears Medal every year. The Société also hosts monthly talks, and presents scholarships to writers, educators, and historians of science.
History
One of the first societies for chemists was the Society of Chemical Industry, founded in London in 1881. This inspired a number of other groups, including the Société de Chimie Industrielle in Paris, France. The French Société was modeled on the British organization in 1917.
A number of those active in forming the French Société were elected to its first set of officers, which included industrialist Paul Kestner as president,
vice-presidents Albin Haller and Henry Louis Le Châtelier, and
Jean Gérard as general secretary.
Creation of the French Société in turn inspired creation of a related American association in New York in 1918. This was part of an effort to rebuild international connections between individuals and institutions that had been disrupted during the First World War.
René Laurent Engel encouraged the re-establishment of ties between chemists in the two countries in his position as the scientific representative in a French Mission to the United States.
Victor Grignard of the University of Nancy also encouraged the creation of an American organization. A circular appealed to the Chemists and Manufacturers of America to "extend to our French fellow chemists and manufacturers our moral and financial support and the right hand of good fellowship."
The American section of the Société de Chimie Industrielle was formed on January 18, 1918, following the presentation of the Perkin Medal by the Society of Chemical Industry (American Section) at The Chemists' Club in New York. Engel, as secretary of the parent organization, addressed the meeting. Officers of the newly created American section of the Société de Chimie Industrielle included Leo Baekeland as president, Jerome Alexander as vice-president, Charles Avery Doremus as secretary, and George Frederick Kunz as treasurer.
The first official meeting of the American section of the Société de Chimie Industrielle was held on April 4, 1918 at The Chemists' Club in New York. William H. Nichols, president of the American Chemical Society, welcomed the new organization. Frederick J. LeMaistre reported on "Conditions in the French chemical industries during 1916".
Governance
The Société de Chimie Industrielle (American Section) is now an independent organization. It has held tax status as a 501(c)(3) registered nonprofit organization since 1952. The American Section is directed by a board of officers including a president; the president of the Société de Chimie Industrielle (American Section) is James M. Weatherall.
Activities
Awards
The International Palladium Medal was instituted in 1958 and first awarded in 1961. The first recipient was Ernest-John Solvay. The medal has generally been given every two years.
The Société has also been involved in nominating and choosing the recipients of the Othmer Gold Medal and the Winthrop-Sears Medal, which are given yearly.
Events
The Société supports a program of monthly speakers featuring CEOs, government leaders, and scientists.
Scholarships
The Société funds scholarships for writers, educators, and historians who place chemistry in historical perspective and explore the influence of chemistry on everyday life.
References
External links
1918 establishments in the United States
Scientific societies based in the United States
Chemical engineering organizations | Société de Chimie Industrielle (American Section) | [
"Chemistry",
"Engineering"
] | 784 | [
"Chemical engineering",
"Chemical engineering organizations"
] |
54,295,335 | https://en.wikipedia.org/wiki/Magnadur | Magnadur is a sintered barium ferrite, specifically BaFe12O19 in an anisotropic form. It is used for making permanent magnets. The material was invented by Mullard and was initially used in particular for focussing rings on cathode-ray tubes. Magnadur magnets retain their magnetism well, and are often used in education. Magnadur can also be used in DC motors.
Physical characteristics
Remanence 0.9 T
Coercivity 110 kA/m
Maximum energy product 20 kJ/m³ at 86 kA/m
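For orientation (an illustrative calculation from the two figures above, not an additional data-sheet value), the flux density at the stated operating point follows directly from dividing the energy product by the field strength:

    B ≈ 20 kJ/m³ ÷ 86 kA/m ≈ 0.23 T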
References
Ferromagnetic materials | Magnadur | [
"Physics",
"Chemistry"
] | 123 | [
"Inorganic compounds",
"Ferromagnetic materials",
"Inorganic compound stubs",
"Materials",
"Matter"
] |
70,019,055 | https://en.wikipedia.org/wiki/Cebuano%20numerals | The Cebuano numbers are the system of number names used in Cebuano to express quantities and other information related to numbers. Cebuano has two number systems: the native system and the Spanish-derived system. The native system is mostly used for counting small numbers, basic measurement, and for other pre-existing native concepts that deals with numbers. Meanwhile, the Spanish-derived system is mainly used for concepts that only existed post-colonially such as counting large numbers, currency, solar time, and advanced mathematics.
History
Unlike other Philippine languages, the native number system of Cebuano was derived solely from the non-human forms of Proto-Austronesian numerals instead of a combination of both human and non-human numerals, such as in Tagalog and Hiligaynon. The numbers were first recorded by chronicler Antonio Pigafetta during Magellan's expedition.
Types
The native numbers are categorized into four types: cardinal, ordinal, distributive, and multiplicative (also referred to as "viceral" or "adverbial"). The multiples of ten are formed by attaching the circumfix "ka-ø-an" (e.g. kawaloan). Those that are within the 20-60 range undergo the process of metathesis and syncope (e.g. katloan, from katuloan).
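The circumfix rule above can be expressed as a short script. A minimal sketch in Python: only the forms quoted in this article (kawaloan and katloan/katuloan) are used, and the bare roots walo and tulo are standard numeral roots assumed here rather than spelled out in the text.

    # Minimal sketch of the ka-...-an circumfix for multiples of ten.
    # Roots are written without diacritics for simplicity.
    def multiple_of_ten(root):
        """Apply the circumfix ka-...-an to a numeral root."""
        return "ka" + root + "an"

    # Regular case cited above: walo ('eight') -> kawaloan ('eighty').
    assert multiple_of_ten("walo") == "kawaloan"

    # For 20-60 the regular output is further reduced by metathesis and
    # syncope, e.g. katuloan -> katloan ('thirty'), as stated in the text.
    print(multiple_of_ten("tulo"), "->", "katloan")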
Cardinal
Like other Visayan languages, cardinal numbers are linked to the noun with the ligature ka.
usá ka tawo – 'a/one person'
kaluhaan ug usá ka bulan – 'twenty-one months'
Ordinal
Ordinal numbers in Cebuano are formed using the ika- prefix, except una.
Distributive
Distributive numbers in Cebuano are formed by attaching the tag- prefix to the numerical root. Irregular words may be formed depending on the number being attached to.
Multiplicative
Multiplicative (or viceral) numbers in Cebuano are formed using the ka- prefix. The prefixes "naka-" and "maka-" may also be used to specify if the number is used in the nasugdan or pagasugdan aspect, respectively.
See also
Cebuano language
Cebuano grammar
References
Cebuano language
Numerals | Cebuano numerals | [
"Mathematics"
] | 472 | [
"Numeral systems",
"Numerals"
] |
70,019,903 | https://en.wikipedia.org/wiki/Water%20Protection%20Zone | A Water Protection Zone is a statutory regulation imposed under Schedule 11 to the Water Resources Act 1991. The power was subsequently subsumed into The Water Resources Act (Amendment) (England and Wales) Regulations 2009. The only example in the UK was applied to the River Dee in 1999 as The Water Protection Zone (River Dee Catchment) Designation Order 1999, which covers the whole of the River Dee catchment from the headwaters down to the final potable water abstraction point at Chester.
The creation of this protection zone gave powers to the then Environment Agency (now Natural Resources Wales) to monitor and control the use and storage of any potentially polluting substance brought into the catchment for any industrial or commercial operation - a controlled activity as defined by the order. All such controlled activities require a permit to be issued and the conditions of the permit are determined by a risk analysis mathematical model involving the nature of the substance, its quantity and the distance from any vulnerable drinking water intake.
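The Designation Order does not publish the model itself, so the following is only a hypothetical sketch (Python) of how a screening score could combine the three stated inputs; the function, weights and numbers are invented for illustration and are not taken from the 1999 Order.

    # Purely illustrative permit screening; none of these names or numbers
    # come from the River Dee Water Protection Zone documentation.
    def permit_risk_score(hazard_rating, quantity_tonnes, distance_km):
        """Higher score suggests stricter permit conditions (illustration only)."""
        if distance_km <= 0:
            raise ValueError("distance to the intake must be positive")
        return hazard_rating * quantity_tonnes / distance_km

    # Example: a substance with hazard rating 3, 10 tonnes stored 5 km
    # upstream of a drinking water intake.
    print(permit_risk_score(3.0, 10.0, 5.0))  # -> 6.0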
Applicants for consent are required to complete a formal application.
Following a serious degradation of the quality of the River Wye, there have been calls for a new water protection zone to be established for that river.
References
Rivers
Risk analysis
Mathematical modeling | Water Protection Zone | [
"Mathematics"
] | 236 | [
"Applied mathematics",
"Mathematical modeling"
] |
70,028,071 | https://en.wikipedia.org/wiki/Galileo%20and%20Ulysses%20Dust%20Detectors | The Galileo and Ulysses Dust Detectors are almost identical dust instruments on the Galileo and Ulysses missions. The instruments are large-area (0.1 m2 sensitive area), highly reliable impact ionization detectors of sub-micron and micron-sized dust particles. With these instruments the interplanetary dust cloud was characterized between Venus' and Jupiter's orbits and over the solar poles. A stream of interstellar dust passing through the planetary system was discovered. Close to and inside the Jupiter system, streams of nanometer-sized dust particles emitted from volcanoes on Jupiter's moon Io, as well as ejecta clouds around the Galilean moons, were discovered and characterized.
Overview
Following the first dust instruments from the Max Planck Institute for Nuclear Physics (MPIK), Heidelberg (Germany) on the HEOS 2 satellite and the Helios spacecraft, a new dust instrument was developed by a team of scientists and engineers led by Eberhard Grün to detect cosmic dust in the outer planetary system. This instrument had a 10 times larger sensitive area (0.1 m2) and employed a multiple coincidence of impact signals in order to cope with the low fluxes of cosmic dust and the hostile environment in the outer planets' magnetospheres.
The Galileo and Ulysses dust detectors use impact ionization from hypervelocity impacts of cosmic dust particles onto the hemispherical target. Electrons and ions from the impact plasma are separated by the electric field between the target and the center ion collector. Ions are partly collected by the semi-transparent grid and the center channeltron multiplier. The amplitudes, rise times, and time relations of the charge signals are measured, stored and transmitted to the ground. Using this information, noise events were separated from genuine impacts and the properties (mass and speed) of the impacting dust particles were determined. The center grid of the three grids at the entrance of the detector picks up the electric charge of the dust particle. Unfortunately, no dust charges were reliably identified by these instruments during their space operation.
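The last step, turning the recorded signals into particle mass and speed, can be sketched as follows. This is only a schematic reconstruction: impact ionization detectors are commonly calibrated so that the signal rise time maps to impact speed and the collected charge scales roughly linearly with mass and steeply with speed, but the constants and the exponent below are placeholders, not the actual Galileo/Ulysses calibration.

    # Schematic inversion of an impact ionization signal into particle
    # properties.  K and ALPHA are placeholder calibration values.
    K = 1.0e-3      # charge yield coefficient (C per kg at 1 km/s), placeholder
    ALPHA = 3.5     # assumed power-law exponent of the speed dependence

    def speed_from_rise_time(rise_time_s):
        """Placeholder mapping: faster impacts give shorter rise times."""
        return 20.0e3 * (1.0e-5 / rise_time_s)   # m/s, illustrative only

    def mass_from_charge(charge_c, speed_m_s):
        """Invert Q ~ K * m * (v / 1 km/s)**ALPHA for the particle mass."""
        return charge_c / (K * (speed_m_s / 1.0e3) ** ALPHA)

    v = speed_from_rise_time(1.0e-5)     # -> 20 km/s for this input
    print(mass_from_charge(1.0e-12, v))  # mass in kg for a 1 pC impact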
The Galileo Dust Detector was developed by the team of scientists and engineers led by Eberhard Grün at the Max Planck Institute for Nuclear Physics (MPIK), Heidelberg (Germany) and was selected in 1977 by NASA to explore the dust environment of Jupiter on board the Galileo Jupiter Orbiter. The Galileo spacecraft was a dual-spin spacecraft with its antenna pointing to Earth. The dust detector was mounted on the spinning section at an angle of 60° with respect to the spin axis. Galileo was launched in 1989 and cruised for 6 years through interplanetary space between Venus' and Jupiter's orbits before starting, in 1995, its 7-year path through the Jovian system with several fly-bys of all Galilean moons. The Galileo dust detector operated during the whole mission.
About a year after Galileo the twin instrument was selected for the out-of-ecliptic Ulysses mission. Ulysses was a spinning spacecraft with the dust detector mounted at 85° to the spin axis. Launch of Ulysses was in 1990 and the spacecraft went on a direct trajectory to Jupiter which it reached in 1992 for a swing-by maneuver which put the spacecraft on a heliocentric orbit of 80 degrees inclination. This orbit had a period of 6.2 years and a perihelion of 1.25 AU and an aphelion of 5.4 AU. Ulysses completed 2.5 orbits until the mission was ended. The Ulysses dust detector operated during the whole mission.
The initial Principal Investigator for both instruments was Eberhard Grün. In 1996 the PI-ship was handed over to Harald Krüger from Max Planck Institute for Solar System Research, Göttingen, Germany.
Major discoveries and observations
Interplanetary dust
Galileo and Ulysses traversed interplanetary space from Venus' orbit (0.7 AU) to Jupiter's orbit (~5 AU) and about 2 AU above and below the solar poles. Throughout this time the dust experiments recorded cosmic dust particles that provided an important input to a model of interplanetary dust.
Interstellar dust
After the Jupiter flyby, Ulysses identified a flow of interstellar dust sweeping through the Solar System.
Dust in the Jupiter system
After the Jupiter flyby, Ulysses detected hyper-velocity streams of nano-dust that are emitted from Jupiter and then couple to the solar magnetic field.
Dust streams from Jupiter, and their interactions with the Jovian satellite Io were detected, as well as ejecta clouds around the Galilean moons.
References
Spacecraft instruments
Scientific instruments
Space science experiments | Galileo and Ulysses Dust Detectors | [
"Technology",
"Engineering"
] | 904 | [
"Scientific instruments",
"Measuring instruments"
] |
65,682,506 | https://en.wikipedia.org/wiki/Magway%20Ltd | Magway is a UK startup noted for its e-commerce and freight delivery system that aims to transport goods in pods that fit in new and existing -diameter pipes, underground and overground, reducing road congestion and air pollution. It uses linear magnetic motors to shuttle pods, designed to accommodate a standard delivery crate (or tote), at approximately .
Founded in 2017 by Rupert Cruise, an engineer on Elon Musk's Hyperloop project, and Phill Davies, a business expert, Magway secured a £0.65 million grant in 2018, through Innovate UK’s 'Emerging and Enabling Technologies' competition, to develop an operational demonstrator. In 2019, £1.58 million was raised through crowdfunding to fund a pilot scheme, and in 2020, Magway was awarded £1.9 million from the UK Government's 'Driving the Electric Revolution Challenge', an initiative launched to coincide with the first meeting of a new Cabinet committee focused on climate change. In September 2020, Magway completed its first full loop of test track in a warehouse in Wembley.
Primarily focused on two freight routes from large consolidation centres near London (Milton Keynes, Buckinghamshire and Hatfield, Hertfordshire) into Park Royal, a west London distribution centre, future plans involve installing of track in decommissioned London gas pipelines, to deliver e-commerce goods from distribution centres direct to consumers in the capital. The design of the pipes is similar to the current underground pipe system in small tunnels that distribute water, gas, and electricity in the city. The pods are propelled by an electromagnetic wave from linear magnetic motors similar to those used in roller coasters. A proposed route that runs from Milton Keynes to London will have the capacity to transport more than 600 million parcels annually. Outside of urban areas, Magway plans to build its pipe system alongside motorways.
References
Sustainable transport
Transport systems
Linear induction motors
Vacuum systems | Magway Ltd | [
"Physics",
"Technology",
"Engineering"
] | 382 | [
"Transport systems",
"Vacuum",
"Sustainable transport",
"Transport",
"Physical systems",
"Vacuum systems",
"Matter"
] |
61,864,866 | https://en.wikipedia.org/wiki/Electrification%20and%20controls%20technology | Electrification and controls technology refers to the devices that control, service and enhance the productivity of industrial handling equipment. Controls interface with hardware such as receivers, cranes and hoists through a network in order to ensure that equipment operates safely and effectively. Almost every business, including the food, chemical, and automobile industries, uses controls. Some examples of these devices are:
Remote controls
Festooning
Drives
Motors
Conductor bars
Anti-collision devices
Weighing devices
Brakes
Resistors
Cabling
Industry definitions
Conductor bar: Insulated energized rails that safely provide power, control and data to moving equipment from a fixed source, much like electric rails on a model train.
Festoon system: A cable management system of rolling trolleys that properly support power, control and data cables to moving equipment from a fixed source.
Cable reel: A cable management device designed to spool and store electrical power, control or data cable, as the equipment moves along its path of motion.
Variable-frequency drive: A type of static controller that safely drives an electric motor by varying the frequency and voltage supplied to the motor. This device minimizes wear and tear on the mechanical system while allowing precise control and maximizing operator safety (a worked speed example follows this list).
Radio remote control: Allows an operator to control different types of moving equipment and cranes, meanwhile, providing the operator the best vantage point to the load or operation and physical position for a safe working area.
Load brake: A device used to safely stop linear or rotating motion of equipment through the use of power or friction.
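As a worked illustration of the variable-frequency drive entry above (standard induction-motor theory rather than a property of any particular product): synchronous speed in rpm equals 120 × supply frequency ÷ number of poles, so a 4-pole motor fed at 60 Hz turns at 120 × 60 ÷ 4 = 1,800 rpm, and lowering the drive output to 30 Hz roughly halves that to 900 rpm.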
References
Electrification
Control engineering | Electrification and controls technology | [
"Engineering"
] | 304 | [
"Control engineering"
] |
61,869,688 | https://en.wikipedia.org/wiki/Eva%20Regnier | Eva Dorothy Regnier (born 1971) is a decision scientist whose research concerns the interaction between human decision-making and environmental prediction. She is a professor of decision science in the Graduate School of Business and Public Policy of the Naval Postgraduate School.
Education and career
Regnier graduated from the Massachusetts Institute of Technology in 1992 with a bachelor's degree in environmental engineering science. After working from 1993 to 1996 in industry as an environmental engineer, she went to the Georgia Institute of Technology for a master's degree in operations research in 1999 and a Ph.D. in industrial engineering in 2001.
Her dissertation, Discounted Cash Flow Methods and Environmental Decisions, was supervised by Craig Tovey.
She joined the Defense Resources Management Institute of the Naval Postgraduate School in 2001, moved to the Graduate School of Business and Public Policy in 2017, and was promoted to full professor in 2019.
Regnier was president of the INFORMS Forum on Women in Operations Research and Management Science for 2011.
Contributions
Regnier has published well-cited works on volatility in energy markets and on decision-making for evacuations based on hurricane predictions. Other topics in her research include correlations between pirate activity and predicted changes in climate and weather.
Her work on hurricane evacuation was a finalist for the INFORMS Junior Faculty Forum award, and her work developing a tool to simulate the hurricane decision-making process was a finalist in the INFORMS MSOM Practice Based Research Competition. She received the INFORMS Decision Analysis Society Publication Award for her work on probability forecasting.
Selected publications
References
External links
Home page
1971 births
Living people
American industrial engineers
American women engineers
Operations researchers
Environmental engineers
MIT School of Engineering alumni
Georgia Tech alumni
Naval Postgraduate School faculty
American women academics
21st-century American women | Eva Regnier | [
"Chemistry",
"Engineering"
] | 343 | [
"Environmental engineers",
"Environmental engineering"
] |
64,210,868 | https://en.wikipedia.org/wiki/Pop-up%20bicycle%20lane | A pop-up bicycle lane (also known as a pop-up cycle path or corona cycle path) is a temporary bike lane that is used to test, pilot or trial new infrastructure to improve conditions for people riding bicycles. In the event that it is successful, interventions can be implemented permanently.
During the COVID-19 pandemic in particular, many cities set up pop-up bike lanes to quickly provide more space and safety for cyclists in poor road traffic conditions. These were usually intended as temporary cycling infrastructure for the duration of the pandemic. The purpose was primarily to provide more capacity for the rapidly increasing demand for cycling and to offer a viable alternative to settings that put people in close proximity to others, such as public transport.
The cycle paths, which are usually marked with yellow lines and construction site beacons, were usually established by redesignating the kerbside traffic lane or a previous parking lane as a cycle lane. In Berlin, the cost of one kilometre of pop-up cycle paths is around 9500 euros.
History
The term "pop-up bike lane" originated in North America, where, for example, the US city of New York City has launched a number of experiments with short-term cycling infrastructure. The COVID-19 pandemic led to the creation of more space for bicycle traffic in Colombia, initially in the capital city of Bogotá, over a total of more than one hundred kilometres of main roads. The measure was reported internationally. In Germany, pop-up cycle paths were initially set up in the Berlin district of Friedrichshain-Kreuzberg. The first pop-up cycle path in Berlin was created on 25 March 2020 at Hallesches Ufer. The pop-up cycle paths in Berlin were laid out for a limited period until 31 May 2020, with the prospect of a transfer to a permanent cycle infrastructure in accordance with the Berlin Mobility Act. However, the deadline was extended at the end of May until the end of the year.
Mexico City announced a 54-kilometre pop up lane in Av Insurgentes and Eje 4 to create a mobility alternative to help decrease mass transit agglomeration in Metrobus lines. Its permanence will be evaluated according to use. Other Mexican cities that have created pop up bike lanes are Zapopan in Jalisco, San Pedro Garza García in Nuevo León and Puebla, Puebla.
Concept
The Berlin Friedrichshain-Kreuzberg district authority recommends a concept, after which pop-up cycle paths can be set up within ten days and in eleven steps, from the identification of the areas affected, consultation with the authorities to be involved, the ordering of measures and temporary signage, to completion. The following four basic principles are applied in the design process:
traffic separation: physical separation of foot, bicycle and car traffic beyond markings, which, if possible, does not need to be crossed by car traffic for parking
Forgiving infrastructure: infrastructure that minimises the risk of injury in the event of misbehaviour, for example by means of safety distances between the carriageway and the kerb or guide beacons or the use of separation elements made of yielding materials
Predictability: easily understandable and less complex traffic routing
Network approach: Establishment of a transport network to relieve individual road sections
On roads with two lanes in both directions, the right-hand lane including a buffer zone for flowing motorised traffic is completely separated as a cycle lane and existing signs for stationary traffic are covered. Parking of motor vehicles is then no longer permitted and driving is possible on one strip for both cyclists and motorised traffic. On roads with two lanes and a parking strip in both directions, the right lane for cycle traffic, including a buffer zone for both stationary and moving motorised traffic, is separated and the parking strip is maintained. Motorised traffic can then drive on one lane in each direction. In this case, however, vehicles must cross the cycle lane when parking or entering, which is contrary to the above-mentioned principle of traffic separation. On roads with three lanes in each direction, where parking is allowed on the right lane, the right lane is separated as a cycle lane with a buffer zone to the middle lane and the middle lane is designated as a parking lane, so that flowing motorised traffic, stationary motorised traffic and cycle traffic are each provided with one lane. At crossroads with traffic lights, there are possible measures to protect straight ahead or right-hand cycle traffic from motorised traffic turning right. This includes the creation of a temporary protected intersection with temporary kerbstone extensions or alternatively traffic light phases with separate, exclusive green phases for pedestrian and bicycle traffic. If this is not possible, it is recommended to switch the green phase for cyclists before the green phase for motorised traffic.
Political debate in Germany
The establishment of pop-up cycle paths in Berlin by the red-red-green senate was praised by cycling associations such as the ADFC and the associations around the Initiative Volksentscheid Fahrrad and was mostly positively received in the social media. The Berlin-Brandenburg regional association of the ADAC criticised the measure and said that the senate would exploit an emergency situation to pursue particular interests. The CDU and FDP also accused the senate of instrumentalizing the pandemic to turn traffic around. The AfD spoke of "left-wing car-hating policies" and pointed to a decrease in the number of cyclists compared to last year. The ADFC, on the other hand, stated that the total number of distances travelled in the Corona crisis had decreased overall and evaluations by the traffic information centre and public transport showed "that this was far more drastically the case with car traffic, buses and trains than with cycling".
As other cities in Germany initially did not want to set up temporary cycle paths, Deutsche Umwelthilfe sent applications to 204 city administrations, whereupon the cities of Cologne, Frankfurt am Main and Dresden, among others, wanted to consider the option. In several cities, including Stuttgart, cycling associations organised campaigns calling for the creation of pop-up cycle paths.
After a female cyclist coming from a pop-up cycle path was killed by a truck driver turning right at the intersection of Petersburger Straße and Mühsamstraße in June 2020, Siegfried Brockmann, head of the accident research department of the insurers, criticised that pop-up cycle paths alone would not provide a safe solution for the intersection areas as the main danger spots and would thus make people think they were safe. To achieve sufficient safety, the intersections would have to be rebuilt and the traffic lights changed. Brockmann also criticised the short-term installation of the cycle paths without prior measurement of the respective traffic flows. The senate administration replied that the police were involved in every installation of a pop-up cycle path, "in order to consider safety aspects of the respective location together with the road traffic authority". The situation at crossroads and junctions would only change as a result of the provisional cycle lanes to the extent that the visibility conditions would improve significantly in each case.
Scientific impact analysis
A 2021 case-control study of cities found that redistributing street space for "pop-up bike lanes" during the COVID-19 pandemic leads to large additional increases in cycling. These may have substantial environmental and health benefits.
See also
Impact of the COVID-19 pandemic on the environment#Cycling
Tactical urbanism
References
Cycleways
Transport infrastructure | Pop-up bicycle lane | [
"Physics"
] | 1,502 | [
"Physical systems",
"Transport",
"Transport infrastructure"
] |
64,212,921 | https://en.wikipedia.org/wiki/Hyper-IL-6 | Hyper-IL-6 is a designer cytokine, which was generated by the German biochemist Stefan Rose-John. Hyper-IL-6 is a fusion protein of the four-helical cytokine Interleukin-6 and the soluble Interleukin-6 receptor, which are covalently linked by a flexible peptide linker. Interleukin-6 on target cells binds to a membrane-bound Interleukin-6 receptor. The complex of Interleukin-6 and the Interleukin-6 receptor associates with a second receptor protein called gp130, which dimerises and initiates intracellular signal transduction. Gp130 is expressed on all cells of the human body, whereas the Interleukin-6 receptor is found on only a few cell types such as hepatocytes and some leukocytes. Neither Interleukin-6 nor the Interleukin-6 receptor has a measurable affinity for gp130. Therefore, cells that only express gp130 but no Interleukin-6 receptor are not responsive to Interleukin-6. It was found, however, that the membrane-bound Interleukin-6 receptor can be cleaved from the cell membrane, generating a soluble Interleukin-6 receptor. The soluble Interleukin-6 receptor can bind the ligand Interleukin-6 with an affinity similar to that of the membrane-bound Interleukin-6 receptor, and the complex of Interleukin-6 and the soluble Interleukin-6 receptor can bind to gp130 on cells that only express gp130 but no Interleukin-6 receptor. The mode of signaling via the soluble Interleukin-6 receptor has been named Interleukin-6 trans-signaling, whereas Interleukin-6 signaling via the membrane-bound Interleukin-6 receptor is referred to as Interleukin-6 classic signaling. Therefore, the generation of the soluble Interleukin-6 receptor enables cells that would otherwise be completely unresponsive to the cytokine to respond to Interleukin-6.
Molecular construction of Hyper-IL-6
In order to generate a molecular tool to discriminate between Interleukin-6 classic signaling and Interleukin-6 trans-signaling, a cDNA coding for human Interleukin-6 and a cDNA coding for the human soluble Interleukin-6 receptor were connected by a cDNA coding for a 13 amino acids long linker, which was long enough to bridge the 40 Å distance between the COOH terminus of the soluble Interleukin-6 receptor and the NH2 terminus of human Interleukin-6. The generated cDNA was expressed in yeast cells and in mammalian cells and it was shown that.
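As a rough plausibility check of the linker length (using the textbook figure of about 3.5 Å of contour length per residue for an extended polypeptide, a value not stated in the source): 13 residues × ~3.5 Å ≈ 45 Å, comfortably more than the 40 Å gap quoted above.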
Use of Hyper-IL-6 to analyse IL-6 signaling
Hyper-IL-6 has been used to test which cells depend on Interleukin-6 trans-signaling in their response to the cytokine Interleukin-6. To this end, cells were treated with Interleukin-6 and alternatively with Hyper-IL-6. Cells that respond to Interleukin-6 alone express an Interleukin-6 receptor, whereas cells that respond only to Hyper-IL-6 but not to Interleukin-6 alone depend in their response to the cytokine on Interleukin-6 trans-signaling. It turned out that hematopoietic stem cells, neural cells, smooth muscle cells and endothelial cells are typical target cells of Interleukin-6 trans-signaling.
The concept of Interleukin-6 trans-signaling
The Hyper-IL-6 protein has also been used to explore the physiologic role of Interleukin-6 trans-signaling in vivo. It turned out that this signaling mode was involved in many types of inflammation and cancer.
Hyper-IL-6 has helped to establish the concept of Interleukin-6 trans-signaling. Interleukin-6 trans-signaling mediates the pro-inflammatory activities of Interleukin-6, whereas Interleukin-6 classic signaling governs the protective and regenerative Interleukin-6 activities. Recently, in breast cancer patients, it was shown with the help of Hyper-IL-6 that IL-6 trans-signaling via phosphoinositide 3-kinase signaling activates disseminated cancer cells long before metastases are formed. In addition, it was demonstrated in mice that Hyper-IL-6 transneuronal delivery enabled functional recovery after severe spinal cord injury.
References
Cytokines
Biochemistry
Interleukins | Hyper-IL-6 | [
"Chemistry",
"Biology"
] | 1,003 | [
"Biochemistry",
"Cytokines",
"nan",
"Signal transduction"
] |
64,215,003 | https://en.wikipedia.org/wiki/Di%C3%B3si%E2%80%93Penrose%20model | The Diósi–Penrose model was introduced as a possible solution to the measurement problem, where the wave function collapse is related to gravity. The model was first suggested by Lajos Diósi when studying how possible gravitational fluctuations may affect the dynamics of quantum systems. Later, following a different line of reasoning, Roger Penrose arrived at an estimation for the collapse time of a superposition due to gravitational effects, which is the same (within an unimportant numerical factor) as that found by Diósi, hence the name Diósi–Penrose model. However, it should be pointed out that while Diósi gave a precise dynamical equation for the collapse, Penrose took a more conservative approach, estimating only the collapse time of a superposition.
The Diósi model
In the Diósi model, the wave-function collapse is induced by the interaction of the system with a classical noise field, where the spatial correlation function of this noise is related to the Newtonian potential. The evolution of the state vector deviates from the Schrödinger equation and has the typical structure of the collapse models equations:
where
is the mass density function, with , and respectively the mass, the position operator and the mass density function of the -th particle of the system. is a parameter introduced to smear the mass density function, required since taking a point-like mass distribution
would lead to divergences in the predictions of the model, e.g. an infinite collapse rate or increase of energy. Typically, two different distributions for the mass density have been considered in the literature: a spherical or a Gaussian mass density profile, given respectively by
and
Choosing one or another distribution does not affect significantly the predictions of the model, as long as the same value for is considered. The noise field in Eq. () has zero average and correlation given by
where “” denotes the average over the noise. One can then understand from Eq. () and () in which sense the model is gravity-related: the coupling constant between the system and the noise is proportional to the gravitational constant , and the spatial correlation of the noise field has the typical form of a Newtonian potential. Similarly to other collapse models, the Diósi–Penrose model shares the following two features:
The model describes a collapse in position.
There is an amplification mechanism, which guarantees that more massive objects localize more effectively.
In order to show these features, it is convenient to write the master equation for the statistical operator corresponding to Eq. ():
It is interesting to point out that this master equation has more recently been re-derived by L. Diósi using a hybrid approach where quantized massive particles interact with classical gravitational fields.
If one considers the master equation in the position basis, introducing with , where is a position eigenstate of the -th particle, neglecting the free evolution, one finds
with
where
is the mass density when the particles of the system are centered at the points , ..., . Eq. () can be solved exactly, and one gets
where
As expected, for the diagonal terms of the density matrix, when , one has , i.e. the time of decay goes to infinity, implying that states with well-localized position are not affected by the collapse. On the contrary, the off-diagonal terms , which are different from zero when a spatial superposition is involved, will decay with a time of decay given by Eq. ().
To get an idea of the scale at which the gravitationally induced collapse becomes relevant, one can compute the time of decay in Eq. () for the case of a sphere with radius and mass in a spatial superposition at a distance . Then the time of decay can be computed using Eq. () with
where . To give some examples, if one considers a proton, for which kg and m, in a superposition with , one gets years. On the contrary, for a dust grain with kg and m, one gets s. Therefore, contrary to what might be expected considering the weakness of the gravitational force, the effects of the gravity-related collapse become relevant already at the mesoscopic scale.
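The orders of magnitude quoted above can be reproduced, within numerical factors, by the back-of-the-envelope estimate τ ≈ ħ/E_G, with E_G of order Gm²/R for a sphere of mass m and size R displaced by more than its own size. A minimal sketch in Python; the proton and dust-grain inputs are illustrative values of the kind used in the collapse-model literature, not the exact figures elided from the text above.

    # Back-of-the-envelope Diosi-Penrose collapse time: tau ~ hbar / (G m^2 / R).
    # Input values are illustrative, not those removed from the text.
    G = 6.674e-11     # m^3 kg^-1 s^-2
    HBAR = 1.055e-34  # J s

    def dp_collapse_time(mass_kg, radius_m):
        e_gravity = G * mass_kg**2 / radius_m   # gravitational self-energy scale
        return HBAR / e_gravity                 # seconds

    print(dp_collapse_time(1.7e-27, 1.0e-15))   # proton: ~5e14 s, i.e. millions of years
    print(dp_collapse_time(1.0e-12, 1.0e-6))    # micron-scale dust grain: ~1.6e-6 s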
Recently, the model has been generalized by including dissipative and non-Markovian effects.
Penrose's proposal
It is well known that general relativity and quantum mechanics, our most fundamental theories for describing the universe, are not compatible, and the unification of the two is still missing. The standard approach to overcome this situation is to try to modify general relativity by quantizing gravity. Penrose suggests an opposite approach, what he calls “gravitization of quantum mechanics”, where quantum mechanics gets modified when gravitational effects become relevant. The reasoning underlying this approach is the following one: take a massive system of well-localized states in space. In this case, the state being well-localized, the induced space–time curvature is well defined. According to quantum mechanics, because of the superposition principle, the system can be placed (at least in principle) in a superposition of two well-localized states, which would lead to a superposition of two different space–times. The key idea is that since space–time metric should be well defined, nature “dislikes” these space–time superpositions and suppresses them by collapsing the wave function to one of the two localized states.
To set these ideas on a more quantitative ground, Penrose suggested that a way for measuring the difference between two space–times, in the Newtonian limit, is
where is the Newtonian gravitational acceleration at the point where the system is localized around . The acceleration can be written in terms of the corresponding gravitational potentials , i.e. . Using this relation in Eq. (), together with the Poisson equation , with giving the mass density when the state is localized around , and its solution, one arrives at
The corresponding decay time can be obtained by the Heisenberg time–energy uncertainty:
which, apart from a factor simply due to the use of different conventions, is exactly the same as the decay time derived from Diósi's model. This is the reason why the two proposals are named together as the Diósi–Penrose model.
More recently, Penrose suggested a new and quite elegant way to justify the need for a gravity-induced collapse, based on avoiding tensions between the superposition principle and the equivalence principle, the cornerstones of quantum mechanics and general relativity. In order to explain it, let us start by comparing the evolution of a generic state in the presence of uniform gravitational acceleration . One way to perform the calculation, what Penrose calls “Newtonian perspective”, consists in working in an inertial frame, with space–time coordinates and solve the Schrödinger equation in presence of the potential (typically, one chooses the coordinates in such a way that the acceleration is directed along the axis, in which case ). Alternatively, because of the equivalence principle, one can choose to go in the free-fall reference frame, with coordinates related to by and , solve the free Schrödinger equation in that reference frame, and then write the results in terms of the inertial coordinates . This is what Penrose calls “Einsteinian perspective”. The solution obtained in the Einsteinian perspective and the one obtained in the Newtonian perspective are related to each other by
Since the two wave functions are equivalent apart from an overall phase, they lead to the same physical predictions, which implies that there are no problems in this situation where the gravitational field always has a well-defined value. However, if the space–time metric is not well defined, then we will be in a situation where there is a superposition of a gravitational field corresponding to the acceleration and one corresponding to the acceleration . This does not create problems as long as one sticks to the Newtonian perspective. However, when using the Einsteinian perspective, it will imply a phase difference between the two branches of the superposition given by . While the term in the exponent linear in the time does not lead to any conceptual difficulty, the first term, proportional to , is problematic, since it is a non-relativistic residue of the so-called Unruh effect: in other words, the two terms in the superposition belong to different Hilbert spaces and, strictly speaking, cannot be superposed. Here is where the gravity-induced collapse plays a role, collapsing the superposition when the first term of the phase becomes too large.
Further information on Penrose's idea for the gravity-induced collapse can be also found in the Penrose interpretation.
Experimental tests and theoretical bounds
Since the Diósi–Penrose model predicts deviations from standard quantum mechanics, the model can be tested. The only free parameter of the model is the size of the mass density distribution, given by . All bounds present in the literature are based on an indirect effect of the gravity-related collapse: a Brownian-like diffusion induced by the collapse on the motion of the particles. This Brownian-like diffusion is a common feature of all objective-collapse theories and typically allows the strongest bounds on the parameters of these models to be set. The first bound on was set by Ghirardi et al., where it was shown that m to avoid unrealistic heating due to this Brownian-like induced diffusion. The bound has since been further restricted to m by the analysis of data from gravitational wave detectors, and later to m by studying the heating of neutron stars.
Regarding direct interferometric tests of the model, where a system is prepared in a spatial superposition, there are two proposals currently considered: an optomechanical setup with a mesoscopic mirror to be placed in a superposition by a laser, and experiments involving superpositions of Bose–Einstein condensates.
See also
Measurement problem
Interpretation of quantum mechanics
Penrose interpretation
Gravitational decoherence
Wave function collapse
Objective-collapse theory
Ghirardi–Rimini–Weber theory
References
Quantum measurement
Interpretations of quantum mechanics | Diósi–Penrose model | [
"Physics"
] | 2,046 | [
"Interpretations of quantum mechanics",
"Quantum measurement",
"Quantum mechanics"
] |
64,216,555 | https://en.wikipedia.org/wiki/OpenVSP | OpenVSP (also Open Vehicle Sketch Pad) is an open-source parametric aircraft geometry tool originally developed by NASA. It can be used to create 3D models of aircraft and to support engineering analysis of those models.
History
Predecessors to OpenVSP including VSP and Rapid Aircraft Modeler (RAM) were developed by J.R. Gloudemans and others for NASA beginning in the early 1990s. OpenVSP v2.0 was released as open source under the NOSA license in January 2012. Development has been led by Rob McDonald since around 2012 and has been supported by NASA and AFRL, among other contributors.
OpenVSP allows the user to quickly generate computer models from ideas, which can then be analyzed. As such, it is especially powerful in generating and evaluating unconventional design concepts.
Features
User interface
OpenVSP displays a graphical user interface upon launch, built with FLTK. A workspace window and a "Geometry Browser" window open. The workspace is where the model is displayed while the Geometry Browser lists individual components in the workspace, such as fuselage and wings. These components can be selected, added or deleted, somewhat like a feature tree in CAD software such as Solidworks. When a component is selected in the Geometry Browser window, a component geometry window opens. This window is used to modify the component.
OpenVSP also provides API capabilities which may be accessed using Matlab, Python or AngelScript.
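A minimal sketch of scripted use through the Python bindings (assuming the openvsp package is installed; the parameter and group names passed to SetParmVal are illustrative and should be checked against the API documentation for the installed version):

    import openvsp as vsp   # assumes the OpenVSP Python bindings are available

    vsp.ClearVSPModel()                  # start from an empty model
    wing_id = vsp.AddGeom("WING")        # add a wing component
    pod_id = vsp.AddGeom("POD")          # add a pod component

    # Parameter and group names below are illustrative.
    vsp.SetParmVal(wing_id, "TotalSpan", "WingGeom", 8.0)
    vsp.Update()

    vsp.WriteVSPFile("example_model.vsp3")   # save the parametric model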
Geometry modelling
OpenVSP offers a multitude of basic geometries, common to aircraft modelling, which users modify and assemble to create models. Wing, pod, fuselage, and propeller are a few available geometries. Advanced components such as bodies of revolution, ducts and conformal geometries are also available.
Analysis tools
Besides the geometry modeler, OpenVSP contains multiple tools that help with aerodynamic or structural analysis of models. The tools available are:
CompGeom - mesh generation tool that can handle model intersection and trimming
Mass Properties Analysis - to compute properties like centre of gravity and moment of inertia
Projected Area Analysis - to compute project area
CFD Mesh - to generate meshes that may be used in Computational fluid dynamics analysis software
FEA Mesh - to generate meshes that may be used in FEA analysis software
DegenGeom - to generate various simplified representations of geometry models like point, beam and camber surface models
VSPAERO - for vortex lattice or panel method based aerodynamic and flight dynamic analysis
Wave Drag Analysis - for estimating wave drag of geometries
Parasite Drag Analysis - for estimating parasite drag of geometries based on parameters like wetted area and skin friction coefficient
Surface fitting - for fitting a parametric surface to a point cloud
Texture Manager - for applying image textures to geometry for aiding visualization
FEA Structure - for creating internal structures such as ribs and spars
Compatibility with other software
OpenVSP permits import of multiple geometry formats like STL, CART3D (.tri) and PLOT3D.
Point clouds may also be imported and used to fit a parametric surface.
Geometry created in OpenVSP may be exported as STL, CART3D (.tri), PLOT3D, STEP and IGES, OBJ, SVG, DXF and X3D file formats. These file formats allow geometries to be used for mesh generation and in CFD or FEA software.
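Export can likewise be scripted (same assumptions as the sketch above; the constant names are given as commonly documented for the API and should be verified against the installed version):

    import openvsp as vsp

    vsp.ReadVSPFile("example_model.vsp3")                               # load a saved model
    vsp.ExportFile("example_model.stl", vsp.SET_ALL, vsp.EXPORT_STL)    # mesh export
    vsp.ExportFile("example_model.step", vsp.SET_ALL, vsp.EXPORT_STEP)  # CAD exchange export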
Community repository
OpenVSP Hangar
OpenVSP Hangar (also VSP Hangar) provides users a place to upload models and promotes sharing of geometry created in OpenVSP. Each model is allowed revisions with accompanying details on source quality.
Since the end of 2023, OpenVSP Hangar has been closed and no backup downloads have been provided.
OpenVSP Airshow
On 22 August 2024, OpenVSP Airshow (also VSP Airshow), a successor to OpenVSP Hangar, was launched.
OpenVSP Workshop
OpenVSP Workshop is an in-person event where developers and users meet to discuss progress and use of OpenVSP. The Workshop has been held annually since 2012 (except 2018). The 2020 and 2021 Workshops were held online due to the COVID-19 pandemic. The 2024 Workshop was held at the Museum of Flight in Seattle.
Papers, slides and other workshop materials are published on the OpenVSP wiki site within a few days after each workshop ends.
OpenVSP Ground School
OpenVSP Ground School is a set of comprehensive tutorials under development by Brandon Litherland at NASA. The Ground School covers OpenVSP features and techniques, with tutorials for beginner and advanced users, and is hosted on the Langley Research Center website.
See also
Comparison of computer-aided design software
XFOIL
References
External links
VSP Airshow
Free and open-source software
Computer-aided design software
Aerospace companies
Langley Research Center
NASA
Computational fluid dynamics
Finite element software | OpenVSP | [
"Physics",
"Chemistry"
] | 987 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
64,224,034 | https://en.wikipedia.org/wiki/Togni%20reagent%20II | Togni reagent II (1-trifluoromethyl-1,2-benziodoxol-3(1H)-one) is a chemical compound used in organic synthesis for direct electrophilic trifluoromethylation.
History
Synthesis, properties, and reactivity of the compound were first described in 2006 by Antonio Togni and his coworkers at ETH Zurich. The article also contains information on Togni reagent I (1,3-dihydro-3,3-dimethyl-1-(trifluoromethyl)-1,2-benziodoxole).
Preparation
The synthesis consists of three steps. In the first step, 2-iodobenzoic acid is oxidized by sodium periodate and cyclized to 1-hydroxy-1,2-benziodoxol-3(1H)-one. The target compound can then be obtained by acylation with acetic anhydride and subsequent substitution reaction with trifluoromethyltrimethylsilane.
Alternatively, trichloroisocyanuric acid can be used as oxidant in the place of sodium periodate for a newer one-pot synthesis method.
Properties
Physical properties
The compound crystallized in a monoclinic crystal structure. The space group is P21/n with four molecules in the unit cell. From the crystallographic data, a density of 2.365 g·cm−3 was deduced.
Chemical properties
Pure Togni reagent II is metastable at room temperature. Heating it above the melting point will lead to strong exothermic decomposition, in which trifluoroiodomethane (CF3I) is released. The heat of decomposition at a temperature of 149 °C and higher has been determined to be 502 J·g−1. From recrystallization in acetonitrile, small amounts of trifluoromethyl-2-iodobenzoate and 2-iodobenzyl fluoride were observed as decomposition products. Togni reagent II reacts violently with strong bases and acids, as well as reductants. In tetrahydrofuran, the compound polymerizes.
Uses
Togni reagent II is used for trifluoromethylation of organic compounds. For phenolates, the substitution takes place preferably in the ortho position. It is possible to obtain a second substitution by using an excess of Togni reagent II.
Reactions with alcohols yield the corresponding trifluoromethyl ethers.
Trifluoromethylation of alkenes is possible under copper catalysis.
References
Trifluoromethyl compounds
Iodanes
Lactones
Benzene derivatives
Reagents for organic chemistry | Togni reagent II | [
"Chemistry"
] | 590 | [
"Iodanes",
"Oxidizing agents",
"Reagents for organic chemistry"
] |
68,551,030 | https://en.wikipedia.org/wiki/GRL-0617 | GRL-0617 is a drug which is one of the first compounds discovered that acts as a selective small-molecule inhibitor of the protease enzyme papain-like protease (PLpro) found in some human pathogenic viruses, including the coronavirus SARS-CoV-2. It has been shown to inhibit viral replication in silico and in vitro.
See also
3CLpro-1
Ebselen
GC376
References
Antiviral drugs
1-Naphthyl compounds
Anilines
Benzamides | GRL-0617 | [
"Biology"
] | 112 | [
"Antiviral drugs",
"Biocides"
] |
68,553,796 | https://en.wikipedia.org/wiki/Genetic%20vaccine | A genetic vaccine (also gene-based vaccine) is a vaccine that contains nucleic acids such as DNA or RNA that lead to protein biosynthesis of antigens within a cell. Genetic vaccines thus include DNA vaccines, RNA vaccines and viral vector vaccines.
Properties
Most vaccines other than live attenuated vaccines and genetic vaccines are not taken up by MHC-I-presenting cells, but act outside of these cells, producing only a strong humoral immune response via antibodies. In the case of intracellular pathogens, an exclusive humoral immune response is ineffective. Genetic vaccines are based on the principle of uptake of a nucleic acid into cells, whereupon a protein is produced according to the nucleic acid template. This protein is usually the immunodominant antigen of the pathogen or a surface protein that enables the formation of neutralizing antibodies that inhibit the infection of cells. Subsequently, the protein is broken down at the proteasome into short fragments (peptides) that are imported into the endoplasmic reticulum via the transporter associated with antigen processing, allowing them to bind to MHCI-molecules that are subsequently secreted to the cell surface. The presentation of the peptides on MHC-I complexes on the cell surface is necessary for a cellular immune response. As a result, genetic vaccines and live vaccines generate cytotoxic T-cells in addition to antibodies in the vaccinated individual. In contrast to live vaccines, only parts of the pathogen are used, which means that a reversion to an infectious pathogen cannot occur as it happened during the polio vaccinations with the Sabin vaccine.
Administration
Genetic vaccines are most commonly administered by injection (intramuscular or subcutaneous) or infusion, and less commonly and for DNA, by gene gun or electroporation. While viral vectors have their own mechanisms to be taken up into cells, DNA and RNA must be introduced into cells via a method of transfection. In humans, the cationic lipids SM-102, ALC-0159 and ALC-0315 are used in conjunction with electrically neutral helper lipids. This allows the nucleic acid to be taken up by endocytosis and then released into the cytosol.
Applications
Examples of genetic vaccines approved for use in humans include the RNA vaccines tozinameran and mRNA-1273, the DNA vaccine ZyCoV-D as well as the viral vectors AZD1222, Ad26.COV2.S, Ad5-nCoV, and Sputnik V. In addition, genetic vaccines are being investigated against proteins of various infectious agents, protein-based toxins, as cancer vaccines, and as tolerogenic vaccines for hyposensitization of type I allergies.
History
The first use of a viral vector for vaccination – a Modified Vaccinia Ankara Virus expressing HBsAg – was published by Bernard Moss and colleagues. DNA was used as a vaccine by Jeffrey Ulmer and colleagues in 1993. The first use of RNA for vaccination purposes was described in 1993 by Frédéric Martinon, Pierre Meulien and colleagues and in 1994 by X. Zhou, Peter Liljeström, and colleagues in mice. Martinon demonstrated that a cellular immune response was induced by vaccination with an RNA vaccine. In 1995, Robert Conry and colleagues described that a humoral immune response was also elicited after vaccination with an RNA vaccine. DNA vaccines were more frequently researched in the early years due to their ease of production, low cost, and high stability to degrading enzymes, but sometimes produced low vaccine responses despite containing immunostimulatory CpG sites; more research was later conducted on RNA vaccines, whose immunogenicity was often better due to inherent adjuvants and which, unlike DNA vaccines, cannot insert into the genome of the vaccinated individual. Accordingly, the first RNA- and DNA-based vaccines approved for humans were the RNA and DNA vaccines used as COVID vaccines. Viral vectors had previously been approved as Ebola vaccines.
References
Vaccines
Nucleic acid vaccines
Gene delivery | Genetic vaccine | [
"Chemistry",
"Biology"
] | 847 | [
"Genetics techniques",
"Molecular biology techniques",
"Vaccination",
"Vaccines",
"Gene delivery"
] |
52,973,193 | https://en.wikipedia.org/wiki/Nano-FTIR | Nano-FTIR (nanoscale Fourier transform infrared spectroscopy) is a scanning probe technique that utilizes a combination of two techniques: Fourier transform infrared spectroscopy (FTIR) and scattering-type scanning near-field optical microscopy (s-SNOM). Like s-SNOM, nano-FTIR is based on atomic-force microscopy (AFM), where a sharp tip is illuminated by an external light source and the tip-scattered light (typically back-scattered) is detected as a function of tip position. A typical nano-FTIR setup thus consists of an atomic force microscope, a broadband infrared light source used for tip illumination, and a Michelson interferometer acting as a Fourier-transform spectrometer. In nano-FTIR, the sample stage is placed in one of the interferometer arms, which allows for recording both amplitude and phase of the detected light (unlike conventional FTIR that normally does not yield phase information). Scanning the tip allows for performing hyperspectral imaging (i.e. complete spectrum at every pixel of the scanned area) with nanoscale spatial resolution determined by the tip apex size. The use of broadband infrared sources enables the acquisition of continuous spectra, which is a distinctive feature of nano-FTIR compared to s-SNOM.
Nano-FTIR is capable of performing infrared (IR) spectroscopy of materials in ultrasmall quantities and with nanoscale spatial resolution. The detection of a single molecular complex and sensitivity to a single monolayer have been demonstrated. Recording infrared spectra as a function of position can be used for nanoscale mapping of the sample chemical composition, performing local ultrafast IR spectroscopy and analyzing nanoscale intermolecular coupling, among other applications. A spatial resolution of 10 nm to 20 nm is routinely achieved.
For organic compounds, polymers, biological and other soft matter, nano-FTIR spectra can be directly compared to the standard FTIR databases, which allows for a straightforward chemical identification and characterization.
Nano-FTIR does not require special sample preparation and is typically performed under ambient conditions. It uses an AFM operated in noncontact mode that is intrinsically nondestructive and sufficiently gentle to be suitable for soft-matter and biological sample investigations. Nano-FTIR can be utilized from THz to visible spectral range (and not only in infrared as its name suggests) depending on the application requirements and availability of broadband sources. Nano-FTIR is complementary to tip-enhanced Raman spectroscopy (TERS), SNOM, AFM-IR and other scanning probe methods that are capable of performing vibrational analysis.
Basic principles
Nano-FTIR is based on s-SNOM, where the infrared beam from a light source is focused onto a sharp, typically metalized AFM tip and the backscattering is detected. The tip greatly enhances the illuminating IR light in the nanoscopic volume around its apex, creating a strong near field. A sample, brought into this near field, interacts with the tip electromagnetically and modifies the tip (back)scattering in the process. Thus by detecting tip scattering, one can obtain information about the sample.
Nano-FTIR detects the tip-scattered light interferometrically. The sample stage is placed into one arm of a conventional Michelson interferometer, while a mirror on a piezo stage is placed into another, reference arm. Recording the backscattered signal while translating the reference mirror yields an interferogram. The subsequent Fourier transform of this interferogram returns the near-field spectra of the sample.
Placement of the sample stage into one of the interferometer's arms (instead of outside of the interferometer as typically implemented in conventional FTIR) is a key element of nano-FTIR. It boosts the weak near-field signal due to interference with the strong reference field, helps to eliminate the background caused by parasitic scattering off everything that falls into large diffraction-limited beam focus, and most importantly, allows for recording of both amplitude s and phase φ spectra of the tip-scattered radiation. With the detection of phase, nano-FTIR provides complete information about near fields, which is essential for quantitative studies and many other applications. For example, for soft matter samples (organics, polymers, biomaterials, etc.), φ directly relates to the absorption in the sample material. This permits a direct comparison of nano-FTIR spectra with conventional absorption spectra of the sample material, thus allowing for simple spectroscopic identification according to standard FTIR databases.
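On the data-processing side, the asymmetric detection described above amounts to Fourier-transforming the recorded interferogram into a complex-valued spectrum whose modulus and argument give the near-field amplitude s and phase φ. A minimal numerical sketch (Python/NumPy) with synthetic data and an illustrative mirror step:

    # Interferogram -> complex near-field spectrum (synthetic example).
    import numpy as np

    n_points = 2048
    dx = 50e-9                        # reference-mirror step in metres (illustrative)
    x = np.arange(n_points) * dx      # mirror positions

    # Synthetic interferogram: a single line at 1600 cm^-1 with a phase offset,
    # standing in for the detector signal recorded versus mirror position.
    k0 = 1600 * 100                   # 1600 cm^-1 expressed in m^-1
    interferogram = np.cos(4 * np.pi * k0 * x - 0.7) * np.exp(-(x / 40e-6) ** 2)

    spectrum = np.fft.rfft(interferogram)                        # complex spectrum
    wavenumber_cm = np.fft.rfftfreq(n_points, d=2 * dx) / 100    # optical path = 2 * dx

    amplitude = np.abs(spectrum)      # near-field amplitude s
    phase = np.angle(spectrum)        # near-field phase (related to absorption)
    print(wavenumber_cm[np.argmax(amplitude)])   # peak near 1600 cm^-1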
History
Nano-FTIR was first described in 2005 in a patent by Ocelic and Hillenbrand as Fourier-transform spectroscopy of tip-scattered light with an asymmetric spectrometer (i.e. the tip/sample placed inside one of the interferometer arms). The first realization of s-SNOM with FTIR was demonstrated in 2006 in the laboratory of F. Keilmann using a mid-infrared source based on a simple version of nonlinear difference-frequency generation (DFG). However, the mid-IR spectra in this realization were recorded using dual comb spectroscopy principles, yielding a discrete set of frequencies and thus demonstrating a multiheterodyne imaging technique rather than nano-FTIR. The first continuous spectra were recorded only in 2009 in the same laboratory using a supercontinuum IR beam also obtained by DFG in GaSe upon superimposing two pulse trains emitted from an Er-doped fiber laser. This source further allowed in 2011 for the first assessment of nanoscale-resolved spectra of SiC with excellent quality and spectral resolution. At the same time, Huth et al. in the laboratory of R. Hillenbrand used IR radiation from a simple glowbar source in combination with the principles of Fourier-transform spectroscopy to record IR spectra of p-doped Si and its oxides in a semiconductor device. In the same work the term nano-FTIR was first introduced. However, the insufficient spectral irradiance of glowbar sources limited the applicability of the technique to the detection of strongly resonant excitations such as phonons, and the early supercontinuum IR laser sources, while providing more power, had very narrow bandwidth (<300 cm−1). A further attempt to improve the spectral power while retaining the large bandwidth of a glowbar source was made by utilizing the IR radiation from a high-temperature argon arc source (also known as a plasma source). However, due to lack of commercial availability and the rapid development of IR supercontinuum laser sources, plasma sources are not widely utilized in nano-FTIR.
The breakthrough in nano-FTIR came with the development of high-power broadband mid-IR laser sources, which provided large spectral irradiance over a sufficiently large bandwidth (mW-level power in ~1000 cm−1 bandwidth) and enabled truly broadband nanoscale-resolved material spectroscopy capable of detecting even the weakest vibrational resonances. In particular, it has been shown that nano-FTIR is capable of measuring molecular fingerprints which match well with far-field FTIR spectra, owing to the asymmetry of the nano-FTIR spectrometer that provides phase and thus gives access to the molecular absorption. Recently, the first nanoscale-resolved infrared hyperspectral imaging of a co-polymer blend was demonstrated, which allowed for the application of statistical techniques such as multivariate analysis – a widely used tool for heterogeneous sample analysis.
An additional boost to the development of nano-FTIR came from the utilization of synchrotron radiation, which provides extreme bandwidth, yet at the expense of weaker IR spectral irradiance compared to broadband laser sources.
Commercialization
The nano-FTIR technology has been commercialized by neaspec – a Germany-based spin-off company of the Max Planck Institute of Biochemistry founded by Ocelic, Hillenbrand and Keilmann in 2007 and based on the original patent by Ocelic and Hillenbrand. The detection module optimized for broadband illumination sources was first made available in 2010 as a part of the standard neaSNOM microscope system. At that time, broadband IR lasers were not yet commercially available; however, experimental broadband IR lasers demonstrated that the technology worked well and had large application potential across many disciplines. The first nano-FTIR system became commercially available in 2012 (supplied with still-experimental broadband IR laser sources), becoming the first commercial system for broadband infrared nano-spectroscopy. In 2015, neaspec developed and introduced ultrafast nano-FTIR, the commercial version of ultrafast nano-spectroscopy. Ultrafast nano-FTIR is a ready-to-use upgrade for nano-FTIR that enables pump-probe nano-spectroscopy at best-in-class spatial resolution. The same year, the development of a cryo-neaSNOM – the first system of its kind to enable nanoscale near-field imaging and spectroscopy at cryogenic temperatures – was announced.
Advanced capabilities
Synchrotron beamlines integration
Nano-FTIR systems can be easily integrated into synchrotron radiation beamlines. The use of synchrotron radiation allows for acquisition of an entire mid-infrared spectrum at once. Synchrotron radiation has already been utilized in synchrotron infrared microspectroscopy, a technique widely used in biosciences, providing information on chemistry at the microscale for virtually all biological specimens, such as bone, plants, and other biological tissues. Nano-FTIR brings the spatial resolution to the 10-20 nm scale (vs. ~2-5 μm in microspectroscopy), which has been utilized for broadband spatially-resolved spectroscopy of crystalline and phase-change materials, semiconductors, minerals, biominerals and proteins.
Ultrafast spectroscopy
Nano-FTIR is highly suitable for performing local ultrafast pump-probe spectroscopy due to its interferometric detection and an intrinsic ability to vary the probe delay time. It has been applied to studies of ultrafast nanoscale plasmonic phenomena in graphene, to nanospectroscopy of InAs nanowires with subcycle resolution, and to probing the coherent vibrational dynamics of nanoscopic ensembles.
Quantitative studies
The availability of both amplitude and phase of the scattered field and the theoretically well understood signal formation in nano-FTIR allow for the recovery of both real and imaginary parts of the dielectric function, i.e. finding the index of refraction and the extinction coefficient of the sample. While such recovery for arbitrarily-shaped samples or samples exhibiting collective excitations, such as phonons, requires a resource-demanding numerical optimization, for soft matter samples (polymers, biological matter and other organic materials) the recovery of the dielectric function can often be performed in real time using fast semi-analytical approaches. One such approach is based on the Taylor expansion of the scattered field with respect to a small parameter that isolates the dielectric properties of the sample and allows for a polynomial representation of the measured near-field contrast. With an adequate tip-sample interaction model and with known measurement parameters (e.g. tapping amplitude, demodulation order, reference material, etc.), the sample permittivity can be determined as a solution of a simple polynomial equation.
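As an illustration of the idea only (not the specific model used in the literature), the minimal Python sketch below inverts an assumed polynomial relation between the measured near-field contrast and the quasi-electrostatic surface response β = (ε − 1)/(ε + 1); the coefficient values, function name and root-selection rule are placeholders.

    import numpy as np

    # Hypothetical illustration only: the polynomial coefficients stand in for a real,
    # calibrated tip-sample interaction model.
    def permittivity_from_contrast(eta_measured, model_coeffs):
        """Invert an assumed polynomial relation eta = c0 + c1*beta + c2*beta**2 + ...
        for the quasi-electrostatic surface response beta = (eps - 1)/(eps + 1),
        then map the roots back to candidate permittivities eps."""
        coeffs = np.asarray(model_coeffs[::-1], dtype=complex)   # highest order first for np.roots
        coeffs[-1] -= eta_measured                               # move the measured contrast to one side
        betas = np.roots(coeffs)                                 # all roots of the polynomial
        eps_candidates = (1 + betas) / (1 - betas)               # invert beta = (eps - 1)/(eps + 1)
        # keep solutions with non-negative Im(eps), as required for a passive sample
        return [eps for eps in eps_candidates if eps.imag >= -1e-9]

    # Example with a made-up quadratic model and a made-up measured contrast
    print(permittivity_from_contrast(0.8 + 0.1j, [0.0, 1.0, 0.3]))

In practice the coefficients would follow from a calibrated tip-sample interaction model and the measurement parameters listed above.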
Subsurface analysis
Near-field methods, including nano-FTIR, are typically viewed as techniques for surface studies due to their short probing range of about a couple of tip radii (~20-50 nm). However, it has been demonstrated that within this probing range, s-SNOM is capable of detecting subsurface features to some extent, which could be used for the investigation of samples capped by thin protective layers, or buried polymers, among others.
As a direct consequence of being a quantitative technique (i.e. capable of highly reproducible detection of both near-field amplitude and phase, combined with well-understood near-field interaction models), nano-FTIR also provides means for the quantitative study of the sample interior (within the probing range of the tip near field). This is often achieved by a simple method of utilizing signals recorded at multiple demodulation orders, which nano-FTIR naturally returns in the process of background suppression. It has been shown that higher harmonics probe smaller volumes below the tip, thus encoding the volumetric structure of a sample. This way, nano-FTIR has a demonstrated capability for recovering the thickness and permittivity of layered films and nanostructures, which has been utilized for the nanoscale depth profiling of multiphase materials and high-Tc cuprate nanoconstriction devices patterned by focused ion beams. In other words, nano-FTIR has a unique capability of recovering the same information about thin-film samples that is typically returned by ellipsometry or impedance spectroscopy, yet with nanoscale spatial resolution. This capability proved crucial for disentangling different surface states in topological insulators.
Operation in liquid
Nano-FTIR uses scattered IR light to obtain information about the sample and has the potential to investigate electrochemical interfaces in situ/operando and biological (or other) samples in their natural environment, such as water. The feasibility of such investigations has already been demonstrated by the acquisition of nano-FTIR spectra through a capping graphene layer on top of a supported material or through graphene suspended on a perforated silicon nitride membrane (using the same s-SNOM platform that nano-FTIR utilizes).
Cryogenic environment
Revealing the fundamentals of phase transitions in superconductors, correlated oxides, Bose-Einstein condensates of surface polaritons, etc. requires spectroscopic studies at characteristically nanometer length scales and in a cryogenic environment. Nano-FTIR is compatible with cryogenic s-SNOM, which has already been utilized for revealing a nanotextured coexistence of metal and correlated Mott insulator phases in vanadium oxide near the metal-insulator transition.
Special atmosphere environments
Nano-FTIR can be operated in different atmospheric environments by enclosing the system in an isolated chamber or a glove box. Such operation has already been used for the investigation of highly reactive lithium-ion battery components.
Applications
Nano-FTIR has a multitude of applications, including polymers and polymer composites, organic films, semiconductors, biological research (cell membranes, protein structure, studies of single viruses), chemistry and catalysis, photochemistry, minerals and biominerals, geochemistry, corrosion and materials sciences, low-dimensional materials, photonics, energy storage, cosmetics, pharmacology and environmental sciences.
Materials and chemical sciences
Nano-FTIR has been used for the nanoscale spectroscopic chemical identification of polymers and nanocomposites, for in situ investigation of structure and crystallinity of organic thin films, for strain characterization and relaxation in crystalline materials and for high-resolution spatial mapping of catalytic reactions, among others.
Biological and pharmaceutical sciences
Nano-FTIR has been used for the investigation of protein secondary structure, bacterial membranes, and the detection and study of single viruses and protein complexes. It has been applied to the detection of biominerals in bone tissue. Nano-FTIR, when coupled with THz light, can also be applied to cancer and burn imaging with high optical contrast.
Semiconductor industry and research
Nano-FTIR has been used for nanoscale free carrier profiling and quantification of free carrier concentration in semiconductor devices, for evaluation of ion beam damage in nanoconstriction devices, and general spectroscopic characterization of semiconductor materials.
Theory
High-harmonic demodulation for background suppression
Nano-FTIR interferometrically detects the light scattered from the tip-sample system, E_scat. The power at the detector can be written as
U ∝ |E_scat + E_ref|²,
where E_ref is the reference field. The scattered field can be written as
E_scat = E_bg + E_nf
and is dominated by parasitic background scattering, E_bg, from the tip shaft, cantilever, sample roughness and everything else which falls into the diffraction-limited beam focus. To extract the near-field signal, E_nf, originating from the "hot spot" below the tip apex (which carries the nanoscale-resolved information about the sample properties), a small harmonic modulation of the tip height H (i.e. oscillating the tip) with frequency Ω is provided and the detector signal is demodulated at higher harmonics of this frequency, nΩ, with n = 1, 2, 3, 4, ... The background is nearly insensitive to small variations of the tip height and is nearly eliminated for sufficiently high demodulation orders (typically n ≥ 2). Mathematically this can be shown by expanding E_bg and E_nf into a Fourier series, which yields the following (approximated) expression for the demodulated detector signal:
σ_n ∝ E_ref* E_nf,n + E_bg,0* E_nf,n + C.C.,
where σ_n = s_n e^(iφ_n) is the complex-valued number that is obtained by combining the lock-in amplitude, s_n, and phase, φ_n, signals, E_nf,n is the n-th Fourier coefficient of the near-field contribution and C.C. stands for the complex conjugate terms. E_bg,0 is the zeroth-order Fourier coefficient of the background contribution and is often called the multiplicative background because it enters the detector signal as a product with E_nf,n. It cannot be removed by the high-harmonic demodulation alone. In nano-FTIR the multiplicative background is eliminated as described below.
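A minimal numerical sketch of this demodulation step is given below, using an entirely synthetic detector trace; the oscillation frequency, signal amplitudes and normalization convention are illustrative assumptions, not measured values.

    import numpy as np

    def demod(u, t, omega, n):
        """Complex lock-in output s_n*exp(i*phi_n) of u(t) at frequency n*omega
        (one common normalization convention: factor 2 for n >= 1)."""
        return 2.0 * np.mean(u * np.exp(-1j * n * omega * t))

    omega = 2 * np.pi * 250e3                                    # illustrative 250 kHz tapping frequency
    t = np.linspace(0, 40 * 2 * np.pi / omega, 40000, endpoint=False)

    # Toy scattered field: a large background that barely depends on the tip height plus
    # a small near-field term that is strongly nonlinear in the tip oscillation.
    background = 5.0 + 0.05 * np.cos(omega * t)
    near_field = 0.2 * np.cos(omega * t) ** 4
    u = (background + near_field) ** 2                           # detected intensity

    for n in (1, 2, 3, 4):
        # for n >= 2 the output is dominated by terms that contain the near field
        print(n, abs(demod(u, t, omega, n)))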
Asymmetric FTIR spectrometer
To acquire a spectrum, the reference mirror is continuously translated while recording the demodulated detector signal as a function of the reference mirror position x, yielding an interferogram σ_n(x). This way the phase of the reference field changes according to φ_ref(x) = 4πνx for each spectral component of the reference field with wavenumber ν, and the detector signal for such a component can thus be written as
σ_n(x) ∝ E_ref,0* E_nf,n e^(−i4πνx) + E_bg,0* E_nf,n + C.C.,
where E_ref,0 is the reference field at zero delay (x = 0). To obtain the nano-FTIR spectrum, σ_n(ν) = s_n(ν) e^(iφ_n(ν)), the interferogram is Fourier-transformed with respect to x. The second term in the above equation does not depend on the reference mirror position and after the Fourier transformation contributes only to the DC signal. Thus for ν > 0 only the near-field contribution multiplied by the reference field stays in the acquired spectrum:
σ_n(ν) ∝ E_ref,0*(ν) E_nf,n(ν)
This way, besides providing the interferometric gain, the asymmetric interferometer utilized in nano-FTIR also eliminates the multiplicative background, which otherwise could be a source of various artifacts and is often overlooked in other s-SNOM based spectroscopies.
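The following sketch illustrates the Fourier-transform step on a synthetic, noise-free interferogram sampled on an equidistant mirror grid; the step size, number of points and spectral lines are arbitrary choices for the example.

    import numpy as np

    def nano_ftir_spectrum(interferogram, dx):
        """Complex spectrum vs. wavenumber (cm^-1) from an interferogram sampled at
        mirror steps dx (cm); the optical path difference changes by 2*dx per step."""
        spectrum = np.fft.rfft(interferogram - np.mean(interferogram))
        wavenumber = np.fft.rfftfreq(len(interferogram), d=2 * dx)
        return wavenumber, spectrum

    dx = 50e-7                                   # 50 nm mirror step, expressed in cm
    x = np.arange(2048) * dx                     # mirror positions
    # synthetic interferogram containing two spectral lines at 1000 and 1600 cm^-1
    interferogram = np.cos(2 * np.pi * 1000 * 2 * x) + 0.5 * np.cos(2 * np.pi * 1600 * 2 * x)

    k, spec = nano_ftir_spectrum(interferogram, dx)
    print(k[np.argmax(np.abs(spec))])            # close to 1000 cm^-1 (within the resolution)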
Normalization
Following standard FTIR practice, spectra in nano-FTIR are normalized to those obtained on a known, preferably spectrally-flat reference material. This eliminates the generally unknown reference field and any instrumental functions, yielding spectra of the near-field contrast:
η_n(ν) = σ_n,sample(ν) / σ_n,ref(ν)
Near-field contrast spectra are generally complex-valued, reflecting the possible phase delay of the sample-scattered field with respect to the reference. Near-field contrast spectra depend nearly exclusively on the dielectric properties of the sample material and can be used for its identification and characterization.
Nano-FTIR absorption spectroscopy
For the purpose of describing near-field contrasts for optically thin samples composed of polymers, organics, biological matter and other soft matter (so-called weak oscillators), the near-field signal to a good approximation can be expressed as
E_nf,n ∝ β,
where β = (ε − 1)/(ε + 1) is the surface response function that depends on the complex-valued dielectric function ε of the sample and can also be viewed as the reflection coefficient for the evanescent waves that constitute the near field of the tip. That is, the spectral dependence of the near-field signal is determined exclusively by the sample reflection coefficient. The latter is purely real and acquires an imaginary part only in narrow spectral regions around the sample absorption lines. This means that the spectrum of the imaginary part of the near-field contrast resembles the conventional FTIR absorbance spectrum, A(ν), of the sample material. It is therefore convenient to define the nano-FTIR absorption a_n(ν) ≡ Im[η_n(ν)], which directly relates to the sample absorbance spectrum:
a_n(ν) ∝ A(ν)
It can be used for direct sample identification and characterization according to the standard FTIR databases without the need of modeling the tip-sample interaction.
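A minimal sketch of the normalization and nano-FTIR absorption steps is given below, using made-up complex-valued spectra; the numerical values are illustrative only.

    import numpy as np

    def nano_ftir_absorption(sigma_sample, sigma_reference):
        """Normalize the sample spectrum to the reference spectrum and return the
        near-field contrast eta_n and the nano-FTIR absorption a_n = Im(eta_n)."""
        eta = sigma_sample / sigma_reference
        return eta, np.imag(eta)

    # made-up complex spectra (amplitude * exp(i*phase)) at three wavenumbers
    sigma_sample = np.array([0.9, 0.7, 0.8]) * np.exp(1j * np.array([0.05, 0.30, 0.08]))
    sigma_reference = np.exp(1j * 0.02) * np.ones(3)

    eta, a = nano_ftir_absorption(sigma_sample, sigma_reference)
    print(a)                                     # largest where the sample absorbs (middle point)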
For phononic and plasmonic samples in the vicinity of the corresponding surface resonances, the relation a_n ∝ A(ν) might not hold. In such cases the simple relation between the near-field contrast and the sample absorbance cannot be obtained, and modeling of the tip-sample interaction is required for the spectroscopic identification of such samples.
Analytical and numerical simulations
Significant efforts have been put into simulating the nano-FTIR electric field and complex scattering signal through numerical methods (using commercial proprietary software such as CST Microwave Studio, Lumerical FDTD, and COMSOL Multiphysics) as well as through analytical models (such as the finite dipole and point dipole approximations). Analytical models tend to be simpler but less accurate, while numerical methods are more rigorous but computationally expensive.
References
External links
Infrared Nanoscopy Laboratory of Fritz Keilmann (Ludwig-Maximilians-Universität)
Nanooptics group of Rainer Hillenbrand (CIC nanoGUNE)
Nano-Optics & Metamaterials group of Thomas Taubner (RWTH Aachen)
Infrared Nano-Optics of Quantum Materials group of Dmitri Basov (UC San Diego)
Scanning probe microscopy
Infrared spectroscopy
Scientific techniques | Nano-FTIR | [
"Physics",
"Chemistry",
"Materials_science"
] | 4,360 | [
"Spectrum (physical sciences)",
"Infrared spectroscopy",
"Scanning probe microscopy",
"Microscopy",
"Nanotechnology",
"Spectroscopy"
] |
51,433,179 | https://en.wikipedia.org/wiki/Cosmon | In physical cosmology, a cosmon or cosmonium is a hypothetical form of matter. The idea was originally proposed by Georges Lemaître, who suggested the concept of a 'primeval atom' (L'Hypothèse de l'Atome Primitif) in 1946, leading up to the theory of the Big Bang. He illustrated the idea by imagining an object 30 times larger than the volume of the sun containing all the matter of the Universe. Its density would be around . In his view, this occurred somewhere between 20 and 60 billion years ago.
The idea of a primeval “super-atom” lived on and was developed further by Maurice Goldhaber in 1956. In his proposal there would have been a point, which had been called a universon, that would have collapsed into a cosmon and anti-cosmon pair. Goldhaber addressed the question of why there is any matter at all if equal amounts of matter and antimatter were formed in the Big Bang. One explanation for this is an asymmetry between matter and antimatter, meaning that slightly more matter than antimatter could have been formed, for instance 1001 matter particles for every 1000 antimatter particles. In Goldhaber's model, the cosmon and anticosmon would have flown apart, thereby explaining the issue without asymmetry.
In 1989, Hans Dehmelt attempted to modernize the idea of the primeval atom. In this hypothesis, the cosmonium would have been the heaviest form of matter at the beginning of the Big Bang.
References
Physical cosmology
Hypothetical elementary particles | Cosmon | [
"Physics",
"Astronomy"
] | 325 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Unsolved problems in physics",
"Astrophysics",
"Hypothetical elementary particles",
"Physics beyond the Standard Model",
"Physical cosmology"
] |
51,437,973 | https://en.wikipedia.org/wiki/Panting%20%28ship%20construction%29 | Panting refers to the tendency of steel hull plating to flex in and out like an oil can being squeezed when a ship is pitching. This occurs when a ship is making headway in waves. Panting creates significant stress on a ship's hull. It is potentially dangerous and can result in flooding and the separation of the hull and deck. The British battleship HMS Rodney suffered significant leaking from panting. Addressing panting is an essential component of ship design. It is typically countered by reinforcing the bow and the stern with beams and stringers.
References
Naval architecture
Shipbuilding | Panting (ship construction) | [
"Engineering"
] | 114 | [
"Naval architecture",
"Shipbuilding",
"Marine engineering"
] |
51,443,362 | https://en.wikipedia.org/wiki/Data%20augmentation | Data augmentation is a statistical technique which allows maximum likelihood estimation from incomplete data. Data augmentation has important applications in Bayesian analysis, and the technique is widely used in machine learning to reduce overfitting when training machine learning models, achieved by training models on several slightly-modified copies of existing data.
Synthetic oversampling techniques for traditional machine learning
Synthetic Minority Over-sampling Technique (SMOTE) is a method used to address imbalanced datasets in machine learning. In such datasets, the number of samples in different classes varies significantly, leading to biased model performance. For example, in a medical diagnosis dataset with 90 samples representing healthy individuals and only 10 samples representing individuals with a particular disease, traditional algorithms may struggle to accurately classify the minority class. SMOTE rebalances the dataset by generating synthetic samples for the minority class. For instance, if there are 100 samples in the majority class and 10 in the minority class, SMOTE can create synthetic samples by randomly selecting a minority class sample and one of its nearest minority-class neighbors, then generating a new sample along the line segment joining the two. This process helps increase the representation of the minority class, improving model performance.
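The following sketch shows the core interpolation idea behind SMOTE on a toy minority class; the neighbor count, random seed and helper name are arbitrary, and a production workflow would more likely rely on an established implementation such as the SMOTE class in imbalanced-learn.

    import numpy as np

    def smote_like(minority, n_new, k=5, seed=0):
        """Generate n_new synthetic samples by interpolating between a random minority
        sample and one of its k nearest minority-class neighbours."""
        rng = np.random.default_rng(seed)
        minority = np.asarray(minority, dtype=float)
        synthetic = []
        for _ in range(n_new):
            i = rng.integers(len(minority))
            x = minority[i]
            d = np.linalg.norm(minority - x, axis=1)        # distances to all minority samples
            neighbours = np.argsort(d)[1:k + 1]             # k nearest, excluding the sample itself
            x_nn = minority[rng.choice(neighbours)]
            lam = rng.random()                               # random position on the segment [x, x_nn]
            synthetic.append(x + lam * (x_nn - x))
        return np.array(synthetic)

    minority = np.random.default_rng(1).normal(size=(10, 2))   # e.g. 10 disease samples, 2 features
    print(smote_like(minority, n_new=20).shape)                 # (20, 2) synthetic samples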
Data augmentation for image classification
When convolutional neural networks grew larger in the mid-1990s, there was a lack of data to use, especially considering that some part of the overall dataset should be spared for later testing. It was proposed to perturb existing data with affine transformations to create new examples with the same labels, which were complemented by so-called elastic distortions in 2003, and the technique was widely used as of the 2010s. Data augmentation can enhance CNN performance and acts as a countermeasure against CNN profiling attacks.
Data augmentation has become fundamental in image classification, enriching training dataset diversity to improve model generalization and performance. The evolution of this practice has introduced a broad spectrum of techniques, including geometric transformations, color space adjustments, and noise injection.
Geometric Transformations
Geometric transformations alter the spatial properties of images to simulate different perspectives, orientations, and scales. Common techniques include:
Rotation: Rotating images by a specified degree to help models recognize objects at various angles.
Flipping: Reflecting images horizontally or vertically to introduce variability in orientation.
Cropping: Removing sections of the image to focus on particular features or simulate closer views.
Translation: Shifting images in different directions to teach models positional invariance.
Color Space Transformations
Color space transformations modify the color properties of images, addressing variations in lighting, color saturation, and contrast. Techniques include:
Brightness Adjustment: Varying the image's brightness to simulate different lighting conditions.
Contrast Adjustment: Changing the contrast to help models recognize objects under various clarity levels.
Saturation Adjustment: Altering saturation to prepare models for images with diverse color intensities.
Color Jittering: Randomly adjusting brightness, contrast, saturation, and hue to introduce color variability.
Noise Injection
Injecting noise into images simulates real-world imperfections, teaching models to ignore irrelevant variations. Techniques involve:
Gaussian Noise: Adding Gaussian noise mimics sensor noise or graininess.
Salt and Pepper Noise: Introducing black or white pixels at random simulates sensor dust or dead pixels.
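The sketch below combines one example from each family above (a horizontal flip, a brightness jitter and Gaussian noise) on a synthetic image array; the parameter ranges are illustrative, and libraries such as torchvision or Albumentations provide more complete transform pipelines.

    import numpy as np

    rng = np.random.default_rng(42)

    def augment(image):
        """Apply a random horizontal flip (geometric), a brightness jitter (color space)
        and Gaussian noise (noise injection) to an image in [0, 1] of shape (H, W, C)."""
        out = image.copy()
        if rng.random() < 0.5:
            out = out[:, ::-1, :]                                        # horizontal flip
        out = np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)             # brightness adjustment
        out = np.clip(out + rng.normal(0.0, 0.02, out.shape), 0.0, 1.0)  # Gaussian noise
        return out

    image = rng.random((32, 32, 3))                  # stand-in for a real training image
    augmented = [augment(image) for _ in range(4)]   # several slightly-modified copies
    print(len(augmented), augmented[0].shape)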
Data augmentation for signal processing
Residual or block bootstrap can be used for time series augmentation.
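A minimal sketch of a block bootstrap for a univariate series is shown below; the block length and toy signal are arbitrary choices.

    import numpy as np

    def block_bootstrap(series, block_len, seed=0):
        """Resample a time series by concatenating randomly chosen contiguous blocks,
        preserving short-range temporal dependence within each block."""
        rng = np.random.default_rng(seed)
        series = np.asarray(series)
        n_blocks = int(np.ceil(len(series) / block_len))
        starts = rng.integers(0, len(series) - block_len + 1, size=n_blocks)
        blocks = [series[s:s + block_len] for s in starts]
        return np.concatenate(blocks)[:len(series)]

    x = np.sin(np.linspace(0, 20, 200)) + np.random.default_rng(1).normal(0, 0.1, 200)
    surrogate = block_bootstrap(x, block_len=20)
    print(surrogate.shape)                           # same length as the original series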
Biological signals
Synthetic data augmentation is of paramount importance for machine learning classification, particularly for biological data, which tend to be high dimensional and scarce. The applications of robotic control and augmentation in disabled and able-bodied subjects still rely mainly on subject-specific analyses. Data scarcity is notable in signal processing problems such as for Parkinson's disease electromyography signals, which are difficult to source. Zanini et al. noted that it is possible to use a generative adversarial network (in particular, a DCGAN) to perform style transfer in order to generate synthetic electromyographic signals that correspond to those exhibited by sufferers of Parkinson's disease.
The approaches are also important in electroencephalography (brainwaves). Wang et al. explored the idea of using deep convolutional neural networks for EEG-based emotion recognition; their results show that emotion recognition was improved when data augmentation was used.
A common approach is to generate synthetic signals by re-arranging components of real data. Lotte proposed a method of "Artificial Trial Generation Based on Analogy" where three data examples x1, x2, x3 provide examples and an artificial example x_art is formed which is to x3 what x2 is to x1. A transformation is applied to x1 to make it more similar to x2; the same transformation is then applied to x3, which generates x_art. This approach was shown to improve the performance of a Linear Discriminant Analysis classifier on three different datasets.
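The sketch below illustrates the analogy idea with a simple per-channel shift-and-scale transformation; this choice of transformation is an assumption of the example, not Lotte's exact procedure.

    import numpy as np

    def analogy_trial(x1, x2, x3):
        """Estimate a per-channel shift-and-scale transformation that maps x1 towards x2
        and apply the same transformation to x3, producing an artificial trial that is
        to x3 what x2 is to x1 (the transformation choice is an assumption of this sketch)."""
        scale = (x2.std(axis=-1, keepdims=True) + 1e-12) / (x1.std(axis=-1, keepdims=True) + 1e-12)
        shift = x2.mean(axis=-1, keepdims=True) - scale * x1.mean(axis=-1, keepdims=True)
        return scale * x3 + shift

    rng = np.random.default_rng(0)
    x1, x2, x3 = (rng.normal(size=(8, 250)) for _ in range(3))   # e.g. 8 EEG channels, 250 samples
    print(analogy_trial(x1, x2, x3).shape)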
Current research shows that great impact can be derived from relatively simple techniques. For example, Freer observed that introducing noise into gathered data to form additional data points improved the learning ability of several models which otherwise performed relatively poorly. Tsinganos et al. studied the approaches of magnitude warping, wavelet decomposition, and synthetic surface EMG models (generative approaches) for hand gesture recognition, finding classification performance increases of up to +16% when augmented data was introduced during training. More recently, data augmentation studies have begun to focus on the field of deep learning, more specifically on the ability of generative models to create artificial data which is then introduced during the classification model training process. In 2018, Luo et al. observed that useful EEG signal data could be generated by Conditional Wasserstein Generative Adversarial Networks (GANs), which were then introduced to the training set in a classical train-test learning framework. The authors found that classification performance was improved when such techniques were introduced.
Mechanical signals
The prediction of mechanical signals based on data augmentation brings a new generation of technological innovations, such as new energy dispatch, the 5G communication field, and robotics control engineering. In 2022, Yang et al. integrated constraints, optimization and control into a deep network framework based on data augmentation and data pruning with spatio-temporal data correlation, and improved the interpretability, safety and controllability of deep learning in real industrial projects through explicit mathematical programming equations and analytical solutions.
See also
Oversampling and undersampling in data analysis
Surrogate data
Generative adversarial network
Variational autoencoder
Data pre-processing
Convolutional neural network
Regularization (mathematics)
Data preparation
Data fusion
References
Machine learning | Data augmentation | [
"Engineering"
] | 1,328 | [
"Artificial intelligence engineering",
"Machine learning"
] |
41,394,066 | https://en.wikipedia.org/wiki/Spherical%20roller%20thrust%20bearing | A spherical roller thrust bearing is a rolling-element bearing of thrust type that permits rotation with low friction, and permits angular misalignment. The bearing is designed to take radial loads, and heavy axial loads in one direction. Typically these bearings support a rotating shaft in the bore of the shaft washer that may be misaligned in respect to the housing washer. The misalignment is possible due to the spherical internal shape of the housing washer.
Construction
Spherical roller thrust bearings consist of a shaft washer (for radial bearings often called "inner ring"), a housing washer (for radial bearings often called "outer ring"), asymmetrical rollers and a cage. There are also bearing units available that can take axial loads in two directions.
History
The spherical roller thrust bearing was introduced by SKF in 1939. The design of the early bearings is similar to the design that is still in use in modern machines.
Designs
The internal design of the bearing is not standardised by ISO, so it varies between different manufacturers and different series. Some of the design parameters are:
Roller shape and dimensions
Flange design
Non-rotational notches in house washer
The spherical roller thrust bearings have the highest load rating density of all thrust bearings.
Dimensions
External dimensions of spherical roller bearings are standardised by ISO in the standard ISO 104:2015.
Some common series of spherical roller bearings are:
292
293
294
Materials
Bearing rings and rolling elements can be made of a number of different materials, but the most common is "chrome steel", a material with approximately 1.5% chrome content. Such "chrome steel" has been standardized by a number of authorities, and there are therefore a number of similar materials, such as: AISI 52100 (USA), 100CR6 (Germany), SUJ2 (Japan) and GCR15 (China).
Some common materials for bearing cages:
Sheet steel (stamped or laser-cut)
Brass (stamped or machined)
Steel (machined)
The choice of material is mainly done by the manufacturing volume and method. For large-volume bearings, cages are often of stamped sheet-metal, whereas low volume series often have cages of machined brass or machined steel.
Manufacturers
Some manufacturers of spherical roller bearings are SKF, Schaeffler, Timken Company and NSK Ltd.
Applications
Spherical roller thrust bearings are used in industrial applications, where there are heavy axial loads, moderate speeds and possibly misalignment.
Some common application areas are:
Gearboxes
Pulp and paper processing equipment (notably refiners)
Marine propulsion and offshore drilling
Cranes and swing bridges
Water turbines
Extruders for injection molding
See also
Bearing (mechanical)
Rolling-element bearing
Self-aligning ball bearing
Spherical plain bearing
Spherical roller bearing
Tapered roller bearing
Thrust bearing
References
Bearings (mechanical)
Rolling-element bearings
Mechanical engineering
Swedish inventions | Spherical roller thrust bearing | [
"Physics",
"Engineering"
] | 585 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
41,396,440 | https://en.wikipedia.org/wiki/Frank%E2%80%93Kasper%20phases | Topologically close-packed (TCP) phases, also known as Frank-Kasper (FK) phases, are one of the largest groups of intermetallic compounds, known for their complex crystallographic structure and physical properties. Owing to their combination of periodic and aperiodic structure, some TCP phases belong to the class of quasicrystals. Applications of TCP phases as high-temperature structural and superconducting materials have been highlighted; however, they have not yet been sufficiently investigated for details of their physical properties. Also, their complex and often non-stoichiometric structure makes them good subjects for theoretical calculations.
History
In 1958, Frederick C. Frank and John S. Kasper, in their original work investigating many complex alloy structures, showed that non-icosahedral environments form an open-ended network, which they called the major skeleton and which is now identified as the disclination locus. They came up with a methodology to pack asymmetric icosahedra into crystals using other polyhedra with larger coordination numbers. These coordination polyhedra were constructed to maintain topological close packing (TCP).
Unit-cell geometries classification
Based on the tetrahedral units, FK crystallographic structures are classified into low and high polyhedral groups denoted by their coordination numbers (CN), referring to the number of atoms coordinating the atom that centers the polyhedron.
Some atoms have an icosahedral structure with low coordination, labeled CN12. Some others have higher coordination numbers of 14, 15 and 16, labeled CN14, CN15, and CN16, respectively. These atoms with higher coordination numbers form uninterrupted networks connected along the directions where the five-fold icosahedral symmetry is replaced by six-fold local symmetry. The sites of 12-coordination are called minor sites and those with more than 12-fold coordination are major sites.
Classic FK phases
The most common members of a FK-phases family are: A15, Laves phases, σ, μ, M, P, and R.
A15 phases
A15 phases are intermetallic alloys with an average coordination number (ACN) of 13.5 and eight atoms of A3B stoichiometry per unit cell, where the two B atoms are surrounded by CN12 polyhedra (icosahedra) and the six A atoms are surrounded by CN14 polyhedra. Nb3Ge is a superconductor with the A15 structure.
Laves phases
The three Laves phases are intermetallic compounds composed of CN12 and CN16 polyhedra with AB2 stoichiometry, commonly seen in binary metal systems like MgZn2. Due to the small solubility of AB2 structures, Laves phases are almost line compounds, though sometimes they can have a wide homogeneity region.
σ, μ, M, P, and R phases
The sigma (σ) phase is an intermetallic compound known for having no definite stoichiometric composition; it forms at electron/atom ratios in the range of 6.2 to 7. It has a primitive tetragonal unit cell with 30 atoms. CrFe is a typical alloy crystallizing in the σ phase at the equiatomic composition, with physical properties adjustable on the basis of its structural components, or of its chemical composition for a given structure.
The μ phase has an ideal A6B7 stoichiometry, with prototype W6Fe7, and contains a rhombohedral cell with 13 atoms. While many other Frank-Kasper alloy types have been identified, more continue to be found. The alloy Nb10Ni9Al3 is the prototype for the M phase. It has an orthorhombic space group with 52 atoms per unit cell. The alloy Cr9Mo21Ni20 is the prototype for the P phase. It has a primitive orthorhombic cell with 56 atoms. The alloy Co5Cr2Mo3 is the prototype for the R phase, which belongs to a rhombohedral space group with 53 atoms per cell.
Applications
FK-phase materials have attracted attention as high-temperature structural and superconducting materials. Their complex and often non-stoichiometric structure makes them good subjects for theoretical calculations.
A15, Laves and σ are the most applicable FK structures with interesting fundamental properties.
The A15 compounds include important intermetallic superconductors such as: Nb3Sn, Nb3Al, and V3Ga with applications including wires for high-field superconducting magnets. Nb3Sn is also being investigated as a potential material for fabricating superconducting radio frequency cavities.
Small amounts of σ phase considerably decrease flexibility and impair erosion resistance. While the addition of refractory elements like W, Mo, or Re to FK phases helps to enhance the thermal properties of alloys such as steels or nickel-based superalloys, it increases the risk of unwanted precipitation of intermetallic compounds.
See also
Complex metallic alloys
References
Crystal structure types | Frank–Kasper phases | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,036 | [
"Inorganic compounds",
"Metallurgy",
"Crystal structure types",
"Crystallography",
"Alloys",
"Intermetallics",
"Condensed matter physics"
] |
47,360,281 | https://en.wikipedia.org/wiki/Wild-type%20transthyretin%20amyloid | Wild-type transthyretin amyloid (WTTA), also known as senile systemic amyloidosis (SSA), is a disease that typically affects the heart and tendons of elderly people. It is caused by the accumulation of a wild-type (that is to say a normal) protein called transthyretin. This is in contrast to a related condition called transthyretin-related hereditary amyloidosis where a genetically mutated transthyretin protein tends to deposit much earlier than in WTTA due to abnormal conformation and bioprocessing.
It belongs to a group of diseases called amyloidosis, chronic progressive conditions linked to abnormal deposition of normal or abnormal proteins, because these proteins are misshapen and cannot be properly degraded and eliminated by the cell metabolism.
Signs and symptoms
Wild-type transthyretin amyloid accumulates mainly in the heart, where it causes stiffness and often thickening of its walls, leading consequently to shortness of breath and intolerance to exercise, called diastolic dysfunction. Excessively slow heart rate can also occur, such as in sick sinus syndrome, with ensuing fatigue and dizziness. Wild-type transthyretin deposition is also a common cause of carpal tunnel syndrome in elderly men, which may cause pain, tingling and loss of sensation in the hands. Some patients may develop carpal tunnel syndrome as an initial symptom of wild-type transthyretin amyloid.
There appears to be an increased risk of developing hematuria or blood in the urine due to urological lesions.
Natural course
The disorder typically affects the heart and its prevalence increases in older age groups. Men are affected much more frequently than women, and up to 25% of men over the age of 80 may have evidence of WTTA.
Patients often present with increased thickness of the wall of the main heart chamber, the left ventricle. People affected by WTT amyloidosis are likely to have required a pacemaker before diagnosis and have a high incidence of a partial electrical blockage of the heart, known as left bundle branch block. Low-voltage ECG signals, such as small QRS complexes, are widely considered a marker of cardiac amyloidosis.
A much better survival has been reported for patients with WTTA as opposed to cardiac AL amyloidosis.
Diagnosis
The condition is suspected in an elderly person, especially male, presenting with symptoms of heart failure such as shortness of breath or swollen legs, and/or disease of the electrical system of the heart with ensuing slow heart rate, dizziness or fainting spells. The diagnosis is confirmed on the basis of a biopsy, which can be treated with a special stain called Congo red that will be positive in this condition, and immunohistochemistry. However, this disease can now be diagnosed non-invasively with the help of Tc-99m pyrophosphate scintigraphy.
Treatment
No drug has been shown to be able to arrest or slow down the process of this condition. There is promise that two drugs, tafamidis and diflunisal, may improve the outlook, since they were demonstrated in randomized clinical trials to benefit patients affected by the related condition FAP-1, otherwise known as transthyretin-related hereditary amyloidosis. Permanent pacing can be employed in cases of symptomatic slow heart rate (bradycardia). Heart failure medications can be used to treat symptoms of difficulty breathing and congestion.
A 2021 investigational first-in-human study demonstrated that NTLA-2001, a therapeutic agent based on the CRISPR-Cas9 system, induces targeted knockout of the transthyretin protein.
Orphan drug status for transthyretin (TTR) amyloidosis
Because of preliminary data suggesting the drug may have activity, the U.S. FDA in 2013 granted tolcapone "orphan drug status" in studies aiming at the treatment of transthyretin familial amyloidosis (ATTR). However, tolcapone was not FDA approved for the treatment of this disease.
See also
Transthyretin-related hereditary amyloidosis
Amyloidosis
References
External links
The Amyloidosis Center at Boston University
Mayo Clinic Definition
A Patient Guide to Amyloidosis
Amyloidosis
Histopathology
Structural proteins | Wild-type transthyretin amyloid | [
"Chemistry"
] | 876 | [
"Histopathology",
"Microscopy"
] |
47,361,435 | https://en.wikipedia.org/wiki/Teardrop%20%28electronics%29 | A teardrop is a typically drop-shaped feature on a printed circuit board that can be found at the junction of vias or contact pads.
Purpose
The main purpose of teardrops is to enhance structural integrity in the presence of thermal or mechanical stresses, for example due to vibration or flexing. Structural integrity may be compromised, e.g., by misalignment during drilling, so that too much copper is removed by the drill hole in the area where a trace connects to the pad or via. An additional advantage is the enlargement of manufacturing tolerances, making manufacturing easier and cheaper.
Shape
While the typical shape of a teardrop is a straight-line taper, teardrops may also be concave; this type of teardrop is also referred to as filleted or straight. To produce a snowman-shaped teardrop, a secondary pad of smaller size is added at the junction, overlapping with the primary pad (hence the nickname).
Necking
For similar reasons, a technique called trace necking reduces (or necks down) the width of a trace where it approaches a narrower pad of a surface-mounted device or a through-hole with a diameter that is less than the width of the trace, or where the trace passes through bottlenecks (for example, between the pads of a component).
References
External links
High Frequency PCB
Printed circuit board manufacturing | Teardrop (electronics) | [
"Engineering"
] | 275 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |
47,361,597 | https://en.wikipedia.org/wiki/Cornforth%20rearrangement | In organic chemistry, the Cornforth rearrangement is a rearrangement reaction of a 4-acyloxazole in which the group attached to an acyl on position 4 and the substituent on position 5 of an oxazole ring exchange places. It was first reported in 1949, and is named for John Cornforth. The reaction is used in the synthesis of amino acids, where the corresponding oxazoles occur as intermediates.
Overview
In the original work, Cornforth used 5-ethoxy-2-phenyloxazole-4-carboxamide (R1 = phenyl, R2 = ethoxy, R3 = amino).
The reaction also works, however, with a large number of other carbonyl-substituted 1,3-oxazoles.
In the early 1970s, the reaction was further researched by Michael Dewar. It was shown that the reaction gave good yields, over 90%, when using nitrogen-containing heterocycles at the R3 position.
Mechanism
The mechanism of the Cornforth rearrangement begins with a thermal pericyclic ring opening which furnishes a nitrile ylide intermediate 1, which then undergoes rearrangement to the oxazole that is isomeric to the starting compound.
The ylide intermediate has several resonance contributors and the stability of said structures affects the outcome of the reaction, since the intermediate will revert to the starting material if the third resonance structure is most stable. Whether the reaction takes place is dependent on the energy difference between the starting material and the product.
References
Rearrangement reactions
Oxazoles
Name reactions | Cornforth rearrangement | [
"Chemistry"
] | 336 | [
"Name reactions",
"Rearrangement reactions",
"Organic chemistry stubs",
"Organic reactions"
] |
59,222,107 | https://en.wikipedia.org/wiki/Pairing%20strategy | In a positional game, a pairing strategy is a strategy that a player can use to guarantee victory, or at least force a draw. It is based on dividing the positions on the game-board into disjoint pairs. Whenever the opponent picks a position in a pair, the player picks the other position in the same pair.
Example
Consider the 5-by-5 variant of tic-tac-toe. The 24 non-central board positions can be partitioned into 12 pairwise-disjoint pairs, denoted by 1,...,12.
The central element (denoted by *) does not belong to any pair; it is not needed in this strategy.
Each horizontal, vertical or diagonal line contains at least one pair. Therefore the following pairing strategy can be used to force a draw: "whenever your opponent chooses an element of pair i, choose the other element of pair i". At the end of the game, you have an element of each winning-line. Therefore, you guarantee that the other player cannot win.
Since both players can use this strategy, the game is a draw.
This example is generalized below for an arbitrary Maker-Breaker game. In such a game, the goal of Maker is to occupy an entire winning-set, while the goal of Breaker is to prevent this by owning an element in each winning-set.
Pairing strategy for Maker
A pairing-strategy for Maker requires a set of element-pairs such that:
All pairs are pairwise-disjoint;
Every set that contains at least one element from each pair, contains some winning-set.
Whenever Breaker picks an element of a pair, Maker picks the other element of the same pair. At the end, Maker's set contains at least one element from each pair; by condition 2, he occupies an entire winning-set (this is true even when Maker plays second).
As an example, consider a game-board containing all vertices in a perfect binary tree except the root. The winning-sets are all the paths from the leaf to one of the two children of the root. We can partition the elements into pairs by pairing each element with its sibling. The pairing-strategy guarantees that Maker wins even when playing second. If Maker plays first, he can win even when the game-board contains also the root: in the first step he just picks the root, and from then on plays the above pairing-strategy.
Pairing strategy for Breaker
A pairing-strategy for Breaker requires a set of element-pairs such that:
All pairs are pairwise-disjoint;
Every winning-set contains at least one pair.
Whenever Maker picks an element of a pair, Breaker picks the other element of the same pair. At the end, Breaker has an element in each pair; by condition 2, he has an element in each winning-set.
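The sketch below checks the two conditions above for a candidate set of pairs on a toy game; the example game and function name are illustrative.

    from itertools import chain

    def is_breaker_pairing(pairs, winning_sets):
        """Check the two conditions: the pairs are pairwise disjoint, and every
        winning set contains at least one full pair."""
        elements = list(chain.from_iterable(pairs))
        if len(elements) != len(set(elements)):
            return False
        return all(any(set(p) <= set(w) for p in pairs) for w in winning_sets)

    # toy game on 4 positions whose winning sets are the two halves of the board
    winning_sets = [{0, 1}, {2, 3}]
    print(is_breaker_pairing([(0, 1), (2, 3)], winning_sets))   # True: Breaker can force a draw
    print(is_breaker_pairing([(0, 2)], winning_sets))           # False: {0, 1} contains no pair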
An example of such a pairing strategy for 5-by-5 tic-tac-toe is described above. Other sources show similar examples for 4x4 and 6x6 tic-tac-toe.
Another simple case when Breaker has a pairing-strategy is when all winning-sets are pairwise-disjoint and their size is at least 2.
References
Positional games | Pairing strategy | [
"Mathematics"
] | 660 | [
"Recreational mathematics",
"Game theory",
"Combinatorial game theory",
"Combinatorics"
] |
59,225,922 | https://en.wikipedia.org/wiki/Sand%20reinforced%20polyester%20composite | Sand reinforced polyester composites (SPCs) are building materials with sand acting as reinforcement in the composite. Pioneers in using sand reinforced composites include the German businessmen Gerhard Dust and Gunther Plötner, who made sand reinforced composite bricks with polyester resin and hardener to provide emergency relief housing for those affected by the 2010 earthquake in Haiti. Sand was used in the composites because of its abundance and the ease of obtaining it.
Composition
The composition of sand is highly variable depending on its origin. The most common material found in non-tropical, coastal, and inland sand is silica, usually in the form of quartz, which is considerably hard and one of the most common minerals resistant to weathering.
Preparation
Drying sand
Mixing with alternate material(s)
Adding a hardener to the mixture (such as methyl ethyl ketone peroxide)
Pouring mixture into mold and drying
Releasing from mold and smoothing
Properties
SPCs decrease water absorption because of the hydrophobic nature of sand.
The compression strength of SPCs is typically lower than that of non-sand-reinforced composites.
Flexural strength of SPCs decreases with an increasing weight percent of sand. The composite becomes increasingly brittle as the weight percent of sand increases.
A greater weight percent of sand increases the composite's hardness (Vickers hardness test) – sand has reinforcing capabilities.
Thermal conductivity decreases with a greater weight percent of sand. Sand has insulating properties.
References
Composite materials | Sand reinforced polyester composite | [
"Physics"
] | 289 | [
"Materials",
"Composite materials",
"Matter"
] |
71,478,579 | https://en.wikipedia.org/wiki/NLTS%20conjecture | In quantum information theory, the no low-energy trivial state (NLTS) conjecture is a precursor to a quantum PCP theorem (qPCP) and posits the existence of families of Hamiltonians with all low-energy states of non-trivial complexity. It was formulated by Michael Freedman and Matthew Hastings in 2013. NLTS is a consequence of one aspect of qPCP problems: the inability to certify an approximation of local Hamiltonians via NP completeness. In other words, it is a consequence of the QMA complexity of qPCP problems. On a high level, it is one property of the non-Newtonian complexity of quantum computation. The NLTS and qPCP conjectures posit the near-infinite complexity involved in predicting the outcome of quantum systems with many interacting states. These calculations of complexity would have implications for quantum computing, such as the stability of entangled states at higher temperatures and the occurrence of entanglement in natural systems. A proof of the NLTS conjecture was presented and published as part of STOC 2023.
NLTS property
The NLTS property is the underlying set of constraints that forms the basis for the NLTS conjecture.
Definitions
Local hamiltonians
A k-local Hamiltonian is a Hermitian matrix acting on n qubits which can be represented as the sum of Hamiltonian terms acting upon at most k qubits each:
H = Σ_m H_m
The general k-local Hamiltonian problem is, given a k-local Hamiltonian H, to find the smallest eigenvalue λ of H. λ is also called the ground-state energy of the Hamiltonian.
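For very small systems the ground-state energy can be found by brute-force diagonalization, as in the sketch below; the specific 2-local model (a short transverse-field Ising chain) is an arbitrary example, and this approach does not scale to the system sizes relevant for the conjecture.

    import numpy as np

    I = np.eye(2)
    Z = np.diag([1.0, -1.0])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])

    def embed(ops, n):
        """Tensor product placing the single-qubit operators ops = {site: op} on n qubits."""
        out = np.array([[1.0]])
        for site in range(n):
            out = np.kron(out, ops.get(site, I))
        return out

    n = 4
    # example 2-local Hamiltonian: a short transverse-field Ising chain
    H = sum(embed({i: Z, i + 1: Z}, n) for i in range(n - 1))
    H = H + 0.5 * sum(embed({i: X}, n) for i in range(n))
    print(np.linalg.eigvalsh(H).min())               # the ground-state energy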
The family of local Hamiltonians thus arises out of the k-local problem. Kliesch states the following as a definition for local Hamiltonians in the context of NLTS:
Let I ⊂ N be an index set. A family of local Hamiltonians is a set of Hamiltonians {H(n)}, n ∈ I, where each H(n) is defined on n finite-dimensional subsystems (in the following taken to be qubits), that are of the form
H(n) = Σ_m Hm(n),
where each Hm(n) acts non-trivially on O(1) qubits. Another constraint is that the operator norm of Hm(n) is bounded by a constant independent of n and that each qubit is only involved in a constant number of terms Hm(n).
Topological order
In physics, topological order is a kind of order in the zero-temperature phase of matter (also known as quantum matter). In the context of NLTS, Kliesch states: "a family of local gapped Hamiltonians is called topologically ordered if any ground states cannot be prepared from a product state by a constant-depth circuit".
NLTS property
Kliesch defines the NLTS property thus:
Let I be an infinite set of system sizes. A family of local Hamiltonians {H(n)}, n ∈ I has the NLTS property if there exists ε > 0 and a function f : N → N such that
for all n ∈ I, H(n) has ground energy 0,
⟨0n|U†H(n)U|0n⟩ > εn for any depth-d circuit U consisting of two qubit gates and for any n ∈ I with n ≥ f(d).
NLTS conjecture
There exists a family of local Hamiltonians with the NLTS property.
Quantum PCP conjecture
Proving the NLTS conjecture is an obstacle for resolving the qPCP conjecture, an even harder theorem to prove. The qPCP conjecture is a quantum analogue of the classical PCP theorem. The classical PCP theorem states that satisfiability problems like 3SAT are NP-hard when estimating the maximal number of clauses that can be simultaneously satisfied in a Hamiltonian system. In layman's terms, classical PCP describes the near-infinite complexity involved in predicting the outcome of a system with many resolving states, such as a water bath full of hundreds of magnets. qPCP increases the complexity by trying to solve PCP for quantum states. Though it has not been proven yet, a positive proof of qPCP would imply that quantum entanglement in Gibbs states could remain stable at higher-energy states above absolute zero.
NLETS proof
NLTS on its own is difficult to prove, though a simpler no low-error trivial states (NLETS) theorem has been proven, and that proof is a precursor for NLTS.
NLETS is defined as:
Let k > 1 be some integer, and {Hn}n ∈ N be a family of k-local Hamiltonians. {Hn}n ∈ N is NLETS if there exists a constant ε > 0 such that any ε-impostor family F = {ρn}n ∈ N of {Hn}n ∈ N is non-trivial.
References
Quantum information theory
Conjectures
Conjectures that have been proved | NLTS conjecture | [
"Mathematics"
] | 1,004 | [
"Unsolved problems in mathematics",
"Conjectures",
"Conjectures that have been proved",
"Mathematical problems",
"Mathematical theorems"
] |
71,481,267 | https://en.wikipedia.org/wiki/Carl%20Hermann%20Medal | The Carl Hermann Medal is the highest award in the field of crystallography from the German Crystallographic Society. It is named after the German physicist and professor of crystallography Carl Hermann, who, along with Paul Peter Ewald, created the Strukturbericht designation system for crystallographic prototypes. The medal is awarded during the annual meeting of the society.
Carl Hermann Medal recipients
1996 Gerhard Borrmann
1997 Hartmut Bärnighausen
1998 Siegfried Haussühl
1999 George Sheldrick
2000 Heinz Jagodzinski
2001 Theo Hahn, Hans Wondratschek
2002 Friedrich Liebau
2003 Hans-Joachim Bunge
2004 Wolfram Saenger
2005 Peter Paufler
2006 Werner Fischer
2008 Hans Burzlaff
2009 Armin Kirfel
2010 Wolfgang Jeitschko
2011 Gernot Heger
2013 Emil Makovicky
2014 Axel T. Brünger
2015 Peter Luger
2016 Hartmut Fueß
2017 Wolfgang Neumann
2018 Walter Steurer
2019 Georg E. Schulz
2020 Dieter Fenske
2021 Karl Fischer
2022 Wulf Depmeier
2023 Rolf Hilgenfeld
2024 Juri Grin
See also
Ewald Prize
References
External links
Recipients of the Carl Hermann Medal from the German Crystallographic Society
Awards established in 1996
Medals
Crystallography awards
German science and technology awards | Carl Hermann Medal | [
"Chemistry",
"Materials_science",
"Technology"
] | 267 | [
"Science and technology awards",
"Crystallography awards",
"Crystallography",
"Science award stubs"
] |
71,483,551 | https://en.wikipedia.org/wiki/Spectrum%20%28physical%20sciences%29 | In the physical sciences, the term spectrum was introduced first into optics by Isaac Newton in the 17th century, referring to the range of colors observed when white light was dispersed through a prism.
Soon the term referred to a plot of light intensity or power as a function of frequency or wavelength, also known as a spectral density plot.
Later it expanded to apply to other waves, such as sound waves and sea waves that could also be measured as a function of frequency (e.g., noise spectrum, sea wave spectrum). It has also been expanded to more abstract "signals", whose power spectrum can be analyzed and processed. The term now applies to any signal that can be measured or decomposed along a continuous variable, such as energy in electron spectroscopy or mass-to-charge ratio in mass spectrometry. Spectrum is also used to refer to a graphical representation of the signal as a function of the dependent variable.
Etymology
Electromagnetic spectrum
Electromagnetic spectrum refers to the full range of all frequencies of electromagnetic radiation and also to the characteristic distribution of electromagnetic radiation emitted or absorbed by a particular object. Devices used to measure an electromagnetic spectrum are called spectrographs or spectrometers. The visible spectrum is the part of the electromagnetic spectrum that can be seen by the human eye. The wavelength of visible light ranges from 390 to 700 nm. The absorption spectrum of a chemical element or chemical compound is the spectrum of frequencies or wavelengths of incident radiation that are absorbed by the compound due to electron transitions from a lower to a higher energy state. The emission spectrum refers to the spectrum of radiation emitted by the compound due to electron transitions from a higher to a lower energy state.
Light from many different sources contains various colors, each with its own brightness or intensity. A rainbow, or prism, sends these component colors in different directions, making them individually visible at different angles. A graph of the intensity plotted against the frequency (showing the brightness of each color) is the frequency spectrum of the light. When all the visible frequencies are present equally, the perceived color of the light is white, and the spectrum is a flat line. Therefore, flat-line spectra in general are often referred to as white, whether they represent light or another type of wave phenomenon (sound, for example, or vibration in a structure).
In radio and telecommunications, the frequency spectrum can be shared among many different broadcasters. The radio spectrum is the part of the electromagnetic spectrum corresponding to frequencies below 300 GHz, which corresponds to wavelengths longer than about 1 mm. The microwave spectrum corresponds to frequencies between 300 MHz (0.3 GHz) and 300 GHz and wavelengths between one meter and one millimeter. Each broadcast radio and TV station transmits a wave on an assigned frequency range, called a channel. When many broadcasters are present, the radio spectrum consists of the sum of all the individual channels, each carrying separate information, spread across a wide frequency spectrum. Any particular radio receiver will detect a single function of amplitude (voltage) vs. time. The radio then uses a tuned circuit or tuner to select a single channel or frequency band and demodulate or decode the information from that broadcaster. If we made a graph of the strength of each channel vs. the frequency of the tuner, it would be the frequency spectrum of the antenna signal.
In astronomical spectroscopy, the strength, shape, and position of absorption and emission lines, as well as the overall spectral energy distribution of the continuum, reveal many properties of astronomical objects. Stellar classification is the categorisation of stars based on their characteristic electromagnetic spectra. The spectral flux density is used to represent the spectrum of a light-source, such as a star.
In radiometry and colorimetry (or color science more generally), the spectral power distribution (SPD) of a light source is a measure of the power contributed by each frequency or color in a light source. The light spectrum is usually measured at points (often 31) along the visible spectrum, in wavelength space instead of frequency space, which makes it not strictly a spectral density. Some spectrophotometers can measure increments as fine as one to two nanometers, and even higher-resolution devices with resolutions of less than 0.5 nm have been reported. The values are used to calculate other specifications and then plotted to show the spectral attributes of the source. This can be helpful in analyzing the color characteristics of a particular source.
Mass spectrum
A plot of ion abundance as a function of mass-to-charge ratio is called a mass spectrum. It can be produced by a mass spectrometer instrument. The mass spectrum can be used to determine the quantity and mass of atoms and molecules. Tandem mass spectrometry is used to determine molecular structure.
Energy spectrum
In physics, the energy spectrum of a particle is the number of particles or intensity of a particle beam as a function of particle energy. Examples of techniques that produce an energy spectrum are alpha-particle spectroscopy, electron energy loss spectroscopy, and mass-analyzed ion-kinetic-energy spectrometry.
Displacement
Oscillatory displacements, including vibrations, can also be characterized spectrally.
For water waves, see wave spectrum and tide spectrum.
Sound and non-audible acoustic waves can also be characterized in terms of their spectral density, for example in timbre and musical acoustics.
Acoustical measurement
In acoustics, a spectrogram is a visual representation of the frequency spectrum of sound as a function of time or another variable.
A source of sound can have many different frequencies mixed. A musical tone's timbre is characterized by its harmonic spectrum. Sound in our environment that we refer to as noise includes many different frequencies. When a sound signal contains a mixture of all audible frequencies, distributed equally over the audio spectrum, it is called white noise.
The spectrum analyzer is an instrument which can be used to convert the sound wave of the musical note into a visual display of the constituent frequencies. This visual display is referred to as an acoustic spectrogram. Software based audio spectrum analyzers are available at low cost, providing easy access not only to industry professionals, but also to academics, students and the hobbyist. The acoustic spectrogram generated by the spectrum analyzer provides an acoustic signature of the musical note. In addition to revealing the fundamental frequency and its overtones, the spectrogram is also useful for analysis of the temporal attack, decay, sustain, and release of the musical note.
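The following is a minimal sketch (not tied to any particular commercial analyzer) of how a software spectrum analyzer can compute an acoustic spectrogram; the sample rate, note frequencies and decay envelope are made-up values, and it assumes NumPy and SciPy are installed.

```python
# Minimal sketch: compute an acoustic spectrogram of a synthetic "musical note"
# (a 440 Hz fundamental with two overtones), assuming NumPy and SciPy are available.
import numpy as np
from scipy.signal import spectrogram

fs = 8000                      # sample rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)  # two seconds of signal
note = (1.0 * np.sin(2 * np.pi * 440 * t)       # fundamental
        + 0.5 * np.sin(2 * np.pi * 880 * t)     # first overtone
        + 0.25 * np.sin(2 * np.pi * 1320 * t))  # second overtone
note *= np.exp(-t / 0.8)       # simple exponential decay envelope

# Short-time Fourier analysis: frequency bins, time bins, and power per (f, t) cell
f, tt, Sxx = spectrogram(note, fs=fs, nperseg=512, noverlap=256)

# The bin with the most average power lies close to the 440 Hz fundamental
print(f[Sxx.mean(axis=1).argmax()])
```

The short-time Fourier transform used by `spectrogram` is what turns the single amplitude-versus-time signal into the time-frequency picture described above.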
Continuous versus discrete spectra
In the physical sciences, the spectrum of a physical quantity (such as energy) may be called continuous if it is non-zero over the whole spectrum domain (such as frequency or wavelength) or discrete if it attains non-zero values only in a discrete set over the independent variable, with band gaps between pairs of spectral bands or spectral lines.
The classical example of a continuous spectrum, from which the name is derived, is the part of the spectrum of the light emitted by excited atoms of hydrogen that is due to free electrons becoming bound to a hydrogen ion and emitting photons, which are smoothly spread over a wide range of wavelengths, in contrast to the discrete lines due to electrons falling from some bound quantum state to a state of lower energy. As in that classical example, the term is most often used when the range of values of a physical quantity may have both a continuous and a discrete part, whether at the same time or in different situations. In quantum systems, continuous spectra (as in bremsstrahlung and thermal radiation) are usually associated with free particles, such as atoms in a gas, electrons in an electron beam, or conduction band electrons in a metal. In particular, the position and momentum of a free particle have continuous spectra, but when the particle is confined to a limited space these spectra become discrete.
Often a continuous spectrum may be just a convenient model for a discrete spectrum whose values are too close to be distinguished, as in the phonons in a crystal.
The continuous and discrete spectra of physical systems can be modeled in functional analysis as different parts in the decomposition of the spectrum of a linear operator acting on a function space, such as the Hamiltonian operator.
The classical example of a discrete spectrum (for which the term was first used) is the characteristic set of discrete spectral lines seen in the emission spectrum and absorption spectrum of isolated atoms of a chemical element, which only absorb and emit light at particular wavelengths. The technique of spectroscopy is based on this phenomenon.
Discrete spectra are seen in many other phenomena, such as vibrating strings, microwaves in a metal cavity, sound waves in a pulsating star, and resonances in high-energy particle physics. The general phenomenon of discrete spectra in physical systems can be mathematically modeled with tools of functional analysis, specifically by the decomposition of the spectrum of a linear operator acting on a functional space.
In classical mechanics
In classical mechanics, discrete spectra are often associated to waves and oscillations in a bounded object or domain. Mathematically they can be identified with the eigenvalues of differential operators that describe the evolution of some continuous variable (such as strain or pressure) as a function of time and/or space.
Discrete spectra are also produced by some non-linear oscillators where the relevant quantity has a non-sinusoidal waveform. Notable examples are the sound produced by the vocal cords of mammals and by the stridulation organs of crickets, whose spectrum shows a series of strong lines at frequencies that are integer multiples (harmonics) of the oscillation frequency.
A related phenomenon is the appearance of strong harmonics when a sinusoidal signal (which has the ultimate "discrete spectrum", consisting of a single spectral line) is modified by a non-linear filter; for example, when a pure tone is played through an overloaded amplifier, or when an intense monochromatic laser beam goes through a non-linear medium. In the latter case, if two arbitrary sinusoidal signals with frequencies f and g are processed together, the output signal will generally have spectral lines at frequencies |mf + ng|, where m and n are any integers.
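A small numerical illustration of this effect is sketched below (the cubic non-linearity, tone frequencies and sampling parameters are arbitrary choices, not taken from the article); the discrete Fourier transform of the distorted signal shows spectral lines at the combination frequencies |mf + ng|.

```python
# Minimal sketch: two sinusoids at f and g passed through a memoryless non-linear
# "amplifier"; the output spectrum shows lines at integer combinations of f and g.
import numpy as np

fs, T = 2000, 4.0                      # sample rate (Hz) and duration (s), assumed
t = np.arange(0, T, 1 / fs)
f, g = 60.0, 85.0                      # arbitrary input tone frequencies (Hz)
x = np.sin(2 * np.pi * f * t) + np.sin(2 * np.pi * g * t)

y = x + 0.5 * x**3                     # simple cubic non-linearity

spec = np.abs(np.fft.rfft(y)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
lines = freqs[spec > 0.03]             # crude peak-picking threshold
# Lines appear at 35, 60, 85, 110, 180, 205, 230 and 255 Hz, all of the form |m*60 + n*85|
print(np.round(lines, 1))
```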
In quantum mechanics
In quantum mechanics, the discrete spectrum of an observable refers to the pure point spectrum of eigenvalues of the operator used to model that observable.
Discrete spectra are usually associated with systems that are bound in some sense (mathematically, confined to a compact space). The position and momentum operators have continuous spectra in an infinite domain, but a discrete (quantized) spectrum in a compact domain. The same properties of spectra hold for angular momentum, Hamiltonians and other operators of quantum systems.
The quantum harmonic oscillator and the hydrogen atom are examples of physical systems in which the Hamiltonian has a discrete spectrum. In the case of the hydrogen atom the spectrum has both a continuous and a discrete part, the continuous part representing the ionization.
See also
References
Structure | Spectrum (physical sciences) | [
"Physics"
] | 2,204 | [
"Waves",
"Physical phenomena",
"Spectrum (physical sciences)"
] |
71,486,679 | https://en.wikipedia.org/wiki/Trioctylphosphine%20selenide | Trioctylphosphine selenide (TOPSe) is an organophosphorus compound with the formula SeP(C8H17)3. It is used as a source of selenium in the preparation of cadmium selenide. TOPSe is a white, air-stable solid that is soluble in organic solvents. The molecule features a tetrahedral phosphorus center.
Preparation and use
TOPSe is usually prepared by oxidation of trioctylphosphine with elemental selenium:

Se + P(C8H17)3 → SeP(C8H17)3
Often the reaction is conducted without isolation of the TOPSe.
As a solution with trioctylphosphine oxide, TOPSe reacts with dimethylcadmium to give cadmium selenide. The mechanism is proposed to proceed in two steps, beginning with the formation of cadmium metal followed by its oxidation with the TOPSe. Similarly it has been used to produce lead selenide.
References
Organophosphorus compounds
Selenides | Trioctylphosphine selenide | [
"Chemistry"
] | 196 | [
"Organophosphorus compounds",
"Organic compounds",
"Functional groups"
] |
49,104,078 | https://en.wikipedia.org/wiki/Aluminium%20monobromide | Aluminium monobromide is a chemical compound with the empirical formula AlBr. It forms from the reaction of HBr with Al metal at high temperature. It disproportionates near room temperature:
6/n "[AlBr]n" → Al2Br6 + 4 Al
This reaction is reversed at temperatures higher than 1000 °C.
A more stable compound of aluminium and bromine is aluminium tribromide.
See also
Aluminium monofluoride
Aluminium monochloride
Aluminium monoiodide
External links
Aluminium monobromide, NIST Standard Reference Data Program
Aluminium(I) compounds
Bromides | Aluminium monobromide | [
"Chemistry"
] | 125 | [
"Bromides",
"Inorganic compounds",
"Inorganic compound stubs",
"Salts"
] |
49,109,176 | https://en.wikipedia.org/wiki/Technofeminism | Technofeminism is a theoretical and practical framework that explores the intersections between technology, gender, and power. Rooted in feminist thought, it critically examines how technology shapes, reinforces, or disrupts gender inequalities and seeks to envision more equitable futures through technological design and use.
The term is widely attributed to Judy Wajcman, a sociologist and feminist scholar. Wajcman introduced the concept in her influential 2004 book, TechnoFeminism.
Historically, technofeminism is closely linked to cyberfeminism, a concept which emerged in the early 1990s. The origins of cyber- and technofeminism are commonly traced to Donna Haraway's A Cyborg Manifesto. Since the 1990s, numerous feminist movements have developed, addressing feminism and technology in various ways and through different perspectives; their networks, ideas and concepts can overlap.
Technofeminism is often examined in conjunction with intersectionality, a term coined by Kimberlé Crenshaw which analyzes the relationships among various identities, such as race, socioeconomic status, sexuality, gender, and more.
TechnoFeminism book
Overview
TechnoFeminism is a book by academic sociologist Judy Wajcman which reframes the relationship between gender and technologies, and presents a feminist reading of the woman-machine relationship. It argues against a technocratic ideology, posing instead a thesis of society and technology being mutually constitutive. She supports this with examples of feminist history related to reproductive technologies and automation. It is considered a key contributor to the rise of feminist technoscience as a field.
Reception
According to a review in the American Journal of Sociology, Wajcman convincingly argues that "analyses of everything from transit systems to pap smears must include a technofeminist awareness of men's and women's often different positions as designers, manufacturing operatives, salespersons, purchasers, profiteers, and embodied users of such technologies."
In the journal Science, Technology and Human Values, Sally Wyatt notes that the "theoretical insights from feminist technoscience (can and should) be useful for empirical research as well as for political change and action" and that one way of moving towards this is "return to production and work as research sites because so much work in recent years has focused on consumption, identity, and representation."
Editions
In addition to the print edition, which has been reprinted several times, e-book editions of TechnoFeminism were released in 2013. The book has been translated into Spanish as El Tecnofeminismo.
Academic contexts
Scholars, such as Lori Beth De Hertogh, Liz Lane, and Jessica Oulette, as well as Angela Haas, have spoken out about the lack of technofeminist scholarship, especially in the context of overarching technological research.
A primary concern of technofeminism is the relationship between historical and societal norms, and technology design and implementation. Technofeminist scholars actively work to illuminate the often unnoticed inequities ingrained in systems and come up with solutions to combat them. They also research how technology can be used for positive ends, especially for marginalized groups.
Angela Haas
Angela Haas focuses on technofeminism as a predecessor of "digital cultural rhetorics research", the focus of her scholarship. The interactions between these two fields have led scholars to analyze the intersectional nature of technology, and how this intersectionality results in tools that do not serve all users.
Haas also explores how marginalized groups interact with digital technologies. Specific areas analyzed include how revealing aspects of one's identity influences a person's ability to exist online. At times digital spaces do not cater to marginalized groups; one example is the idea that someone who identifies as homosexual is perceived as "sexual in every situation", which alters how the online community they are part of interacts with them.
However, at times, technology can be renewed to serve women and marginalized groups. Haas uses the example of the vibrator to prove this point. While it is now associated with female empowerment, the tool was originally used to control women suffering from "hysteria".
De Hertogh et al.
Lori Beth De Hertogh, Liz Lane, and Jessica Ouellette expanded upon previous scholars' work, placing it within the specific context of the "Computers and Composition" journal. In their work, the scholars analyzed frequencies of the term "technofeminism/t" and associated words in the "Computers and Composition" journal. Unfortunately, the occurrences were limited, leading the scholars to call for increased use of the term "technofeminism" in scholarly materials and increased intersectional frameworks in mainstream technology literature.
Kerri Elise Hauman
Kerri Hauman explores technofeminist themes in her PhD dissertation, specifically discussing how feminism exists in digital spaces. Using the example of "Feministing", a blog serving those invested in "feminist activism", Hauman applies various rhetorical frameworks (such as invitational rhetoric and rhetorical ecologies) to understand how online platforms can further social justice initiatives in some ways, but promote the exclusion of disadvantaged groups in others.
See also
Digital rhetoric
Feminist technoscience
Cyberfeminism
References
Further reading
Farquharson, Karen "Book review: 'TechnoFeminism', by Judy Wajcman" Australian Journal of Emerging Technologies and Society, Vol. 2, no. 2 (2004), pp. 156–157
Sarah M. Brown "TechnoFeminism (review)" NWSA Journal Volume 19, Number 3, Fall 2007 pp. 225–227
2004 non-fiction books
Books about the Internet
Books about feminism
Women and science
Politics and technology | Technofeminism | [
"Technology"
] | 1,158 | [
"Women and science",
"Women in science and technology"
] |
49,109,446 | https://en.wikipedia.org/wiki/Losses%20in%20electrical%20systems | In an electrical or electronic circuit or power system part of the energy in play is dissipated by unwanted effects, including energy lost by unwanted heating of resistive components (electricity is also used for the intention of heating, which is not a loss), the effect of parasitic elements (resistance, capacitance, and inductance), skin effect, losses in the windings and cores of transformers due to resistive heating and magnetic losses caused by eddy currents, hysteresis, unwanted radiation, dielectric loss, corona discharge, and other effects. There are also losses during electric power transmission.
In addition to these losses of energy, there may be non-technical loss of revenue and profit, leading to electrical energy generated not being paid for, primarily due to theft. These losses include meter tampering and bypassing, arranged false meter readings, faulty meters, and un-metered supply. Non-technical losses are reported to account for up to 40% of the total electricity distributed in some countries. Technical and human errors in meter readings, data processing and billing may occur, and may lead to either over-charging or under-charging.
Parasitic losses in electricity production
With regard to electricity production, a "parasitic loss" is any of the loads or devices powered by the generator that does not contribute to net electric yield. It is found by subtracting productive yield from gross yield (a small numerical illustration follows the definitions below):

PL = GY − PY
where:
GY is gross electric yield (the output of the generator);
PY is productive yield (the electricity which is made available to external electric loads)
PL is parasitic load.
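As a toy numerical illustration (the figures below are invented, not taken from any real plant):

```python
# Minimal illustration of the relation PL = GY - PY with made-up numbers:
# a generator producing 500 MWh gross while 465 MWh reach external loads.
def parasitic_load(gross_yield_mwh: float, productive_yield_mwh: float) -> float:
    """Return the parasitic load (energy consumed by non-productive plant loads)."""
    return gross_yield_mwh - productive_yield_mwh

print(parasitic_load(500.0, 465.0))  # -> 35.0 MWh used by pumps, fans, controls, etc.
```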
See also
Electric power transmission
Standby power
Leakage (electronics)
References
Electric power | Losses in electrical systems | [
"Physics",
"Engineering"
] | 337 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
49,112,440 | https://en.wikipedia.org/wiki/Orientation%20sheaf | In the mathematical field of algebraic topology, the orientation sheaf on a manifold X of dimension n is a locally constant sheaf oX on X such that the stalk of oX at a point x is the local homology group
(in the integer coefficients or some other coefficients).
Let Ω^k be the sheaf of differential k-forms on a manifold M. If n is the dimension of M, then the sheaf

Ω^n ⊗ oM

(top-degree forms twisted by the orientation sheaf) is called the sheaf of (smooth) densities on M. The point of this is that, while one can integrate a differential form only if the manifold is oriented, one can always integrate a density, regardless of orientation or orientability; there is the integration map on compactly supported densities:

∫M : Γc(M, Ω^n ⊗ oM) → R

If M is oriented, i.e., the orientation sheaf of the tangent bundle of M is literally trivial, then the above reduces to the usual integration of a differential form.
See also
There is also a definition in terms of dualizing complex in Verdier duality; in particular, one can define a relative orientation sheaf using a relative dualizing complex.
References
External links
Two kinds of orientability/orientation for a differentiable manifold
Algebraic topology
Orientation (geometry) | Orientation sheaf | [
"Physics",
"Mathematics"
] | 234 | [
"Algebraic topology",
"Topology stubs",
"Fields of abstract algebra",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
44,260,712 | https://en.wikipedia.org/wiki/SNP%20annotation | Single nucleotide polymorphism annotation (SNP annotation) is the process of predicting the effect or function of an individual SNP using SNP annotation tools. In SNP annotation the biological information is extracted, collected and displayed in a clear form amenable to query. SNP functional annotation is typically performed based on the available information on nucleic acid and protein sequences.
Introduction
Single nucleotide polymorphisms (SNPs) play an important role in genome wide association studies because they act as primary biomarkers. SNPs are currently the marker of choice due to their large numbers in virtually all populations of individuals. The location of these biomarkers can be tremendously important in terms of predicting functional significance, genetic mapping and population genetics. Each SNP represents a nucleotide change between two individuals at a defined location. SNPs are the most common genetic variant, found in all individuals, with one SNP every 100–300 bp in some species. Since there is a massive number of SNPs on the genome, there is a clear need to prioritize SNPs according to their potential effect in order to expedite genotyping and analysis.
Annotating large numbers of SNPs is a difficult and complex process, which requires computational methods to handle such large datasets. Many tools have been developed for SNP annotation in different organisms: some of them are optimized for use with organisms densely sampled for SNPs (such as humans), but there are currently few tools available that are species non-specific or support non-model organism data. The majority of SNP annotation tools provide computationally predicted putative deleterious effects of SNPs. These tools examine whether a SNP resides in functional genomic regions such as exons, splice sites, or transcription regulatory sites, and predict the potential corresponding functional effects that the SNP may have using a variety of machine-learning approaches. However, the tools and systems that prioritize functionally significant SNPs suffer from a few limitations. First, they examine the putative deleterious effects of SNPs with respect to a single biological function, which provides only partial information about the functional significance of SNPs. Second, current systems classify SNPs only into deleterious or neutral groups.
Many annotation algorithms focus on single nucleotide variants (SNVs), considered more rare than SNPs as defined by their minor allele frequency (MAF). As a consequence, training data for the corresponding prediction methods may be different and hence one should be careful to select the appropriate tool for a specific purpose. For the purposes of this article, "SNP" will be used to mean both SNP and SNV, but readers should bear in mind the differences.
SNP annotation
For SNP annotation, many kinds of genetic and genomic information are used. Based on the different features used by each annotation tool, SNP annotation methods may be split roughly into the following categories:
Gene based annotation
Genomic information from surrounding genomic elements is among the most useful information for interpreting the biological function of an observed variant. Information from a known gene is used as a reference to indicate whether the observed variant resides in or near a gene and if it has the potential to disrupt the protein sequence and its function. Gene based annotation is based on the fact that non-synonymous mutations can alter the protein sequence and that splice site mutation may disrupt the transcript splicing pattern.
Knowledge based annotation
Knowledge-based annotation is done using information about gene attributes, protein function and metabolism. In this type of annotation more emphasis is given to genetic variation that disrupts protein functional domains, protein-protein interactions and biological pathways. The non-coding region of the genome contains many important regulatory elements, including promoters, enhancers and insulators, and any change in these regulatory regions can alter the function of the associated protein. Mutations in DNA can also change the RNA sequence and thereby influence RNA secondary structure, RNA-binding protein recognition and miRNA binding activity.
Functional annotation
This method mainly identifies variant function based on whether the variant loci lie in known functional regions that harbor genomic or epigenomic signals. The functions of non-coding variants are extensive in terms of the affected genomic regions, and they are involved in almost all processes of gene regulation, from the transcriptional to the post-translational level.
Transcriptional gene regulation
Transcriptional gene regulation depends on many spatial and temporal factors in the nucleus, such as global or local chromatin states, nucleosome positioning, TF binding, and enhancer/promoter activities. Variants that alter the function of any of these biological processes may alter gene regulation and cause phenotypic abnormalities. Genetic variants located in distal regulatory regions can affect the binding motifs of TFs, chromatin regulators and other distal transcriptional factors, which disturbs the interaction between an enhancer/silencer and its target gene.
Alternative splicing
Alternative splicing is one of the most important components showing the functional complexity of the genome. Modified splicing has a significant effect on phenotypes relevant to disease or drug metabolism. A change in splicing can be caused by modifying any of the components of the splicing machinery, such as splice sites or splice enhancers or silencers. Modification of an alternative splicing site can lead to a different protein form with a different function. Humans use an estimated 100,000 different proteins or more, so some genes must be capable of coding for much more than just one protein. Alternative splicing occurs more frequently than was previously thought and can be hard to control; genes may produce tens of thousands of different transcripts, necessitating a new gene model for each alternative splice.
RNA processing and post transcriptional regulation
Mutations in untranslated regions (UTRs) affect many aspects of post-transcriptional regulation. Distinctive structural features are required for many RNA molecules and cis-acting regulatory elements to execute effective functions during gene regulation. SNVs can alter the secondary structure of RNA molecules and thereby disrupt the proper folding of RNAs, such as tRNA/mRNA/lncRNA folding, and miRNA binding recognition regions.
Translation and post translational modifications
Single nucleotide variants can also affect the cis-acting regulatory elements in mRNAs that inhibit or promote translation initiation. Changes in synonymous codon regions due to mutation may affect translation efficiency because of codon usage biases. Translation elongation can also be retarded by mutations along the ramp of ribosomal movement. At the post-translational level, genetic variants can contribute to proteostasis and amino acid modifications. However, the mechanisms of variant effects in this field are complicated, and there are only a few tools available to predict a variant's effect on translation-related modifications.
Protein function
Non-synonymous variants are exonic variants that change the amino acid sequence encoded by the gene, including single-base changes and non-frameshift indels. The effect of non-synonymous variants on proteins has been investigated extensively, and many algorithms have been developed to predict the deleteriousness and pathogenesis of single nucleotide variants (SNVs). Classical bioinformatics tools, such as SIFT, PolyPhen and MutationTaster, successfully predict the functional consequences of non-synonymous substitutions. The PopViz webserver provides a gene-centric approach to visualize mutation damage prediction scores (CADD, SIFT, PolyPhen-2) or population genetics (minor allele frequency) versus the amino acid positions of all coding variants of a given human gene. PopViz is also cross-linked with the UniProt database, where protein domain information can be found, making it possible to identify which predicted deleterious variants fall into these protein domains on the PopViz plot.
Evolutionary conservation and nature selection
Comparative genomics approaches were used to predict the function-relevant variants under the assumption that the functional genetic locus should be conserved across different species at an extensive phylogenetic distance. On the other hand, some adaptive traits and the population differences are driven by positive selections of advantageous variants, and these genetic mutations are functionally relevant to population specific phenotypes. Functional prediction of variants’ effect in different biological processes is pivotal to pinpoint the molecular mechanism of diseases/traits and direct the experimental validation.
List of available SNP annotation tools
To annotate the vast amounts of available NGS data, a large number of SNP annotation tools are currently available. Some are specific to particular kinds of SNPs while others are more general. Available SNP annotation tools include: SNPeff, Ensembl Variant Effect Predictor (VEP), ANNOVAR, FATHMM, PhD-SNP, PolyPhen-2, SuSPect, F-SNP, AnnTools, SeattleSeq, SNPit, SCAN, Snap, SNPs&GO, LS-SNP, Snat, TREAT, TRAMS, Maviant, MutationTaster, SNPdat, Snpranker, NGS – SNP, SVA, VARIANT, SIFT, LIST-S2 and FAST-SNP. The functions and approaches used in SNP annotation tools are listed below.
Algorithms used in annotation tools
Variant annotation tools use machine learning algorithms to predict variant annotations, and different tools use different algorithms. Common algorithms include the following (a toy illustration of this kind of classifier appears after the list):
Interval/Random forest – e.g. MutPred, SNPeff
Neural networks – e.g. SNAP
Support vector machines – e.g. PhD-SNP, SNPs&GO
Bayesian classification – e.g. PolyPhen-2
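The sketch below is not the implementation of any published tool; it only illustrates how a supervised classifier of the kind listed above (here a support vector machine) can be trained to separate deleterious from neutral variants. The feature names (conservation, allele frequency, stability change) and the data are invented, and scikit-learn and NumPy are assumed to be installed.

```python
# Toy sketch: support-vector classification of variants as deleterious (1) or
# neutral (0) from three made-up numeric features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical features: [conservation score, minor allele frequency, delta-stability]
X_neutral = rng.normal([0.3, 0.20, 0.0], [0.2, 0.10, 0.5], size=(n, 3))
X_delet = rng.normal([0.8, 0.01, 1.5], [0.2, 0.01, 0.5], size=(n, 3))
X = np.vstack([X_neutral, X_delet])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Real tools differ mainly in which features they compute (sequence conservation, structural context, epigenomic signals) and in how their training labels are curated, not in the basic supervised-learning recipe shown here.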
Comparison of variant annotation tools
A large number of tools are available for variant annotation. Annotations produced by different tools do not always agree with one another, as the defined rules for data handling differ between applications. It is practically impossible to perform a perfect comparison of the available tools, since not all tools have the same input and output or the same functionality. Below is a table of major annotation tools and their functional area.
Application
Different annotations capture diverse aspects of variant function. Simultaneous use of multiple, varied functional annotations could improve the power of rare-variant association analysis in whole-exome and whole-genome sequencing studies. Some tools have been developed to enable functionally informed phenotype-genotype association analysis for common and rare variants by incorporating functional annotations in biobank-scale cohorts.
Conclusions
The next generation of SNP annotation webservers can take advantage of the growing amount of data in core bioinformatics resources and use intelligent agents to fetch data from different sources as needed. From a user's point of view, it is more efficient to submit a set of SNPs and receive results in a single step, which makes meta-servers the most attractive choice. However, if SNP annotation tools deliver heterogeneous data covering sequence, structure, regulation, pathways, etc., they must also provide frameworks for integrating the data into decision algorithms, and quantitative confidence measures so users can assess which data are relevant and which are not.
References
Molecular biology
Bioinformatics
Genomics | SNP annotation | [
"Chemistry",
"Engineering",
"Biology"
] | 2,361 | [
"Bioinformatics",
"Biological engineering",
"Biochemistry",
"Molecular biology"
] |
44,260,845 | https://en.wikipedia.org/wiki/Beam%20and%20Warming%20scheme | In numerical mathematics, Beam and Warming scheme or Beam–Warming implicit scheme introduced in 1978 by Richard M. Beam and R. F. Warming, is a second order accurate implicit scheme, mainly used for solving non-linear hyperbolic equations. It is not used much nowadays.
Introduction
This scheme is a spatially factored, non-iterative ADI scheme that uses implicit Euler to perform the time integration. The algorithm is written in delta form and linearized through a Taylor series, so the unknowns are increments of the conserved variables. An efficient factored algorithm is obtained by evaluating the spatial cross derivatives explicitly, which allows a direct derivation of the scheme and an efficient solution using this computational algorithm. Although it is a three-time-level scheme, it requires only two time levels of data storage, which makes it efficient. This results in unconditional stability. It is centered and needs an artificial dissipation operator to guarantee numerical stability.
The delta form of the equation has the advantageous property that the stable (steady-state) solution, if it exists, is independent of the size of the time step.
The method
Consider the inviscid Burgers' equation in one dimension:

∂u/∂t + u ∂u/∂x = 0

Burgers' equation in conservation form:

∂u/∂t + ∂E/∂x = 0

where E = u²/2.
Taylor series expansion
The expansion of u in time about level n gives

u^{n+1} = u^n + (Δt/2) [ (∂u/∂t)^n + (∂u/∂t)^{n+1} ] + O(Δt³)

This is also known as the trapezoidal formula.
Note that for this equation the flux Jacobian is A = ∂E/∂u = u, which is used to linearize the implicit flux: E^{n+1} ≈ E^n + A^n (u^{n+1} − u^n).
Tri-diagonal system
Resulting tri-diagonal system:
The resulting system of linear equations can be solved using the modified tridiagonal matrix algorithm, also known as the Thomas algorithm.
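For illustration, the following is a minimal sketch of this solution procedure applied to the inviscid Burgers' equation, using central spatial differences, an explicit fourth-difference dissipation term (discussed in the next section) and a Thomas-algorithm tridiagonal solve. It is not the original Beam–Warming code; the grid, time step, initial profile and dissipation coefficient are arbitrary choices, and the Dirichlet boundary treatment is a simplification.

```python
# Minimal sketch: linearized delta-form implicit (Beam-Warming-type) step for
# the 1-D inviscid Burgers' equation, solved with the Thomas algorithm.
import numpy as np


def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system; sub[0] and sup[-1] are ignored."""
    n = len(rhs)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x


nx, nt = 101, 30
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.01           # implicit scheme: a CFL number above 1 is tolerated
eps = 0.05          # explicit artificial-dissipation coefficient (assumed)

u = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # smooth initial profile; u = 1 held at both ends

for _ in range(nt):                      # advance to t = 0.3, before a shock forms
    E = 0.5 * u**2                       # flux of the conservation form
    A = u                                # flux Jacobian dE/du = u

    # Explicit right-hand side at interior points j = 1 .. nx-2:
    # -dt * central difference of E, plus fourth-difference damping
    rhs = -dt * (E[2:] - E[:-2]) / (2.0 * dx)
    d4 = np.zeros(nx)
    d4[2:-2] = u[4:] - 4 * u[3:-1] + 6 * u[2:-2] - 4 * u[1:-3] + u[:-4]
    rhs -= eps * d4[1:-1]

    # Linearized implicit operator in delta form:
    # du_j + r*(A_{j+1} du_{j+1} - A_{j-1} du_{j-1}) = rhs_j,  with r = dt/(4 dx)
    r = dt / (4.0 * dx)
    sub = -r * A[:-2]
    diag = np.ones(nx - 2)
    sup = r * A[2:]

    du = thomas(sub, diag, sup, rhs)
    u[1:-1] += du                        # boundaries are Dirichlet (delta u = 0 there)

print(u.min(), u.max())
```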
Dissipation term
When shock waves are present, a dissipation term is required for nonlinear hyperbolic equations such as this one. This is done to keep the solution under control and maintain convergence of the solution.
This term is added explicitly, at the known time level, to the right-hand side. It is always used for successful computation where high-frequency oscillations are observed and must be suppressed.
Smoothing term
If only the stable (steady) solution is required, a second-order smoothing term is added to the right-hand side of the equation on the implicit layer.
The other term in the same equation can be second-order because it has no influence on the stable solution if
The addition of smoothing term increases the number of steps required by three.
Properties
This scheme is produced by combining the trapezoidal formula, linearization, factoring, Padé spatial differencing, the homogeneous property of the flux vectors (where applicable), and hybrid spatial differencing, and is most suitable for nonlinear systems in conservation-law form. The ADI algorithm retains the order of accuracy and the steady-state property while reducing the bandwidth of the system of equations.
The scheme is unconditionally (A-)stable, so the time step is not limited by a CFL condition for stability.
The truncation error is of order O(Δt², Δx²), consistent with the scheme's second-order accuracy.
The result is smooth with considerable overshoot (that does not grow much with time).
References
Finite differences
Numerical differential equations
Computational fluid dynamics | Beam and Warming scheme | [
"Physics",
"Chemistry",
"Mathematics"
] | 579 | [
"Mathematical analysis",
"Computational fluid dynamics",
"Finite differences",
"Computational physics",
"Fluid dynamics"
] |
55,875,682 | https://en.wikipedia.org/wiki/Complementarity%20plot | The complementarity plot (CP) is a graphical tool for structural validation of atomic models for both folded globular proteins and protein-protein interfaces. It is based on a probabilistic representation of preferred amino acid side-chain orientation, analogous to the preferred backbone orientation of Ramachandran plots). It can potentially serve to elucidate protein folding as well as binding. The upgraded versions of the software suite is available and maintained in github for both folded globular proteins as well as inter-protein complexes. The software is included in the bioinformatic tool suites OmicTools and Delphi tools.
Background
Validation of three-dimensional protein crystal structures is traditionally based on a multitude of parameters, ranging from (i) the distribution of residues in the Ramachandran plot, (ii) deviations from ideality for bond lengths and angles, (iii) atomic short contacts (steric clash scores), (iv) the distribution of the side-chain conformers (rotamers) and (v) hydrogen bonding parameters. The advent of the complementarity plot as a structural validation tool for proteins essentially provides a combination of the traditional approaches. CP detects local errors in atomic coordinates and also correctly matches an amino acid sequence to its native three-dimensional fold situated amid decoys. The Complementarity Plot is based on the combined use of shape and electrostatic complementarity of completely or partially buried residues with respect to their environment, constituted by the rest of the polypeptide chain, and is a sensitive indicator of the harmony or disharmony of interior residues with regard to the short- and long-range forces sustaining the native fold. The term 'Complementarity Plot' (CP) is perhaps a misnomer as there are actually three plots, each serving a given range of solvent exposure of the plotted residues (CP1, CP2, CP3 for burial bins 1, 2, 3).
Pictorial description
The complementarity plot has been largely inspired by the Ramachandran Plot in its design (but not in its physicochemical attributes). The Ramachandran Plot is deterministic in nature; in contrast, the CP is probabilistic. The Ramachandran plot deals with main-chain torsion angles, and errors in such parameters are essentially locally restricted. In contrast, the CP deals with the geometric and electrostatic fit of the interior side-chains with their local and non-local neighborhood. Disharmony (misfit) in these conjugated parameters may arise due to a plethora of errors coming from bond angles or torsions from effectively the whole folded polypeptide chain. However, analogous to the Ramachandran Plot, the region within the first contour is termed 'probable' (analogous to the 'allowed' region), between the first and second contour, 'less probable' ('partially allowed') and outside the second contour 'improbable' ('disallowed').
Applications
CP has a multitude of applications in experimental as well as in computational structural biology. Thorough investigation of the effect of small errors in both main- and side-chain bond angles / torsions on the overall fold shows that the CP is effective in the detection of these errors even when other existing parameters, based on the prohibition of local steric overlap and deviation from ideality, fail to detect them. Consequences of such small angular errors are not restricted locally, resulting in geometric and electrostatic misfit of interior residues throughout the fold, potentially detectable by the CPs. These errors may arise from (i) misfitting of side-chain torsions / wrong rotamer assignments (especially relevant for low-resolution structures), (ii) incorrect tracing of the main-chain trajectories during refinement (resulting in low-intensity errors diffused over the entire polypeptide chain). CP can also detect packing anomalies, and, in particular, can potentially signal unbalanced partial charges within protein interiors. It is useful in homology modeling and protein design. A version of the plot (CPint) has also been built and made available to probe similar errors in protein-protein interfaces.
CPdock
In contrast to the residue-wise plots, there is also a variant available for the Complementarity Plot, namely CPdock for plotting single Sc, EC values for the protein-protein interface and adjudging thereby the quality of the complex atomic structure (either experimentally solved or computationally built) therein. Sc, EC are shape and electrostatic complementarities computed for 'interacting protein-protein surfaces' originally proposed by Peter Colman and co-workers in the 1990s. CPdock was primarily developed as a scoring function to serve as an initial filter in protein-protein docking and can be a very helpful tool in protein design - as has lately been demonstrated in COVID-research both in scoring as well as in the evaluation of docked complexes to eliminate the effect of co-substrate binding in a targeted inhibitor binding.
Software
CP@SINP: http://www.saha.ac.in/biop/www/sarama.html
CP: https://github.com/nemo8130/SARAMA-updated
CPint: https://github.com/nemo8130/SARAMAint-updated
CPdock: https://github.com/nemo8130/CPdock
EnCPdock (web-server): https://scinetmol.in/EnCPDock/
References
Protein–protein interaction assays | Complementarity plot | [
"Chemistry",
"Biology"
] | 1,151 | [
"Biochemistry methods",
"Protein–protein interaction assays"
] |
55,878,980 | https://en.wikipedia.org/wiki/Strength%20of%20glass | Glass typically has a tensile strength of . However, the theoretical upper bound on its strength is orders of magnitude higher: . This high value is due to the strong chemical Si–O bonds of silicon dioxide. Imperfections of the glass, such as bubbles, and in particular surface flaws, such as scratches, have a great effect on the strength of glass and decrease it even more than for other brittle materials. The chemical composition of the glass also impacts its tensile strength. The processes of thermal and chemical toughening can increase the tensile strength of glass.
Glass has a compressive strength of .
Strength of glass fiber
Glass fibers have a much higher tensile strength than regular glass (200–500 times stronger). This is due to the reduction of flaws in glass fibers and to their small cross-sectional area, which constrains the maximum defect size.
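The article does not state it, but the standard Griffith fracture criterion (assumed here purely for illustration) makes the link between flaw size and strength quantitative:

```latex
% Griffith criterion (assumed relation, not from the article): fracture stress of a
% brittle solid with Young's modulus E, surface energy \gamma_s and a flaw of depth a
\[
  \sigma_f \approx \sqrt{\frac{2 E \gamma_s}{\pi a}}
\]
```

Under this relation, restricting the largest possible flaw size a, as a thin fibre geometry does, raises the attainable fracture stress.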
Strength of fiberglass
Fiberglass's strength depends on the type. S-glass has a strength of while E-glass and C-glass have a strength of .
Hardness
Glass has a hardness of 6.5 on the Mohs scale of mineral hardness.
References
Glass physics | Strength of glass | [
"Physics",
"Materials_science",
"Engineering"
] | 232 | [
"Glass engineering and science",
"Glass physics",
"Condensed matter physics"
] |
55,879,362 | https://en.wikipedia.org/wiki/Molybdocene%20dihydride | Molybdocene dihydride is the organomolybdenum compound with the formula (η5-C5H5)2MoH2. Commonly abbreviated as Cp2MoH2, it is a yellow air-sensitive solid that dissolves in some organic solvents.
The compound is prepared by combining molybdenum pentachloride, sodium cyclopentadienide, and sodium borohydride. The dihydride converts to molybdocene dichloride upon treatment with chloroform.
The compound adopts a "clamshell" structure where the Cp rings are not parallel.
References
Hydrido complexes
Metallocenes
Organomolybdenum compounds
Cyclopentadienyl complexes
Molybdenum(IV) compounds | Molybdocene dihydride | [
"Chemistry"
] | 173 | [
"Organometallic chemistry",
"Cyclopentadienyl complexes"
] |
67,159,816 | https://en.wikipedia.org/wiki/Osserman%E2%80%93Xavier%E2%80%93Fujimoto%20theorem | In the mathematical field of differential geometry, the Osserman–Xavier–Fujimoto theorem concerns the Gauss maps of minimal surfaces in the three-dimensional Euclidean space. It says that if a minimal surface is immersed and geodesically complete, then the image of the Gauss map either consists of a single point (so that the surface is a plane) or contains all of the sphere except for at most four points.
Bernstein's theorem says that a minimal graph in three-dimensional Euclidean space which is geodesically complete must be a plane. This can be rephrased to say that the Gauss map of a complete immersed minimal surface in three-dimensional Euclidean space is either constant or not contained within an open hemisphere. As conjectured by Louis Nirenberg and proved by Robert Osserman in 1959, in this form Bernstein's theorem can be generalized to say that the image of the Gauss map of a complete immersed minimal surface in three-dimensional Euclidean space either consists of a single point or is dense within the sphere.
Osserman's theorem was improved by Frederico Xavier and Hirotaka Fujimoto in the 1980s. They proved that if the image of the Gauss map of a complete immersed minimal surface in three-dimensional Euclidean space omits more than four points of the sphere, then the surface is a plane. This is optimal, since it was shown by Konrad Voss in the 1960s that for any subset of the sphere whose complement consists of zero, one, two, three, or four points, there exists a complete immersed minimal surface in three-dimensional Euclidean space whose Gauss map has that subset as its image. Particular examples include Riemann's minimal surface, whose Gauss map is surjective, the Enneper surface, whose Gauss map omits one point, the catenoid and helicoid, whose Gauss maps omit two points, and Scherk's first surface, whose Gauss map omits four points.
It is also possible to study the Gauss map of minimal surfaces of higher codimension in higher-dimensional Euclidean spaces. There are a number of variants of the results of Osserman, Xavier, and Fujimoto which can be studied in this setting.
References
Sources
External links
Weisstein, Eric W. "Nirenberg's Conjecture." From MathWorld–A Wolfram Web Resource.
Theorems in differential geometry
Conjectures that have been proved | Osserman–Xavier–Fujimoto theorem | [
"Mathematics"
] | 469 | [
"Theorems in differential geometry",
"Geometry stubs",
"Conjectures that have been proved",
"Geometry",
"Theorems in geometry",
"Mathematical problems",
"Mathematical theorems"
] |
67,160,105 | https://en.wikipedia.org/wiki/Dungey%20Cycle | The Dungey cycle, officially proposed by James Dungey in 1961, is a phenomenon that explains interactions between a planet's magnetosphere and solar wind. Dungey originally proposed a cyclic behavior of magnetic reconnection between Earth's magnetosphere and flux of solar wind. This reconnection explained previously observed dynamics within Earth's magnetosphere. The rate of reconnection in the beginning of the cycle is dependent on the orientation of the interplanetary magnetic field as well as the resultant plasma conditions at the site of reconnection. On Earth, the reconnection cycle takes around 1 hour, but this differs from planet to planet.
Cyclic Behavior
The Dungey cycle occurs within three stages:
In the first stage, solar flux and the magnetopause connect, creating an opening in the magnetopause in which the solar wind can enter the magnetosphere. This opening is called the dayside reconnection and occurs on the side of the magnetosphere facing the solar wind source.
In the second stage, the flux travels in the direction of the solar wind across the magnetosphere.
In the third stage, at the magnetotail, reconnection closes the open flux, allowing for a new cycle to begin. This reconnection is called nightside reconnection.
Dungey's proposal originally put forth an explanation that the cycle is at steady state, and that the reconnection during stage one and three are equal. However, later work has found that the rate of reconnection is variable and affected by conditions at both the dayside reconnection site as well as the magnetotail.
Effect of interplanetary magnetic field orientation
The rate of reconnection at the magnetopause is heavily dependent on the orientation of the interplanetary magnetic field. Reconnection at the magnetopause occurs at higher rates when there is a stronger southward component to the field. This allows solar wind with arbitrarily small shear angles to reconnect at the magnetopause. Under normal circumstances, the difference in field strength between the magnetopause and the surrounding fields only allows solar wind with large shear angles to reconnect. A strong southward component normalizes the difference in field strength between the magnetopause and surrounding fields.
References
Geomagnetism
Planetary science
Solar phenomena
Space plasmas | Dungey Cycle | [
"Physics",
"Astronomy"
] | 471 | [
"Space plasmas",
"Physical phenomena",
"Astrophysics",
"Planetary science",
"Solar phenomena",
"Stellar phenomena",
"Astronomical sub-disciplines"
] |
67,168,311 | https://en.wikipedia.org/wiki/Hindustan%20Syringes%20%26%20Medical%20Devices | Hindustan Syringes & Medical Devices is one of the major world firms manufacturing medical syringes and one of the few producing a special type of syringe suitable for making efficient use of the Pfizer–BioNTech COVID-19 vaccine.
The New Delhi factories have been producing 2.5 billion syringes a year, increasing their capacity because of the coronavirus pandemic. Two thirds of the capacity is for the market in India but there is global demand, increased by stockpiling, from the US and Europe where investment focused on the vaccine development rather than syringe manufacture. The United Nations is also being supplied for the COVAX programme. Before the pandemic, global production was about 16 billion syringes per annum but only 5 to 10 percent were used for vaccination and immunisation. Now 8 to 10 billion vaccination syringes are required.
To make best use of the Pfizer–BioNTech vaccine, a syringe delivering a dose no larger than 0.3 millilitres is required, which allows six or even seven doses to be extracted from each vial. The device must also be a low dead space syringe, so that scarcely anything is left in the syringe after injection, and the syringe itself must break after use so that there is no possibility of repeated use spreading infection. For this reason Japan ordered 15 million syringes at the beginning of 2021 and deliveries started within a month.
Hindustan Syringes & Medical Devices was established in 1957 and is a family run business. In 1995 new machines were required for an increase in production and so private capital was needed. The latest ramping up could be achieved very quickly because no further investment was required.
References
Medical equipment
Drug delivery devices
Companies based in New Delhi
Medical and health organisations based in India
Indian companies established in 1957
Biotechnology companies of India | Hindustan Syringes & Medical Devices | [
"Chemistry",
"Biology"
] | 378 | [
"Pharmacology",
"Drug delivery devices",
"Medical equipment",
"Medical technology"
] |
42,812,764 | https://en.wikipedia.org/wiki/Selection%20shadow | The selection shadow is a concept involved with the evolutionary theories of aging that states that selection pressures on an individual decrease as an individual ages and passes sexual maturity, resulting in a "shadow" of time where selective fitness is not considered. Over generations, this results in maladaptive mutations that accumulate later in life due to aging being non-adaptive toward reproductive fitness. The concept was first worked out by J. B. S. Haldane and Peter Medawar in the 1940s, with Medawar creating the first graphical model.
Model
The model developed by Medawar states that due to the dangerous conditions and pressures from the environment, including predators and diseases, most individuals in the wild die not long after sexual maturity. Therefore, there is a low probability for individuals to survive to an advanced age and suffer the effects related to aging. In conjunction with this, the effects of natural selection decrease as age increases, so that later individual performance is ignored by selection forces. This results in beneficial mutations not being selected for if they only have a positive result later in life, along with later in life deleterious mutations not being selected against. Due to the fitness of an individual not being affected once it is past its reproductive prime, later mutations and effects are considered to be in the "shadow" of selection.
This concept would later be adapted into Medawar's 1952 mutation accumulation hypothesis, which was itself expanded upon by George C. Williams in his 1957 antagonistic pleiotropy hypothesis.
A classical requirement and constraint of the model is that the number of individuals within a population that live to reach senescence must be small in number. If this is not true for a population, then the effects of old age will not be under a selection shadow and instead affect adaptation and evolution of the population as a whole. At the same time, however, this requirement has been challenged by increasing evidence of senescence being more common in wild populations than previously expected, especially among birds and mammals, while the effects of the selection shadow remain present.
Medawar's Test Tube model
Medawar developed a theoretical model to illustrate his reasoning that most animals die before aging becomes the ultimate cause of death, instead dying from environmental factors such as storms, drought, fires, and predation. He demonstrated this with a thought experiment using test tubes. A set of test tubes represents a population: if a test tube breaks, this represents an individual animal dying, and test tubes are broken at random to keep the model realistic. Each broken test tube is replaced with a new one, representing a new animal being born into the population. Over time, the model shows that test tubes above a certain age decline in number as new test tubes are put in. The overall result of Medawar's thought model is an exponential decline in the survivorship curve, giving the population a half life; old test tubes, or animals, become increasingly rare even though nothing about them changes with age. Medawar created this model to explain what would realistically happen in the wild.
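A minimal simulation of this thought model is sketched below (the population size, breakage probability and number of steps are arbitrary choices); it reproduces the exponential decline and half-life that Medawar described, even though mortality in the model is completely independent of age.

```python
# Minimal simulation of Medawar's test-tube thought model: every "tube" (individual)
# has the same chance of breaking per time step, independent of age, and each broken
# tube is replaced by a new one of age zero. Parameters are arbitrary.
import math
import random

random.seed(1)
pop_size, steps, p_break = 10_000, 60, 0.05
ages = [0] * pop_size

for _ in range(steps):
    for i in range(pop_size):
        if random.random() < p_break:
            ages[i] = 0            # tube breaks and is replaced by a new one
        else:
            ages[i] += 1           # tube survives one more step

# Age-dependent mortality is absent, yet old tubes are rare: the age distribution
# decays roughly geometrically, with a half life of about log(0.5)/log(1 - p_break).
half_life = math.log(0.5) / math.log(1 - p_break)
old = sum(a >= half_life for a in ages)
print(f"expected half-life ~ {half_life:.1f} steps; tubes older than that: {old}/{pop_size}")
```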
Criticism
Some scientists, however, have criticized the idea of aging being non-adaptive, instead adopting the theory of "death by design". This theory follows the work of August Weismann, which states that aging specifically evolved as an adaptation, and disagrees with Medawar's model as a perceived oversimplification of the impact older organisms have on evolution. It is also claimed that older organisms have a higher reproductive capacity, because they must have been fitter in order to reach their age, rather than having a capacity equal to that of younger individuals as assumed in Medawar's calculations.
References
Life extension
Evolutionary biology
Senescence
Evolutionary theories of biological ageing | Selection shadow | [
"Chemistry",
"Biology"
] | 783 | [
"Senescence",
"Evolutionary biology",
"Metabolism",
"Cellular processes"
] |
42,812,909 | https://en.wikipedia.org/wiki/Hibakujumoku | Hibakujumoku (; also called survivor tree or A-bombed tree in English) is a Japanese term for a tree that survived the atomic bombings of Hiroshima and Nagasaki in 1945. The term is from and .
Damage
The heat emitted by the explosion in Hiroshima within the first three seconds at a distance of three kilometres from the hypocenter was about 40 times greater than that from the Sun. The initial radiation level at the hypocenter was approximately 240 Gy. According to Hiroshima and Nagasaki: The Physical, Medical, and Social Effects of the Atomic Bombings, plants suffered damage only in the portions exposed above ground, while portions underground were not directly damaged.
Regeneration
The rate of regeneration differed by species. Active regeneration was shown by broad-leaved trees. Approximately 170 trees that grew in Hiroshima in 2011 had actually been there prior to the bombing. The oleander was designated the official flower of Hiroshima for its remarkable vitality.
Types of hibakujumoku
Hibakujumoku species are listed in the UNITAR database, shown below, combined with data from Hiroshima and Nagasaki: The Physical, Medical, and Social Effects of the Atomic Bombings. A more extensive list, including distance from the hypocenter for each tree, is available in Survivors: The A-bombed Trees of Hiroshima.
List
Surviving trees in Nagasaki
Although not as well known as the hibakujumoku in Hiroshima, there are a number of similar survivors in the vicinity of the hypocenter in Nagasaki. Approximately 50 of these trees have been documented in English.
See also
Hibakusha, humans that survived the atomic bombs
List of individual trees
References
Atomic bombings of Hiroshima and Nagasaki
Trees of Japan
Individual trees in Japan
Radiation effects | Hibakujumoku | [
"Physics",
"Materials_science",
"Engineering"
] | 353 | [
"Physical phenomena",
"Materials science",
"Radiation",
"Condensed matter physics",
"Radiation effects"
] |
42,818,038 | https://en.wikipedia.org/wiki/Rhazinilam | Rhazinilam is an alkaloid first isolated in 1965 by Linde from the Melodinus australis plant. It was later isolated from the shrub Rhazya stricta as well as from other organisms.
Biological activity
Rhazinilam has activity similar to that of colchicine, taxol and vinblastine, acting as a spindle poison.
Total synthesis
Rhazinilam was first synthesized in 1973 by Smith and coworkers, and has been synthesized multiple times since.
Trauner synthesis
References
Pyrroles
Alkaloids
Total synthesis
Plant toxins | Rhazinilam | [
"Chemistry"
] | 122 | [
"Biomolecules by chemical classification",
"Chemical ecology",
"Natural products",
"Plant toxins",
"Organic compounds",
"Chemical synthesis",
"Total synthesis",
"Alkaloids"
] |
70,046,013 | https://en.wikipedia.org/wiki/Hydrogen%20cryomagnetics | Hydrogen cryomagnetics is a term used to denote the use of cryogenic liquid hydrogen to cool the windings of an electromagnet. A key benefit of hydrogen cryomagnetics is that low temperature liquid hydrogen can be deployed simultaneously both as a cryogen to cool electromagnet windings and as an energy carrier . That is, powerful synergistic benefits are likely to arise when hydrogen is used as a fuel and as a coolant. Even without the fuel/coolant synergies, hydrogen cryomagnetics is an attractive option for the cooling of superconducting electromagnets as it eliminates dependence upon increasingly scarce and expensive liquid helium. For hydrogen cryomagnetic applications specialist hydrogen-cooled electromagnets are wound using either copper or superconductors. Liquid-hydrogen-cooled copper-wound magnets work well as pulsed field magnets. Superconductors have the property that they can operate continuously and very efficiently as electrical resistive losses are almost entirely avoided. Most commonly the term "hydrogen cryomagnetics" is used to denote the use of cryogenic liquid hydrogen directly, or indirectly, to enable high temperature superconductivity in electromagnet windings.
Hydrogen cryomagnetics is especially useful where high magnetic fields are required, such as in high torque electric motors. At atmospheric pressure liquid hydrogen boils at approximately 20.3 K (-259.3 °C). Liquid hydrogen at such a temperature is significantly colder than the temperatures at which superconductivity can first be induced in a range of important high temperature superconductors including yttrium barium copper oxide (YBCO), because YBCO has a superconducting transition temperature (Tc) of 93 K. The operation of YBCO-based superconducting magnets at a temperature more than 70 K below Tc allows for the use of very high current densities and very high magnetic fields without loss of superconductivity. The materials properties of YBCO are such that it cannot be made into ductile wires, although much progress has been made towards high field YBCO electromagnets based on the use of tapes rather than wires. Another superconductor suitable for hydrogen cryomagnetic use is magnesium diboride. Magnesium diboride is a conventional superconductor and it can be prepared in flexible wires, facilitating its potential application in, for example, tokamak fusion reactors. Magnesium diboride has a transition temperature of 39 K. While at atmospheric pressure liquid hydrogen is cold enough to cool magnesium diboride into the superconducting state, there are advantages to pumping on the hydrogen so as to lower its temperature still further when it is in use in such a magnet winding (this uses the same physics by which the boiling point of water can be reduced by reducing the pressure above the liquid). Generally, the greater the difference between conductor temperature and superconducting transition temperature, the better. Liquid hydrogen is not the only way to cool a magnet cryogenically: conventionally, superconductors are cooled using liquid helium at 4.2 K, and for conventional-conductor pulsed magnets (including copper) most attention has been given to liquid nitrogen at 77 K. Liquid hydrogen can be expected to drive better performance than liquid nitrogen and, as discussed below, liquid hydrogen avoids several concerns around helium availability.
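As a rough, order-of-magnitude illustration of the benefit of pumping on the liquid (not a precise property calculation), the integrated Clausius-Clapeyron relation can be used to estimate how the boiling point of hydrogen falls as the vapour pressure is reduced; the latent heat used below is an assumed round value rather than a quoted property datum.

```python
# Rough sketch: boiling point of liquid hydrogen versus vapour pressure, from the
# integrated Clausius-Clapeyron relation with an assumed constant latent heat.
import math

R = 8.314          # gas constant, J/(mol K)
L = 900.0          # assumed round value for hydrogen's latent heat of vaporisation, J/mol
T1, P1 = 20.3, 1.0 # normal boiling point (K) at 1 atm

def boiling_point(p_atm: float) -> float:
    """Boiling temperature (K) at pressure p_atm, from ln(P2/P1) = -(L/R)(1/T2 - 1/T1)."""
    return 1.0 / (1.0 / T1 - (R / L) * math.log(p_atm / P1))

for p in (1.0, 0.5, 0.2, 0.1):
    print(f"{p:>4} atm -> ~{boiling_point(p):.1f} K")   # roughly 20 K down to ~14 K
```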
Any use of hydrogen cryomagnetics requires careful consideration of hydrogen safety.
Hydrogen cryomagnetics is a concept distinct from the use of higher-temperature gaseous hydrogen as a coolant in power plant turbines.
Origins
The term hydrogen cryomagnetics was first used in a text panel forming part of an article by Professor WJ Nuttall and Professor BA Glowacki published in July 2008 in Nuclear Engineering International. The concept was returned to at an Institute of Physics conference held in Manchester, England, in April 2010, in a presentation delivered by Professor WJ Nuttall and co-authored by Professor BA Glowacki and Dr L Bromberg. The journey to the term also involved thinking around hydrogen as a fuel and as a coolant from the superconductivity perspective. Earlier related consideration of liquid hydrogen as a cryogenic coolant includes work by Glowacki and co-authors from 2005 and 2006. The concept of hydrogen cryomagnetics has been further elaborated and discussed in 2012, 2015 and 2019.
Attributes
The emergence of hydrogen cryomagnetics can be expected to benefit from strong industrial interest in liquid hydrogen that is likely to develop for other reasons, including the growth of a general hydrogen economy and the need to transport and store bulk hydrogen. Global interest is growing in the emergence of a hydrogen economy in which hydrogen is a low-carbon energy carrier sourced from renewables (green hydrogen) or from natural gas with carbon capture and storage (sometimes termed "blue hydrogen"). When pipelines are unavailable, the use of liquefied hydrogen for the bulk transport and distribution of hydrogen has been found to be more efficient than high-pressure gas cylinders when moving large quantities over long distances. Hydrogen (as liquid or gas) is an energy storage system in competition with electric battery technology. Hydrogen wins out over batteries for the largest quantities of energy stored over the longest periods, and hydrogen fuel cells win out over battery electric technologies for the heaviest forms of transportation, such as trains, trucks and buses. Hydrogen technology is in competition with battery technology, and gaseous hydrogen technology is in competition with liquid hydrogen technology. As these competitive forces play out, it is quite possible that a significant role will emerge for liquid hydrogen as a stationary long-term, large-scale energy storage system and as a fuelling system for heavier vehicles. In such scenarios, the emerging economic role of liquid hydrogen production and distribution can be expected to greatly favour the subsequent use of hydrogen in cryomagnetic applications.
Avoiding the problems of helium
The conventional way to cool superconducting magnets is to use liquid helium (atmospheric-pressure boiling point 4.2 K). Helium is a by-product of the current natural gas industry, and its fluctuating price and availability have been a cause of much concern in recent years. Improved efficiency of use, and the avoidance of waste, can be expected to stretch helium supplies. Furthermore, the supply of natural-gas-sourced helium cannot necessarily be expected to continue if natural gas is phased out on the journey to net zero. There is a need for those helium-using sectors that can substitute away from helium to do so. Those users that could safely switch to hydrogen cryomagnetics could see a significant reduction in operating costs and avoid risks associated with helium supply scarcity.
Better electric motors
In the twentieth century the dominant type of electric motor was the induction motor, using tightly wound copper wire coils to generate the necessary internal magnetic fields. More recently, and in part spurred on by the growth in battery electric vehicles, there has been much innovation in permanent magnet motors. These rely on high-field permanent magnets made from rare earth minerals. Hydrogen cryomagnetics provides for the possibility of superconducting induction motors cooled by liquid hydrogen at approximately 20 K. Such cryogenic liquid might be available on a vehicle (such as an airplane, train, truck, bus or even car) if high-purity hydrogen is used for on-board fuel cell electricity generation.
Liquid hydrogen - a source of high purity hydrogen
The boil-off gas from a tank of liquid hydrogen can be expected to be extremely pure and clean; in a sense, the liquid hydrogen has been distilled. Extended operation of fuel cell electric vehicles, for example, depends on protecting fuel cell membranes and catalysts from contamination. Fuel cell degradation in use can have many causes, but fuel purity (both in normal conditions and in the case of refuelling equipment failure) can be expected to be a major concern for any system relying on high-pressure hydrogen gas handling.
Potential applications
Various potential applications of hydrogen cryomagnetics have been reviewed by Mojarrad and co-workers in 2022. Some potential applications are listed below.
Fusion energy
The concept of applied hydrogen cryomagnetics first emerged in connection with magnetically confined nuclear fusion. WJ Nuttall had proposed in 2004 that the commercialisation of fusion energy might be via the international oil companies rather than via electricity. For technical and economic reasons fusion energy might be a viable means to produce liquid hydrogen for the hydrogen economy in ways reminiscent of today's liquefied natural gas economy. Conventional tokamak fusion is likely to require very large amounts of expensive and scarce liquid helium to cool superconducting magnets. Liquid helium is a key consumable in the conventional paradigm. Noting the potential abundance of liquid hydrogen at a future fusion facility owned by one of today's international oil companies it would seem natural to use the cryogenic hydrogen to help break the dependence on helium. Hydrogen cryomagnetics has the potential to facilitate tokamak fusion energy. These ideas came together as a concept known as 'Fusion Island' developed by WJ Nuttall, BA Glowacki and RH Clarke. The Fusion Island concept was outlined further in 2008 and 2021. Commonwealth Fusion Systems in Massachusetts is actively exploring superconducting magnet technologies cooled to liquid hydrogen temperatures.
Aviation
Another significant opportunity for hydrogen cryomagnetics lies in low-emission aviation. Airbus, Rolls-Royce and collaborators have been pioneering the use of liquid hydrogen in aircraft propulsion. Writing in Aviation Week in April 2021, Thierry Dubois observed: "Airbus has launched an ambitious demonstration program for the use of superconducting technology. It is aiming at a major efficiency improvement. The idea stems from both the difficulty of designing an electric-propulsion architecture with conventional wiring and the opportunity to use liquid hydrogen as a cold source. Superconducting materials require cryogenic temperatures." Hydrogen cryomagnetics permits the on-aircraft use of hydrogen fuel cell technology to generate electricity to drive high-torque HTS-based electric motors capable of driving propellers or ducted fans at high efficiency. The Advanced Superconducting Motor Experimental Demonstrator (ASuMED) programme, funded by the European Union, is working on a 99% efficient superconducting aircraft engine with a power-to-weight ratio of 20 kW/kg. Researchers at Moscow Aviation Institute have proposed a design for a 5 MW hydrogen cryomagnetic aero engine. Even before the benefits to be obtained from the use of hydrogen cryomagnetic superconducting induction motors, hydrogen is attracting much interest as a low-emission aviation fuel of the future. Airbus has an active hydrogen program, as do other major industrial concerns in global aviation.
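As a purely illustrative back-of-envelope check, the two figures quoted above (the ASuMED target of 20 kW/kg and the 5 MW rating of the proposed Moscow Aviation Institute engine) can be combined to estimate the motor mass such numbers would imply; this is hypothetical arithmetic, not a specification from either programme.

```python
# Hypothetical arithmetic combining two separately quoted figures;
# not a specification of any actual engine.
power_kw = 5000.0        # 5 MW rating, as quoted above
specific_power = 20.0    # target power-to-weight ratio, kW/kg

mass_kg = power_kw / specific_power
print(f"Implied motor mass: {mass_kg:.0f} kg")  # ~250 kg
```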
Metals processing industry
Hydrogen cryomagnetics has potentially beneficial synergistic links with the emerging low-emission steel industry, as pioneered by SSAB in Sweden. Hydrogen is being developed as an alternative to coking coal for the reduction of iron ores to produce pig iron ('smelting'). The use of hydrogen for such purposes would greatly strengthen links between hydrogen and steel making. With that in mind, if a forge were to have access to cryogenic liquid hydrogen, then large-scale magnetic induction forging based upon hydrogen cryomagnetic technology could be economically very attractive, especially for billet heating.
References
Cryogenics
Cryogenics | Hydrogen cryomagnetics | [
"Physics"
] | 2,298 | [
"Applied and interdisciplinary physics",
"Cryogenics"
] |
61,872,938 | https://en.wikipedia.org/wiki/Phenylcarbylamine%20chloride | Phenylcarbylamine chloride is a chemical compound that was used as a chemical warfare agent. It is an oily liquid with an onion-like odor. Classified as an isocyanide dichloride, this compound is a lung irritant with lachrymatory effects.
Synthesis
Phenylcarbylamine chloride is produced by chlorination of phenyl isothiocyanate.
See also
Chloropicrin
Phosgene
References
Lachrymatory agents
Pulmonary agents
Phenyl compounds
Imino compounds
Organochlorides | Phenylcarbylamine chloride | [
"Chemistry"
] | 118 | [
"Chemical weapons",
"Organic chemistry stubs",
"Organic compounds",
"Lachrymatory agents",
"Organic compound stubs",
"Pulmonary agents"
] |