| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
40,442,433 | https://en.wikipedia.org/wiki/Acetylfentanyl | Acetylfentanyl (acetyl fentanyl) is an opioid analgesic drug that is an analog of fentanyl. Studies have estimated acetylfentanyl to be 15 times more potent than morphine, which would mean that despite being somewhat weaker than fentanyl, it is nevertheless still several times stronger than pure heroin. It has never been licensed for medical use and instead has only been sold on the illicit drug market. Acetylfentanyl was discovered at the same time as fentanyl itself and had only rarely been encountered on the illicit market in the late 1980s. However, in 2013, Canadian police seized 3 kilograms of acetylfentanyl. As a μ-opioid receptor agonist, acetylfentanyl may serve as a direct substitute for oxycodone, heroin or other opioids. Common side effects of fentanyl analogs are similar to those of fentanyl itself, which include itching, nausea, and potentially fatal respiratory depression. Fentanyl analogs have killed hundreds of people throughout Europe and the former Soviet republics since the most recent resurgence in use began in Estonia in the early 2000s, and novel derivatives continue to appear.
Deaths
Europe
Acetylfentanyl has been analytically confirmed in 32 fatalities in four European member states between 2013 and August 2015, Germany (2), Poland (1), Sweden (27), and the United Kingdom (2).
Russia
Twelve deaths have been associated with acetylfentanyl in Russia since 2012.
United States
The Centers for Disease Control and Prevention (CDC) issued a health alert reporting that between March 2013 and May 2013, 14 overdose deaths related to injected acetylfentanyl had occurred among intravenous drug users (aged 19 to 57 years) in Rhode Island. After confirming five overdoses in one county, including a fatality, Pennsylvania asked coroners and medical examiners across the state to screen for acetylfentanyl. As a result of this investigation, Pennsylvania confirmed at least one acetylfentanyl overdose death and attributed at least 50 fatalities to either fentanyl or acetylfentanyl during the first half of 2013. In July 2015, the DEA reported 52 confirmed fatalities involving acetylfentanyl in the United States between 2013 and 2015.
Japan
One fatal poisoning caused by intravenous injection of a "bath salt" product containing acetylfentanyl mixed with 4'-Methoxy-α-pyrrolidinopentiophenone (a substituted cathinone) has been reported in 2016.
Legal status
Canada
As an analog of fentanyl, acetylfentanyl is a Schedule I controlled drug.
China
As of October 2015, acetylfentanyl is a controlled substance in China.
United States
Acetylfentanyl is a Schedule I controlled substance as of May 2015.
Switzerland
Acetylfentanyl is a controlled substance in Switzerland.
United Kingdom
Acetylfentanyl was made a class A drug as an analogue of fentanyl in 1986.
Overdose
Acetylfentanyl overdosage has been reported to closely resemble heroin overdosage clinically. Additionally, while naloxone (Narcan) is effective in treating acetylfentanyl overdose, larger than normal doses of the antidote may be required.
Detection in body fluids
Acetylfentanyl may be quantitated in blood, plasma, or urine by liquid chromatography-mass spectrometry to confirm a diagnosis of poisoning in hospitalized patients or to provide evidence in a medicolegal death investigation. Postmortem peripheral blood acetylfentanyl concentrations have been in a range of 89–945 μg/L in victims of acute overdosage.
See also
3-Methylbutyrfentanyl
3-Methylfentanyl
4-Fluorofentanyl
α-Methylfentanyl
Butyrfentanyl
Furanylfentanyl
Homofentanyl
List of fentanyl analogues
References
Further reading
General anesthetics
Synthetic opioids
Piperidines
Anilides
Acetamides
Mu-opioid receptor agonists
Janssen Pharmaceutica
Belgian inventions
Euphoriants
Fentanyl | Acetylfentanyl | [
"Chemistry"
] | 895 | [
"Highly-toxic chemical substances",
"Harmful chemical substances"
] |
40,442,731 | https://en.wikipedia.org/wiki/Single-phase%20generator | Single-phase generator (also known as single-phase alternator) is an alternating current electrical generator that produces a single, continuously alternating voltage. Single-phase generators can be used to generate power in single-phase electric power systems. However, polyphase generators are generally used to deliver power in three-phase distribution systems, and the current is converted to single-phase near the single-phase loads instead. Therefore, single-phase generators are most often found in applications where the loads being driven are relatively light and not connected to a three-phase distribution system, for instance portable engine-generators. Larger single-phase generators are also used in special applications such as single-phase traction power for railway electrification systems.
Designs
Revolving armature
In a revolving armature generator, the armature is on the rotor and the magnetic field is on the stator. A basic design, called an elementary generator, uses a rectangular loop armature that cuts the lines of force between the north and south poles; by cutting lines of force through rotation, it produces an electric current. The current is sent out of the generator unit through two sets of slip rings and brushes, one for each end of the armature. In this two-pole design, as the armature rotates one revolution, it generates one cycle of single-phase alternating current (AC). To generate an AC output, the armature is rotated at a constant speed, with the number of rotations per second matching the desired frequency (in hertz) of the AC output.
The relationship of armature rotation to the AC output can be seen in this series of pictures. Due to the circular motion of the armature against the straight lines of force, a variable number of lines of force will be cut even at a constant speed of motion. At zero degrees, the rectangular arm of the armature does not cut any lines of force, giving zero voltage output. As the armature arm rotates at a constant speed toward the 90° position, more lines are cut. The most lines of force are cut when the armature is at the 90° position, giving the maximum current in one direction. As it turns toward the 180° position, fewer lines of force are cut, reducing the voltage until it becomes zero again at the 180° position. The voltage starts to increase again as the armature heads toward the opposite pole at the 270° position. Toward this position, the current is generated in the opposite direction, giving the maximum voltage of the opposite polarity. The voltage decreases again as the armature completes the full rotation. In one rotation, the AC output is produced with one complete cycle, as represented by the sine wave.
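The voltage-versus-angle behaviour described above is a plain sine function of the rotation angle. A minimal sketch in Python (the peak voltage value is illustrative only, since the real peak depends on flux density, rotation speed and winding details):

```python
import math

E_MAX = 10.0  # illustrative peak voltage in volts (an assumed value)

def induced_voltage(angle_deg):
    """EMF of the elementary two-pole generator at a given armature angle.

    No lines of force are cut at 0 and 180 degrees (zero output); the most
    are cut at 90 and 270 degrees, where the voltage peaks in opposite
    directions, tracing one sine-wave cycle per revolution.
    """
    return E_MAX * math.sin(math.radians(angle_deg))

for angle in (0, 90, 180, 270):
    print(f"{angle:3d} deg -> {induced_voltage(angle):+.1f} V")
# prints +0.0, +10.0, +0.0, -10.0
```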
More poles can also be added to a single-phase generator to allow one rotation to produce more than one cycle of AC output. In the example on the left, the stator is reconfigured to have 4 equally spaced poles, with each north pole adjacent to two south poles. The shape of the armature at the rotor is also changed: it is no longer a flat rectangle, and the arm is bent 90 degrees. This allows one side of the armature to interact with a north pole while the other side interacts with a south pole, similarly to the two-pole configuration. The current is still delivered out through the two sets of slip rings and brushes in the same fashion as in the two-pole configuration. The difference is that a cycle of AC output is completed after a 180-degree rotation of the armature, so in one rotation the AC output will be two cycles. This increases the frequency of the generator's output. More poles can be added to achieve a higher frequency at the same rotation speed, or the same output frequency at a lower rotation speed, depending on the application (see the sketch below).
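The pole-count/frequency trade-off described here follows the standard relation for synchronous machines: each pole pair contributes one AC cycle per revolution. A minimal sketch:

```python
def output_frequency_hz(pole_count, rpm):
    """AC output frequency: each pole pair gives one cycle per revolution,
    so f = (poles / 2) * (revolutions per second)."""
    return (pole_count / 2) * (rpm / 60)

print(output_frequency_hz(2, 3600))  # two-pole machine at 3600 rpm -> 60.0 Hz
print(output_frequency_hz(4, 1800))  # four poles give 60 Hz at half the speed
```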
This design also allows the output voltage to be increased by modifying the shape of the armature. More rectangular loops can be added to the armature, as seen in the picture on the right. The additional loops at the armature arm are connected in series; they are actually additional windings of the same conductor wire, forming a coil of rectangular shape. In this example, there are 4 windings in the coil. Since the shapes of all windings are the same, the lines of force are cut in the same amount, in the same direction, at the same time in all windings. This creates an in-phase AC output for these 4 windings. As a result, the output voltage is increased 4 times, as shown in the sine wave in the diagram.
Revolving field
In a revolving field generator, the armature is on the stator and the magnetic field is on the rotor. A basic design of a revolving field single-phase generator is shown on the right. There are two magnetic poles, north and south, attached to a rotor, and two coils, connected in series and equally spaced, on the stator. The two coils are wound in opposite directions so that the current flows in the same direction, because the two coils always interact with opposing polarities. Since poles and coils are equally spaced and the locations of the poles match the locations of the coils, the magnetic lines of force are cut in equal amounts at any rotor angle. As a result, the voltages induced in all windings have the same value at any given time, and the voltages from both coils are "in phase" with each other. Therefore, the total output voltage is two times the voltage induced in each winding. In the figure, at the position where pole number 1 and coil number 1 meet, the generator produces the highest output voltage in one direction. As the rotor turns 180 degrees, the output alternates to produce the highest voltage in the other direction. The frequency of the AC output in this case equals the number of rotations of the rotor per second.
This design also allows the output frequency to be increased by adding more poles. In the example on the right, there are 4 coils connected in series on the stator, and the field rotor has 4 poles. Both coils and poles are equally spaced. Each pole has the opposite polarity to its neighbors, which are angled at 90 degrees, and each coil is likewise wound opposite to its neighbors. This configuration allows the lines of force at the 4 poles to be cut by the 4 coils in equal amounts at any given time. At each 90-degree rotation, the polarity of the output voltage switches from one direction to the other. Therefore, there are 4 cycles of AC output in one rotation. As the 4 coils are wired in series and their outputs are "in phase", the AC output of this single-phase generator will have 4 times the voltage generated by each individual coil.
A benefit of the revolving field design is that if the poles are permanent magnets, no slip rings or brushes are needed to deliver electricity out of the generator, as the coils are stationary and can be wired directly from the generator to the external loads.
Small generators
The single-phase generators that people are most familiar with are usually small. Typical applications are standby generators, used in case the main power supply is interrupted, and supplying temporary power on construction sites.
Another application is in small wind technology. Although most wind turbines use three-phase generators, single-phase generators are found in some small wind turbine models with rated power outputs of up to 55 kW. Single-phase models are available in both vertical-axis wind turbines (VAWT) and horizontal-axis wind turbines (HAWT).
Power stations
In the very early days of electricity generation, the generators at power stations were single-phase AC or direct current. The direction of the power industry changed in 1895, when more efficient polyphase generators were successfully implemented at the Adams Hydroelectric Generating Plant, the first large-scale polyphase power station, and newer power stations started to adopt the polyphase system. In the 1900s, many railways started to electrify their lines. During that time, the single-phase AC system was widely used for traction power networks alongside the direct current system, and the early generators for those single-phase traction networks were single-phase generators. Even with the newer three-phase motors introduced on some modern trains, single-phase transmission for traction networks has outlived its era and is still in use on many railways today. However, many traction power stations have over time replaced their generators with three-phase generators and convert the output to single-phase for transmission.
Hydro
In the early development of hydroelectricity, single-phase generators played an important role in demonstrating the benefits of alternating current. In 1891, a 100-horsepower single-phase generator producing 3,000 volts at 133 Hz was installed at the Ames Hydroelectric Generating Plant, belt-connected to a Pelton water wheel. The power was transmitted through cables to drive an identical machine operating as a motor at the mill. The plant was the first to generate alternating current electric power for industrial application, and it was a demonstration of the efficiency of AC transmission. This set a precedent for much larger plants such as the Edward Dean Adams Power Plant in Niagara Falls, New York in 1895. However, the larger plants were operated using polyphase generators for greater efficiency, which left single-phase hydroelectric generation to special cases such as light loads.
An example of using single-phase generation in a special case was implemented in 1902 at the St. Louis Municipal Electric Power Plant. A 20 kW single-phase generator was direct-connected to a Pelton water wheel to generate enough electricity to power light loads. This was an early demonstration of in-conduit hydro, capturing energy from water flow in the public water pipeline. The energy in the water main in this case was not provided by gravity; rather, the water was pumped by a larger steam engine at a water pumping station to supply water to customers. The decision to pump water with a larger engine and then take some of the energy from the water flow to power a smaller water-wheel generator was based on cost: at the time, steam engines were not efficient and cost-effective at the 20 kW scale. Therefore, a steam water pump was installed with enough capacity to maintain water pressure for customers and to drive a small generator at the same time.
The main use of single-phase hydroelectric generation today is to supply power for railway traction networks. Many railway transmission networks, especially in Germany, rely on single-phase generation and transmission that are still in use today. A notable power station is the Walchensee Hydroelectric Power Station in Bavaria. The station takes water from the elevated Lake Walchensee to drive eight turbines. Four of the generators are three-phase units supplying the power grid; the other four are single-phase generators connected to Pelton turbines, with a combined capacity of 52 MW, supplying the German 15 kV AC railway electrification system.
Similar single-phase hydroelectric generation is also used in another variant of railway electrification system in the United States. A power station at Safe Harbor Dam in Pennsylvania provides power generation both for public utilities and for the Amtrak railway. Two of its 14 turbines are connected to single-phase generators supplying Amtrak's 25 Hz traction power system. The two turbines are five-blade Kaplan units rated at 42,500 horsepower.
Steam
In the early years, steam engines were used as prime movers of generators. An installation at the St. Louis Municipal Electric Power Plant in the 1900s was an example of using steam engines with single-phase generators: the plant used a compound steam engine to drive a 100 kW single-phase generator rated at 1,150 volts.
Steam engines were also used during the twentieth century in power stations for traction networks with single-phase power distribution for specific railways. A set of single-phase generators driven by steam turbines at the Waterside Generating Station in New York City in 1938 was an example of such a generation and distribution system. The single-phase generators were eventually retired in the late 1970s due to concerns over a turbine failure at another station. The generators were replaced by two transformers fed from another three-phase power source and stepping down to the existing single-phase catenary supply. Eventually, the transformers were replaced by two solid-state cycloconverters.
Nuclear
Normally, nuclear power plants are used as base load stations with very high capacities to supply power to the grids. Neckarwestheim I in Neckarwestheim is unique among nuclear power plants in that it is equipped with high-capacity single-phase generators to supply the Deutsche Bahn railway with AC power at a frequency of 16 2/3 Hz. The pressurized water reactor transports thermal energy to two turbine-generator sets rated at 187 MW and 152 MW.
See also
Alternator
Polyphase coil
References
Electrical generators
AC power | Single-phase generator | [
"Physics",
"Technology"
] | 2,646 | [
"Physical systems",
"Electrical generators",
"Machines"
] |
40,446,411 | https://en.wikipedia.org/wiki/Cell%20Calcium | Cell Calcium is a monthly peer-reviewed scientific journal published by Elsevier that covers the field of cell biology and focuses mainly on calcium signalling and metabolism in living organisms.
Abstracting and indexing
The journal is abstracted and indexed in several bibliographic databases.
According to the Journal Citation Reports, Cell Calcium has a 2022 impact factor of 4.0.
References
External links
Elsevier academic journals
Molecular and cellular biology journals
Monthly journals
Academic journals established in 1980
English-language journals | Cell Calcium | [
"Chemistry"
] | 92 | [
"Molecular and cellular biology journals",
"Molecular biology"
] |
45,122,752 | https://en.wikipedia.org/wiki/S%C3%B8ren%20Galatius | Søren Galatius (born 1 August 1976) is a Danish mathematician who works as a professor of mathematics at the University of Copenhagen. He works in algebraic topology, where one of his most important results concerns the homology of the automorphisms of free groups. He is also known for his joint work with Oscar Randal-Williams on moduli spaces of manifolds, comprising several papers.
Life
Galatius was born in Randers, Denmark. He earned his PhD from Aarhus University in 2004 under the supervision of Ib Madsen. He then joined the Stanford University faculty, first with a temporary position as a Szegő Assistant Professor and then two years later with a tenure-track position, eventually becoming full professor in 2011. He relocated to the University of Copenhagen in 2016.
Recognition
In 2010, Galatius won the Silver Medal of the Royal Danish Academy of Sciences and Letters.
In 2012, he became one of the inaugural fellows of the American Mathematical Society.
He was an invited speaker at the 2014 International Congress of Mathematicians, speaking about his joint work with Oscar Randal-Williams. In 2017, he won an Elite Research Prize from the Danish Government for his work. In 2022 he was awarded the Clay Research Award jointly with Oscar Randal-Williams.
Selected publications
References
External links
1976 births
Living people
Danish mathematicians
21st-century American mathematicians
Aarhus University alumni
Stanford University Department of Mathematics faculty
Fellows of the American Mathematical Society
People from Randers
Topologists
Academic staff of the University of Copenhagen | Søren Galatius | [
"Mathematics"
] | 299 | [
"Topologists",
"Topology"
] |
45,135,496 | https://en.wikipedia.org/wiki/Second%20solar%20spectrum | The second solar spectrum is an electromagnetic spectrum of the Sun that shows the degree of linear polarization. The term was coined by V. V. Ivanov in 1991. The polarization is at a maximum close to the limb (edge) of the Sun, thus the best place to observe such a spectrum is from just inside the limb. It is also possible to get polarized light from outside the limb, but since this is much dimmer compared to the disk of the Sun, it is very easily polluted by scattered light.
The second solar spectrum differs significantly from the solar spectrum determined by the intensity of light.
Large effects occur around the Ca II K and H lines. These have broad signatures about 200 Å wide and show a sign reversal at their centers. Molecular lines from MgH and C2 with stronger polarization than the background are common. Rare-earth elements stand out far more than expected from the intensity spectrum.
Other odd lines include Li I at 6708 Å, which has 0.005% more polarization at its peak but is almost unobservable in the intensity spectrum. The Ba II line at 4554 Å appears as a triplet in the second solar spectrum, due to differing isotopes and hyperfine structure.
Two lines, at 5896 Å and 4934 Å, the D1 lines of sodium and barium respectively, were predicted not to be polarized, but are nevertheless present in this spectrum.
Continuum
The continuum in the spectrum is the light with wavelengths between the lines. Polarization in the continuum is due to Rayleigh scattering by neutral hydrogen atoms (H I) and Thomson scattering by free electrons. Most of the opacity in the sun is due to the hydride ion, H− which however does not alter polarization. In 1950 Subrahmanyan Chandrasekhar came up with a solution for the degree of polarization due to scattering, and predicted 11.7% polarization at the limb of the Sun. But nowhere near this level is observed. What happens at the limb is that there is a forest of spicules poking out from the edge, so it is not possible to get parallel to such a rough surface.
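For reference, the degree of linear polarization of singly Rayleigh-scattered light is a textbook result (not stated in this article); it is maximal at a 90° scattering angle, which is why limb light, scattered roughly at right angles toward the observer, is the most polarized. Chandrasekhar's 11.7% figure comes from solving the full multiple-scattering radiative transfer problem rather than this single-scattering limit:

```latex
% Degree of linear polarization after a single Rayleigh scattering
% through scattering angle \theta (single-scattering limit):
P(\theta) = \frac{\sin^2\theta}{1 + \cos^2\theta},
\qquad P(90^\circ) = 1 \ \text{(fully polarized)}
```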
For most of the solar disk the degree of linear polarization of the continuum is under 0.1%, but it rises to 1% at the limb. The polarization also depends strongly on the wavelength, and for near ultraviolet 3000 Å the light near the limb is 100 times more polarized than red light at 7000 Å. At the limit of the Balmer series a change happens where at shorter wavelengths more bound-bound Balmer series transitions cause more opacity. This extra opacity drops the polarization degree by a factor of two near 3746 Å.
References
Spectroscopy | Second solar spectrum | [
"Physics",
"Chemistry"
] | 556 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
45,136,023 | https://en.wikipedia.org/wiki/Mechanical%20device%20test%20stands | A mechanical device test stand is one specific type of test stand. It is a facility used to develop, characterize and test mechanical components. The facility allows the component to be tested, and it offers measurement of several physical variables associated with the component's functionality. Such components could be electromechanical devices, motors or tools. The intended uses of the test stand are compliance testing against predetermined desired values and fatigue testing.
A sophisticated mechanical component test stand houses several integrated measurement and control (imc) components, such as sensors, data acquisition devices and actuators to control the component. The sensors measure several physical variables, such as:
Strain/multi-axial strain
Mechanical stress
Rigidity/stiffness
Angle
Vibration/oscillation signals
Information gathered from the sensors is processed and logged through the use of data acquisition systems. Actuators allow for attaining a desired state. Test stands for mechanical devices are often custom-built according to the requirements of the customer. They often include a feedback control system.
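As a sketch of the feedback control loop mentioned above, the following generic PID controller drives an actuator toward a setpoint; the sensor and actuator functions are hypothetical placeholders, not part of any real test-stand API:

```python
class PIDController:
    """Generic PID feedback controller: computes an actuator command
    that drives a measured variable toward a setpoint."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical usage: hold a test article at a target strain of 0.002
pid = PIDController(kp=120.0, ki=15.0, kd=2.0, setpoint=0.002)
# command = pid.update(read_strain_gauge(), dt=0.01)  # placeholder sensor read
# drive_actuator(command)                             # placeholder actuator call
```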
Applications for mechanical device test stands
Research and Development of components, motors and tools (e.g., the testing of sun roofs, engine hoods, welding equipment, etc.), typically found in factory environments.
End-of-production-line testing at an OEM factory, where swapping of tested components can take place automatically.
Mechanical device testing for research and development
Research and Development (R&D) activities on mechanical components have necessitated sophisticated mechanical component test stands. For example, automobile OEMs or aviation OEMs are usually interested in developing mechanical components that meet the following objectives:
High durability
High efficiency
High performance and quality
Low cost
Consequently, R&D mechanical component test stands perform a variety of exercises including the measurement, control and recording of several relevant component variables.
Typical tests include
Determine efficiency
Structural analysis
Determine durability: e.g., aging tests
Gain further knowledge about the mechanical component
Fourier analysis
Order analysis
Fatigue analysis
See also
engine test stand
electric motor test stand
References
Mechanical tests
Test equipment | Mechanical device test stands | [
"Engineering"
] | 409 | [
"Mechanical tests",
"Mechanical engineering"
] |
46,493,377 | https://en.wikipedia.org/wiki/The%20Algorithm%20Auction | The Algorithm Auction is the world's first auction of computer algorithms. Created by Ruse Laboratories, the initial auction featured seven lots and was held at the Cooper Hewitt, Smithsonian Design Museum on March 27, 2015.
Five lots were physical representations of famous code or algorithms, including a signed, handwritten copy of the original Hello, World! C program by its creator Brian Kernighan on dot-matrix printer paper, a printed copy of 5,000 lines of Assembly code comprising the earliest known version of Turtle Graphics, signed by its creator Hal Abelson, a necktie containing the six-line qrpff algorithm capable of decrypting content on a commercially produced DVD video disc, and a pair of drawings representing OkCupid's original Compatibility Calculation algorithm, signed by the company founders. The qrpff lot sold for $2,500.
Two other lots were “living algorithms,” including a set of JavaScript tools for building applications that are accessible to the visually impaired and the other is for a program that converts lines of software code into music. Winning bidders received, along with artifacts related to the algorithms, a full intellectual property license to use, modify, or open-source the code. All lots were sold, with Hello World receiving the most bids.
Exhibited alongside the auction lots were a facsimile of the Plimpton 322 tablet on loan from Columbia University, and Nigella, an art-world facing computer virus named after Nigella Lawson and created by cypherpunk and hacktivist Richard Jones.
Sebastian Chan, Director of Digital & Emerging Media at the Cooper–Hewitt, attended the event remotely from Milan, Italy via a Beam Pro telepresence robot.
Effects
Following the auction, the Museum of Modern Art held a salon titled The Way of the Algorithm highlighting algorithms as "a ubiquitous and indispensable component of our lives."
References
Algorithms
2015 in computing
Contexts for auctions | The Algorithm Auction | [
"Mathematics"
] | 391 | [
"Algorithms",
"Mathematical logic",
"Applied mathematics"
] |
46,494,027 | https://en.wikipedia.org/wiki/LibreCMC | LibreCMC is a GNU/Linux-libre distribution for computers with minimal resources, such as the Ben NanoNote, ath9k-based Wi-Fi routers, and other hardware with emphasis on free software. Based on OpenWrt, the project's goal is to aim for compliance with the GNU Free System Distribution Guidelines (GNU FSDG) and ensure that the project continues to meet these requirements set forth by the Free Software Foundation (FSF). LibreCMC does not support ac (Wi-Fi 5) or ax (Wi-Fi 6) due to a lack of free chipsets.
As of 2020, releases no longer use codenames. The acronym "CMC" in the libreCMC name stands for "Concurrent Machine Cluster".
History
On April 23, 2014, libreCMC's first public release was mentioned in a Trisquel forum. On September 4, 2014, the Free Software Foundation (FSF) added libreCMC to its list of endorsed distributions. Shortly afterwards, on September 12, 2014, the FSF awarded their Respects Your Freedom (RYF) certification to a new router pre-installed with libreCMC.
On May 2, 2015, libreCMC merged with the LibreWRT project. LibreWRT, initially developed as a case study, was listed by the website prism-break.org as one of the alternatives to proprietary firmware, but today the website lists libreCMC.
On March 10, 2016, the FSF awarded their RYF certification to a new router pre-installed with libreCMC.
On March 29, 2017, libreCMC began its first release based upon the LEDE (Linux Embedded Development Environment) 17.01 codebase.
On January 3, 2020, libreCMC began its first release based upon the OpenWrt 19.07 codebase.
Release history
List of supported hardware
LibreCMC supports the following devices:
Buffalo (Melco subsidiary)
WZR-HP-G300NH
WHR-HP-G300NH
Netgear
WNDR3800: v1.x
TP-Link
TL-MR3020: v1
TL-WR741ND: v1 - v2, v4.20 - v4.27
TL-WR841ND: v5.x, v8.x, v9.x, v10.x, v11.x, v12.x
TL-WR842ND: v1, v2
TL-WR1043ND: v1.x, v2.x, v3.x, v4.x, v5.x
ThinkPenguin
TPE-NWIFIROUTER2
TPE-R1100
TPE-R1200
TPE-R1300
TPE-R1400
Qi-Hardware
Ben Nanonote
See also
Comparison of Linux distributions
Linksys WRT54G series
List of router firmware projects
References
External links
LibreCMC package installation tutorial
Build automation
Custom firmware
Embedded Linux
Embedded Linux distributions
Free routing software
Free software only Linux distributions
Homebrew software
Wi-Fi
Linux distributions without systemd
Linux distributions | LibreCMC | [
"Technology"
] | 658 | [
"Wireless networking",
"Wi-Fi"
] |
46,496,928 | https://en.wikipedia.org/wiki/Fotmal | The fotmal (, "foot-measure"; ), also known as the foot (), formel, fontinel, and fotmell, was an English unit of variable weight particularly used in measuring production, sales, and duties of lead.
Under the Assize of Weights and Measures, it was equal to 70 Merchants' pounds and made up a fraction of a load of lead. Elsewhere, it was made of 70 avoirdupois pounds and likewise made up a fraction of a load. According to Kiernan, in 16th-century Derbyshire, the fotmal was divided into "boles" and made up a fraction of a fother, meaning it was considered to be 84 avoirdupois pounds.
It continued to be used until the 16th century.
References
Citations
Bibliography
Obsolete units of measurement
Units of mass | Fotmal | [
"Physics",
"Mathematics"
] | 167 | [
"Obsolete units of measurement",
"Matter",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
46,498,257 | https://en.wikipedia.org/wiki/Cenocell | Cenocell is a patented concrete-like structural material that is manufactured without the addition of Portland cement. It was invented by Mulalo Doyoyo at the Georgia Institute of Technology.
Cenocell is produced from a chemical reaction involving fly ash or bottom ash and various organic chemicals. The chemical reaction produces foaming and results in a grey slurry that resembles bread dough. The mixture is then cured in ovens at temperatures near 100°C. The result is a homogeneous mixture with high strength and low weight.
See also
Sulfur concrete
References
Concrete
Building materials
Composite materials
Sustainable building
Heterogeneous chemical mixtures
South African inventions | Cenocell | [
"Physics",
"Chemistry",
"Engineering"
] | 134 | [
"Structural engineering",
"Sustainable building",
"Building engineering",
"Composite materials",
"Architecture",
"Construction",
"Materials",
"Chemical mixtures",
"Heterogeneous chemical mixtures",
"Concrete",
"Matter",
"Building materials"
] |
46,502,056 | https://en.wikipedia.org/wiki/Lead%20burning | Lead burning is a welding process used to join lead sheet. It is a manual process carried out by gas welding, usually oxy-acetylene.
Uses
Lead burning is carried out for roofing work in sheet lead, or for the formation of custom-made rainwater goods: gutters, downspouts and decorative hoppers. Decorative leadworking may also use lead burning, particularly where a waterproof joint is required as for planters. Lead burning is thus part of traditional plumber's work, in its original sense of a worker in lead (Latin: plumbum). Although rare and specialised, this work is still carried out today and not just for restoration of historical buildings. Most lead sheet work is formed and sealed by bossing, a mechanical fold or crimp. This is adequate for roofing that sheds water, but is insufficiently watertight when standing water sits upon it and so an impermeable burned joint is needed.
Lead burning is not used as part of plumbing work for installed pipework. Lead piping has long been considered obsolete, owing to the health aspects. Even where lead piping, or lead-sheathed cable, still needs to be jointed, this is carried out with a wiped joint, rather than a burned joint. Wiping a lead joint is a soldering process, using plumber's solder (80% lead / 20% tin) and is carried out at low temperature, with a natural-draught propane blowtorch. Today, even wiped joints are rare and where an existing lead pipe must be connected to, a proprietary mechanical joint is more likely to be used.
In some rare cases within the chemical industry, lead burning is used for pipework, where acid-resistant tanks and pipes are required to be made of lead rather than steel. Niche uses for lead burning include the manufacture of lead plates for lead-acid batteries and for electro-plating electrodes.
Process
Lead burning is an autogenous welding process. Two sheets of lead are formed mechanically to lie close against each other. They are then heated with the torch flame and flow together. No filler rod is required: the sheets form their own filler (autogenous welding). Neither is a flux used. Soldering, by contrast, uses a compatible solder alloy showing eutectic behaviour, which gives a melting point lower than that of the base metal and allows a soldering process rather than welding. A filler rod may be needed for some welds, if there is no convenient way to form a sufficiently close overlap at a sheet edge; offcuts of the same lead sheet are used as this filler. Excessive use of a filler, rather than an initial close fit, is considered a sign of poor technique.
The torch used for lead burning is a small, hot, gas flame. Oxy-acetylene is most commonly used, as it is easily portable. A small size #0 nozzle is usually used, sometimes with a miniature torch body, but the torch is otherwise the same as that used for steel or copper work. A variety of fuel gases may be used, but to achieve the high temperature needed, an oxygen supply is always used. Fuel gases may be acetylene, natural gas or hydrogen. Oxy-hydrogen is considered to be the best, but is not easily portable. Oxy-natural gas is cheapest and is often used on fixed workbenches. As it is less hot, it cannot be used for some awkward positional (overhead) welding. Oxy-acetylene is the most common, as much leadwork is carried out on site and this is easily portable.
A neutral flame is used. A reducing flame (fuel rich) gives trouble with soot deposits in the weld. An oxidising flame burns the lead and creates lead oxide dross, leading to poor welds with low malleability.
History
Lead burning requires a gas torch as autogenous processes require an intense, controllable flame that can be applied to a small area. It was first developed along with the early growth of the bulk chemical industry, as acid manufacture required leakproof lead vessels and flow process plumbing to be made. At the same time, coal gas was increasingly available for domestic lighting. By using a mouth-blown blowpipe, a gas flame could reach temperatures adequate for lead burning. Larger equipment could use mechanical fans.
Before this, leadworking used either manual bossing or wiped soldered joints to seal it.
Safety
Fire risk
Lead burning, and lead soldering, are among the few building processes which still require the on-site use of a naked flame. This has obvious safety hazards, and lead working has been implicated in some fires during restoration work on historical buildings.
Health
Metallic lead is relatively safe to work with, although lead oxide dross formed on the surface of lead is more easily absorbed by the body, thus much more of a hazard. As lead burning is a high temperature process, it creates a significant hazard from such dross. Safety precautions are relatively simple: goggles to protect the eyes from molten metal splash, overalls or dustcoat kept in the lead workshop to stop contamination spreading, and dedicated workbenches equipped with air extraction.
Regular lead burners should be screened for accumulated lead exposure. Industrially this is done by weekly checks for blue lines around the gums, a simple indicator for heavy metal poisoning, and by regular urine testing.
See also
Operation Pluto, a WWII petrol pipeline built from lead piping joined by burning
Wiped joint, a soldering process for lead
References
Welding
Burning | Lead burning | [
"Engineering"
] | 1,143 | [
"Welding",
"Mechanical engineering"
] |
46,502,363 | https://en.wikipedia.org/wiki/Protein%20%26%20Cell | Protein & Cell is a monthly peer-reviewed open access journal covering protein and cell biology. It was established in 2010 and is published by Springer Science+Business Media. The editor-in-chief is Zihe Rao (Nankai University). According to the Journal Citation Reports, the journal has a 2018 impact factor of 7.575.
Genetic modification of human embryos controversy
In 2015, the journal sparked controversy when it published a paper reporting results of an attempt to alter the DNA of non-viable human embryos to correct a mutation that causes beta thalassemia, a lethal heritable disorder. According to the paper's lead author, the paper had previously been rejected by both Nature and Science in part because of ethical concerns; the journals did not comment to reporters.
References
External links
Springer Science+Business Media academic journals
Academic journals established in 2010
Monthly journals
Molecular and cellular biology journals
Open access journals | Protein & Cell | [
"Chemistry"
] | 185 | [
"Molecular and cellular biology journals",
"Molecular biology"
] |
46,504,226 | https://en.wikipedia.org/wiki/Normal%20homomorphism | In algebra, a normal homomorphism is a flat ring homomorphism f : A → B such that, for every prime ideal p of A and every field extension L of the residue field κ(p), the ring B ⊗_A L is a normal ring.
References
Ring theory
Morphisms | Normal homomorphism | [
"Mathematics"
] | 46 | [
"Functions and mappings",
"Algebra stubs",
"Mathematical structures",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations",
"Algebra",
"Morphisms"
] |
46,504,669 | https://en.wikipedia.org/wiki/Zenith%20Flash-matic | The Zenith Flash-Matic was the first wireless remote control, invented by Eugene Polley in 1955. It had only one button that was used to power on and off, channel up, channel down, and mute. The Flash-matic's phototechnology was a significant innovation in television and allowed for wireless signal transfer previously exclusive to radio.
Design and production
Earlier remotes could turn sets on/off and change channels, but were connected to the TV with a cable. The Flash-matic came in response to consumer complaints about the inconvenience of these cables running from the transmitter to the TV monitor. Earlier remotes served as the central control system for transmitting complex signals to the receiving monitor. The Flash-matic instead placed the complexity in the receiver as opposed to the transmitter. It used a directional beam of light to control a television outfitted with four photo cells in the corners of the screen. The light signal would activate one of the four control functions, which turned the picture and sound on or off, and turned the channel tuner dial clockwise or counter-clockwise. The bottom receptors received the signal to mute and power on/off, and the upper cells received signals to channel up/down. In order for the light beam to be received by the monitor, the remote control had to be directed towards one of the four photocells. The system responded to full-spectrum light so it could be activated or interfered with by other light sources including indoor light bulbs and the sun. Despite these defects, the Flash-matic remained in high demand. In September 1955, Zenith apologized for its inability to meet the consumer demand. The Flash-matic was soon replaced by better control systems. The "Zenith Space Command" remote control went into production in 1956 with aims to improve upon the Flash-matic's design.
Advertising campaign
The Flash-matic was marketed as an interactive technology and tuning device for the television. Muting was the first control feature to be included on a remote control but unavailable on the monitor. The advertising campaign for the Flash-matic remote control emphasized its ability to "shut off annoying commercials while the picture remains on the screen." The mute button was explicitly designed for the purpose of tuning out the commercials. As a result of this new feature as well as the Flash-matic's pistol shape, the Flash-matic's ad campaigns invited viewers to "shoot" the annoying commercials or announcers that they were tired of listening to, while leaving the image present so that the viewer would know when to turn the sound back on.
See also
Zenith Space Commander, mechanical ultrasonic remote control of the 1960s
References
Further reading
External links
Owner's manual
Remote control
Television technology
Products introduced in 1955 | Zenith Flash-matic | [
"Technology"
] | 558 | [
"Information and communications technology",
"Television technology"
] |
46,504,825 | https://en.wikipedia.org/wiki/Open%20letter%20on%20artificial%20intelligence%20%282015%29 | In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable. The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.
Background
By 2014, both physicist Stephen Hawking and business magnate Elon Musk had publicly voiced the opinion that superhuman artificial intelligence could provide incalculable benefits, but could also end the human race if deployed incautiously. At the time, Hawking and Musk both sat on the scientific advisory board for the Future of Life Institute, an organisation working to "mitigate existential risks facing humanity". The institute drafted an open letter directed to the broader AI research community, and circulated it to the attendees of its first conference in Puerto Rico during the first weekend of 2015. The letter was made public on January 12.
Purpose
The letter highlights both the positive and negative effects of artificial intelligence. According to Bloomberg Business, Professor Max Tegmark of MIT circulated the letter in order to find common ground between signatories who consider super intelligent AI a significant existential risk, and signatories such as Professor Oren Etzioni, who believe the AI field was being "impugned" by a one-sided media focus on the alleged risks. The letter contends that:
The potential benefits (of AI) are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
One of the signatories, Professor Bart Selman of Cornell University, said the purpose is to get AI researchers and developers to pay more attention to AI safety. In addition, for policymakers and the general public, the letter is meant to be informative but not alarmist. Another signatory, Professor Francesca Rossi, stated that "I think it's very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues".
Concerns raised by the letter
The signatories ask: How can engineers create AI systems that are beneficial to society, and that are robust? Humans need to remain in control of AI; our AI systems must "do what we want them to do". The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science, such as computer security and formal verification. Challenges that arise are divided into verification ("Did I build the system right?"), validity ("Did I build the right system?"), security, and control ("OK, I built the system wrong, can I fix it?").
Short-term concerns
Some near-term concerns relate to autonomous vehicles, ranging from civilian drones to self-driving cars. For example, a self-driving car may, in an emergency, have to decide between a small risk of a major accident and a large probability of a small accident. Other concerns relate to lethal intelligent autonomous weapons: Should they be banned? If so, how should 'autonomy' be precisely defined? If not, how should culpability for any misuse or malfunction be apportioned?
Other issues include privacy concerns as AI becomes increasingly able to interpret large surveillance datasets, and how to best manage the economic impact of jobs displaced by AI.
Long-term concerns
The document closes by echoing Microsoft research director Eric Horvitz's concerns that:
we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? ... What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an "intelligence explosion"?
Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this; therefore more research is necessary to find and validate a robust solution to the "control problem".
Signatories
Signatories include physicist Stephen Hawking, business magnate Elon Musk, the entrepreneurs behind DeepMind and Vicarious, Google's director of research Peter Norvig, Professor Stuart J. Russell of the University of California, Berkeley, and other AI experts, robot makers, programmers, and ethicists. The original signatory count was over 150 people, including academics from Cambridge, Oxford, Stanford, Harvard, and MIT.
Notes
External links
Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter
Open letters
Computing and society
Existential risk from artificial general intelligence | Open letter on artificial intelligence (2015) | [
"Technology"
] | 1,074 | [
"Computing and society",
"Existential risk from artificial general intelligence"
] |
43,362,985 | https://en.wikipedia.org/wiki/Perforene | In March 2013, Lockheed Martin announced that it was developing a family of membranes made from graphene, under the trademark Perforene.
The most promising application is seawater desalination. With holes as small as one nanometer in diameter, the membranes would trap sodium, chlorine and other ions, while allowing water molecules to pass through easily. Performance expectations (relative to the use of reverse osmosis membranes) include:
Up to 5x increase in flux across the membrane
Fouling reduction of up to 80%
Approximately 100x less energy and pressure required. (This claim was reported by Reuters. Lockheed Martin's current product datasheet predicts a more modest reduction in energy consumption: 10 - 20%.)
In addition to the desalination industry, Lockheed Martin plans to market Perforene variants in the following fields:
Waste water treatment
Pharmaceutical material harvest and purification
Energy/power generation
Mining
Food and beverage
Manufacturing
The product was not expected to be released until 2020.
Media reaction
Bruce Sterling commented for Wired, "if this graphene vaporware actually worked out in practice, we’d have to forgive Lockheed Martin for everything else they’ve ever done — plus maybe even give them Nobels and McMansion palaces in former deserts."
The Water Desalination Report evaluated Lockheed Martin's claims that it had developed a membrane that will desalinate water “at a fraction of the cost of industry-standard RO systems” as "ridiculous and very premature."
References
External links
Descaling Systems & Water Softeners
Environmental issues with water
Filters
Fresh water
Lockheed Martin
Water supply
Water treatment
Water desalination
Water technology
Membrane technology
Graphene | Perforene | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 336 | [
"Hydrology",
"Water desalination",
"Separation processes",
"Water treatment",
"Chemical equipment",
"Fresh water",
"Filters",
"Water pollution",
"Membrane technology",
"Filtration",
"Environmental engineering",
"Water technology",
"Water supply"
] |
43,364,518 | https://en.wikipedia.org/wiki/Isentropic%20analysis | In meteorology, isentropic analysis is a technique used to find the vertical and horizontal motion of airmasses during an adiabatic (i.e. non-heat-exchanging) process above the planetary boundary layer. The change of state of air parcels following isentropic surfaces does not involve exchange of heat with the environment. Such an analysis can also evaluate the airmass stability in the vertical dimension and whether an air parcel crossing such a surface will result in convective or stratiform clouds. It is based on the study of weather maps or vertical cross-sections of the potential temperature values in the troposphere.
On a synoptic scale, isentropic analysis is associated with weather fronts: warm fronts are found where the wind crosses lines of a chosen potential temperature from lower heights to higher ones, while cold fronts are found where the wind crosses them toward lower heights. Synoptic clouds and precipitation can thus be located better with these areas of advection than with conventional isobaric maps. From a mesoscale point of view, an air parcel moving vertically will cross isolines of potential temperature; it will be unstable if the value of those lines decreases with altitude, or stable if it increases.
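Potential temperature itself is given by Poisson's equation, θ = T (p₀/p)^(R/cₚ). A minimal sketch in Python using dry-air constants, which also applies the stability criterion from the paragraph above (the sounding values are illustrative):

```python
def potential_temperature(temp_k, pressure_hpa, p0_hpa=1000.0):
    """Poisson's equation for dry air: theta = T * (p0 / p)**(R_d / c_p)."""
    R_OVER_CP = 287.0 / 1004.0  # R_d / c_p for dry air, about 0.286
    return temp_k * (p0_hpa / pressure_hpa) ** R_OVER_CP

# Two levels of an illustrative sounding: theta rising with height -> stable
theta_850 = potential_temperature(temp_k=288.0, pressure_hpa=850.0)  # ~301.7 K
theta_700 = potential_temperature(temp_k=280.0, pressure_hpa=700.0)  # ~310.1 K
print("stable" if theta_700 > theta_850 else "unstable")
```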
References
Atmospheric thermodynamics
Weather prediction | Isentropic analysis | [
"Physics"
] | 262 | [
"Weather",
"Weather prediction",
"Physical phenomena"
] |
43,364,546 | https://en.wikipedia.org/wiki/Satellite%20delay | Satellite delay is the noticeable latency, due to the finite speed of light, when sending data to and from satellites, especially distant geosynchronous satellites. Bouncing a signal off a geosynchronous satellite takes about a quarter of a second, which is enough to be noticeable, and relaying data between two or three such satellites increases the delay further.
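The quarter-second figure follows directly from the geometry. A minimal sketch in Python using the nominal geostationary altitude of about 35,786 km (treating each hop as straight up and down, which slightly underestimates real slant paths):

```python
SPEED_OF_LIGHT_KM_S = 299_792.458
GEO_ALTITUDE_KM = 35_786  # nominal geostationary altitude above the equator

def bounce_delay_s(hops=1):
    """Propagation delay for bouncing a signal off geosynchronous satellites:
    each hop is one leg up and one leg down."""
    return hops * 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S

print(f"single bounce: {bounce_delay_s(1):.3f} s")  # about 0.239 s
print(f"two-hop relay: {bounce_delay_s(2):.3f} s")  # about 0.477 s
```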
See also
Geosynchronous satellite
References
Engineering concepts | Satellite delay | [
"Engineering"
] | 85 | [
"nan"
] |
32,037,577 | https://en.wikipedia.org/wiki/WH2%20motif |
Function
The WH2 motif or WH2 domain is an evolutionarily conserved sequence motif contained in proteins. It is found in WASP proteins which control actin polymerisation, therefore, WH2 is important in cellular processes such as cell contractility, cell motility, cell trafficking and cell signalling.
Motif
The WH2 motif (for Wiskott–Aldrich syndrome homology region 2) has been shown, in WAS and in its mammalian homologue Scar1/WASF1, to mediate interaction with actin.
The WH2 (WASP-Homology 2, or Wiskott–Aldrich homology 2) domain is an actin-binding motif of about 18 amino acids. This domain was first recognized as an essential element for the regulation of the cytoskeleton by the mammalian Wiskott–Aldrich syndrome protein (WASP) family. WH2 proteins occur in eukaryotes from yeast to mammals, in insect viruses, and in some bacteria. The WH2 domain is found as a modular part of larger proteins; it can be associated with the WH1 or EVH1 domain and with the CRIB domain, and it can occur as a tandem repeat. The WH2 domain binds to actin monomers and can facilitate the assembly of actin monomers into actin filaments.
Examples
Human genes encoding proteins containing the WH2 motif include:
COBL, COBLL1, ESPN, INF2, JMY
LMOD1, LMOD2, LMOD3
MTSS1, PXK
WAS, WASF1, WASF2, WASF3, WASF4, WASL, WASPIP, WHDC1, WIPF1, WIPF2
References
Protein domains
Protein families
Membrane proteins | WH2 motif | [
"Biology"
] | 377 | [
"Protein families",
"Protein domains",
"Protein classification",
"Membrane proteins"
] |
32,038,866 | https://en.wikipedia.org/wiki/FASTRAD | FASTRAD is a tool dedicated to the calculation of radiation effects (Dose and Displacement Damage) on electronics. The software has uses in high energy physics and nuclear experiments, medical areas, and accelerator and space physics studies, though it is primarily used in the design of satellites.
History
FASTRAD is a radiation tool dedicated to the analysis and design of radiation sensitive systems. The project was created in 1999, five years after the creation of the product's parent company TRAD, and has been under active development since.
Over time, the radiation hardness that satellite manufacturers have been able to offer has greatly increased. Both the optimization of space systems in terms of the power/mass ratio and the miniaturization of electronic devices tend to increase the sensitivity of those systems to the space radiation environment. To mitigate the impact on the radiation hardness assurance process, the first solution is to replace rough shielding analysis with an accurate estimate of the real radiation constraint on the system. Historically, FASTRAD has served this need in the industry.
The main goal of the software is to reduce the margins stemming from a conservative approach to radiation analysis, while reducing the cycle time of mechanical design changes for shielding optimization. In some cases, it can be used to justify the use of non-rad-hard parts and save cost and schedule for space program equipment.
For space applications, the software is capable of simulating the entire satellite system.
Radiation CAD interface
The main CAD capabilities of the tool are:
Creation of multiple simple primitives
Insertion of complex 3D geometries coming from STEP or IGES format files
Standard modelling tool set (clipping plane, 2D projection, measurement tool, colors, view shot,...)
The core of the software is the radiation 3D modeler. The goal of the engine is to make a realistic model of any mechanical design. The main section of the interface is the display window, where the user can manipulate their design.
The 3D solids can be defined either by using the component toolbar or by importing them from other 3rd party software (CATIA, Pro/Engineer...) with the standard STEP or IGES format. The Open Cascade library included in FASTRAD provides advanced visualization capabilities like cut operations, complex shape management, and STEP and IGES exchange format modules. The advanced STEP module allows you to import the hierarchy, name and color information. The full 3D designer model is then managed by FASTRAD (visualization, radiation calculation, post-processing).
Material properties are one of the most important parts of a radiation simulation. The interface allows the user to set the material properties of each solid in the 3D model, such as the density and the mass ratio of each element of the (compound) material, by specifying its chemical composition (see Fig. 1.). The list of predefined materials can be extended by the user.
Simulated radiation detectors can be placed on the 3D model. In this way, radiation effects can be estimated at any point of the 3D model, either with a Monte Carlo algorithm for a fine calculation of the energy deposited by particle-matter interaction (see "Dose calculation and shielding" below), or with a ray-tracing approach.
Several more features (local frame display, interactive measurement tool, context menus,...) are included in the interface.
Dose calculation and shielding
Once the radiation model is completed, the user can perform a deposited dose estimate using the sector analysis module of the software. This ray-tracing module combines the information coming from the radiation model with the radiation environment through a dose depth curve, which gives the deposited dose in a target material (mainly silicon for electronic devices) behind a spherical aluminum shield of a given thickness. This calculation is performed for each detector placed in the 3D model. Even for complex geometries, the calculation provides two kinds of information (a minimal sketch of the method follows the list below):
the 3D mass distribution around each detector
the estimated deposited dose in an isotropic radiation environment
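The essence of sector analysis can be captured in a short sketch: split the solid angle around a detector into sectors, obtain the equivalent aluminum thickness seen through each sector (in the real tool this comes from ray tracing through the 3D model), interpolate the dose depth curve at that thickness, and sum the sector doses weighted by their share of the full sphere. The curve values and sector data below are invented for illustration; this is the general method, not FASTRAD's implementation.

```python
import numpy as np

# Hypothetical dose depth curve: dose in silicon (rad) behind a spherical
# aluminum shield of the given thickness (mm), for an isotropic environment.
shield_mm = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
dose_rad = np.array([1e6, 3e5, 8e4, 2e4, 4e3, 6e2])

def sector_dose(thicknesses_mm, solid_angles_sr):
    """Interpolate the dose depth curve at each sector's equivalent aluminum
    thickness and weight each sector by its fraction of the 4*pi sphere."""
    doses = np.interp(thicknesses_mm, shield_mm, dose_rad)
    weights = np.asarray(solid_angles_sr) / (4.0 * np.pi)
    return float(np.sum(doses * weights))

# Four equal sectors covering the sphere; thicknesses would come from ray tracing.
print(f"{sector_dose([1.2, 3.5, 0.8, 6.0], [np.pi] * 4):.3g} rad")
```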
Through post-processing of those results, FASTRAD provides information about optimum shielding locations using several representation types, for example a mapping of the mass distribution seen by one component of an electronic board, where red areas indicate the critical directions in terms of shielding thickness.
The user is able to optimize the size of additional shielding that can be used to decrease the received dose on the studied detector.
The main advantage of this process is the short time needed to complete this task and the well defined mechanical shielding solution provided by the sector analysis post-processing.
Monte Carlo algorithm
The dose calculation in the software uses a Monte Carlo module (developed through a partnership with CNES). This algorithm can be used either in a forward process or a reverse one. In the forward case, the software manages the transport of electrons and photons (including secondary particles) from 1 keV to 10 MeV through the 3D model. The creation of secondary photons and electrons is taken into account. Any type of energy spectrum and source geometry can be defined. Sensitive volumes (SV) are selected by the user and FASTRAD computes the deposited energy inside the SVs. In a complex, multi-scale geometry the forward algorithm can lead to large computation times; the reverse Monte Carlo module is dedicated to the dose calculation due to an isotropic irradiation of electrons in such geometries. The principle of the reverse method is to use:
A forward particle tracking method in the vicinity of the SV
A backward particle tracking method from the SV to the external source.
The Reverse Monte Carlo method for electron transport takes into account the energy deposition due to primary electrons and secondary photons.
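To make the forward process concrete, the toy one-dimensional sketch below tracks electrons through a slab using a constant, made-up stopping power (a crude continuous slowing-down approximation) and tallies the energy deposited inside a sensitive volume. It ignores photons, scattering and secondary particles entirely, so it illustrates only the tallying principle, not FASTRAD's actual transport engine.

```python
import random

def forward_mc(n_particles=10_000,
               e_min=0.01, e_max=5.0,     # flat toy source spectrum (MeV)
               sv_start=2.0, sv_end=3.0,  # sensitive volume slab (mm)
               stopping_power=0.4):       # made-up value (MeV per mm)
    """Return the mean energy (MeV) deposited in the sensitive volume
    per source electron, tracked step by step through a 1D geometry."""
    deposited = 0.0
    step = 0.05  # tracking step length (mm)
    for _ in range(n_particles):
        energy = random.uniform(e_min, e_max)
        depth = 0.0
        while energy > 0.0:
            de = min(energy, stopping_power * step)  # energy lost this step
            if sv_start <= depth < sv_end:
                deposited += de                      # tally inside the SV
            energy -= de
            depth += step
    return deposited / n_particles

print(f"mean deposit per particle: {forward_mc():.4f} MeV")
```

The reverse approach inverts this costly outside-in tracking: trajectories are grown from the sensitive volume back towards the external source, so computational effort is spent only on histories that actually matter for the detector.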
The Monte Carlo module was successfully verified through a comparison with GEANT4 results for the forward algorithm and with US Format for the reverse method. One example is the case of a piece of electronic equipment in a satellite structure. The radiation environment corresponds to the electron energy spectrum of a geostationary mission (from 10 keV up to 5 MeV).
Interface to Geant4
Geant4 is a particle-matter interaction toolkit maintained by a worldwide collaboration of scientists and software engineers. This C++ library contains a wide range of interaction cross section data and models together with a tracking engine of particles through a 3D geometry.
The Geant4 interface implemented in the FASTRAD software provides a tool able to create the 3D geometry, define the particle source, set the physics list and generate all the resulting source files as a ready-to-compile Geant4 project. The tool is useful both for engineers who are new to Geant4, who can use FASTRAD as a tutorial tool, and for experts who do not want to spend time creating the C++ files that describe the geometry, materials and basic physics, and who can use the Geant4 project created by FASTRAD as a base to be enhanced with features specific to their physical application. The Geant4 interface opens the software to a wide range of radiation-related fields, as Geant4 is already used for space, medical, nuclear, aeronautical and military applications. Its radiation CAD capabilities facilitate the engineering process for any radiation-sensitive system analysis.
Technical specifications
FASTRAD was developed in C++, with OpenGL to manage the 3D display and the Open Cascade library for STEP import and Boolean operations. It has been tested under Mac and Linux using OS emulators (PowerPC, VMware, ...).
Computer requirements: Windows Vista/XP/NT/2000 - 512 MB RAM - 50 MB HDD.
See also
NOVICE (EMPC)
Geant4 - GEometry ANd Tracking
IGES - Initial Graphics Exchange Specification
CATIA - Computer Aided Three-dimensional Interactive Application
Latchup
RayXpert - 3D modelling software that calculates the gamma dose rate by Monte Carlo
External links
FASTRAD is distributed by TRAD. Official TRAD website
Official FASTRAD website
References
Physics software
Nuclear physics
Radiation effects | FASTRAD | [
"Physics",
"Materials_science",
"Engineering"
] | 1,603 | [
"Physical phenomena",
"Materials science",
"Computational physics",
"Radiation",
"Condensed matter physics",
"Nuclear physics",
"Radiation effects",
"Physics software"
] |
32,040,137 | https://en.wikipedia.org/wiki/Technological%20unemployment | Technological unemployment is the loss of jobs caused by technological change. It is a key type of structural unemployment. Technological change typically includes the introduction of labour-saving "mechanical-muscle" machines or more efficient "mechanical-mind" processes (automation), which minimize humans' role in these processes. Just as horses were gradually made obsolete as transport by the automobile and as labourers by the tractor, humans' jobs have also been affected throughout modern history. Historical examples include artisan weavers reduced to poverty after the introduction of mechanized looms. Thousands of man-years of work were performed in a matter of hours by the bombe codebreaking machine during World War II. A contemporary example of technological unemployment is the displacement of retail cashiers by self-service tills and cashierless stores.
That technological change can cause short-term job losses is widely accepted. The view that it can lead to lasting increases in unemployment has long been controversial. Participants in the technological unemployment debates can be broadly divided into optimists and pessimists. Optimists agree that innovation may be disruptive to jobs in the short term, yet hold that various compensation effects ensure there is never a long-term negative impact on jobs, whereas pessimists contend that at least in some circumstances, new technologies can lead to a lasting decline in the total number of workers in employment. The phrase "technological unemployment" was popularised by John Maynard Keynes in the 1930s, who said it was "only a temporary phase of maladjustment". The issue of machines displacing human labour has been discussed since at least Aristotle's time.
Prior to the 18th century, both the elite and common people would generally take the pessimistic view on technological unemployment, at least in cases where the issue arose. Due to generally low unemployment in much of pre-modern history, the topic was rarely a prominent concern. In the 18th century fears over the impact of machinery on jobs intensified with the growth of mass unemployment, especially in Great Britain which was then at the forefront of the Industrial Revolution. Yet some economic thinkers began to argue against these fears, claiming that overall innovation would not have negative effects on jobs. These arguments were formalised in the early 19th century by the classical economists. During the second half of the 19th century, it became increasingly apparent that technological progress was benefiting all sections of society, including the working class. Concerns over the negative impact of innovation diminished. The term "Luddite fallacy" was coined to describe the thinking that innovation would have lasting harmful effects on employment.
The view that technology is unlikely to lead to long-term unemployment has been repeatedly challenged by a minority of economists. In the early 1800s these included David Ricardo himself. There were dozens of economists warning about technological unemployment during brief intensifications of the debate that spiked in the 1930s and 1960s. Especially in Europe, there were further warnings in the closing two decades of the twentieth century, as commentators noted an enduring rise in unemployment suffered by many industrialised nations since the 1970s. Yet a clear majority of both professional economists and the interested general public held the optimistic view through most of the 20th century.
In the second decade of the 21st century, a number of studies have been released suggesting that technological unemployment may increase worldwide. Oxford Professors Carl Benedikt Frey and Michael Osborne, for example, have estimated that 47 percent of U.S. jobs are at risk of automation. However, their methodology has been challenged as lacking evidential foundation and criticised for implying that technology (rather than social policy) creates unemployment rather than redundancies. On the PBS NewsHour, the authors defended their findings and clarified that they do not necessarily imply future technological unemployment. While many economists and commentators still argue such fears are unfounded, as was widely accepted for most of the previous two centuries, concern over technological unemployment is growing once again. A report in Wired in 2017 quotes knowledgeable people such as economist Gene Sperling and management professor Andrew McAfee on the idea that handling existing and impending job loss to automation is a "significant issue". Recent technological innovations have the potential to displace humans in the professional, white-collar, low-skilled, creative fields, and other "mental jobs". The World Bank's World Development Report 2019 argues that while automation displaces workers, technological innovation creates more new industries and jobs on balance.
History
Classical era
According to author Gregory Woirol, the phenomenon of technological unemployment is likely to have existed since at least the invention of the wheel. Ancient societies had various methods for relieving the poverty of those unable to support themselves with their own labour. Ancient China and ancient Egypt may have had various centrally run relief programmes in response to technological unemployment dating back to at least the second millennium BC. Ancient Hebrews and adherents of the ancient Vedic religion had decentralised responses where aiding the poor was encouraged by their faiths. In ancient Greece, large numbers of free labourers could find themselves unemployed due to both the effects of ancient labour saving technology and to competition from slaves ("machines of flesh and blood"). Sometimes, these unemployed workers would starve to death or were forced into slavery themselves, although in other cases they were supported by handouts. Pericles responded to perceived technological unemployment by launching public works programmes to provide paid work to the jobless. Critics attacked Pericles' programmes as a waste of public money, but were defeated.
Perhaps the earliest example of a scholar discussing the phenomenon of technological unemployment occurs with Aristotle, who speculated in Book One of Politics that if machines could become sufficiently advanced, there would be no more need for human labour. Similar to the Greeks, ancient Romans responded to the problem of technological unemployment by relieving poverty with handouts (such as the grain dole). Several hundred thousand families were sometimes supported like this at once. Less often, jobs were directly created with public works programmes, such as those launched by the Gracchi. Various emperors even went as far as to refuse or ban labour saving innovations. In one instance, the introduction of a labor-saving invention was blocked, when Emperor Vespasian refused to allow a new method of low-cost transportation of heavy goods, saying "You must allow my poor hauliers to earn their bread." Labour shortages began to develop in the Roman empire towards the end of the second century AD, and from this point mass unemployment in Europe appears to have largely receded for over a millennium.
Post-classical era
The medieval and early renaissance period saw the widespread adoption of newly invented technologies, as well as older ones which had been conceived yet barely used in the Classical era. Some were invented in Europe while others were invented in more Eastern countries like China, India, Arabia and Persia. The Black Death left fewer workers across Europe. Mass unemployment began to reappear in Europe, especially in Western, Central and Southern Europe in the 15th century, partly as a result of population growth, and partly due to changes in the availability of land for subsistence farming caused by early enclosures. As a result of the threat of unemployment, there was less tolerance for disruptive new technologies. European authorities would often side with groups representing subsections of the working population, such as Guilds, banning new technologies and sometimes even executing those who tried to promote or trade in them.
16th to 18th century
In Great Britain, the ruling elite began to take a less restrictive approach to innovation somewhat earlier than in much of continental Europe, which has been cited as a possible reason for Britain's early lead in driving the Industrial Revolution. Yet concern over the impact of innovation on employment remained strong through the 16th and early 17th century. A famous example of new technology being refused occurred when the inventor William Lee invited Queen Elizabeth I to view a labour saving knitting machine. The Queen declined to issue a patent on the grounds that the technology might cause unemployment among textile workers. After moving to France and also failing to achieve success in promoting his invention, Lee returned to England but was again refused by Elizabeth's successor James I for the same reason.
After the Glorious Revolution, authorities became less sympathetic to workers' concerns about losing their jobs due to innovation. An increasingly influential strand of Mercantilist thought held that introducing labour saving technology would actually reduce unemployment, as it would allow British firms to increase their market share against foreign competition. From the early 18th century workers could no longer rely on support from the authorities against the perceived threat of technological unemployment. They would sometimes take direct action, such as machine breaking, in attempts to protect themselves from disruptive innovation. Joseph Schumpeter notes that as the 18th century progressed, thinkers would raise the alarm about technological unemployment with increasing frequency, with von Justi being a prominent example. Yet Schumpeter also notes that the prevailing view among the elite solidified on the position that technological unemployment would not be a long-term problem.
19th century
It was only in the 19th century that debates over technological unemployment became intense, especially in Great Britain where many economic thinkers of the time were concentrated. Building on the work of Dean Tucker and Adam Smith, political economists began to create what would become the modern discipline of economics. While rejecting much of mercantilism, members of the new discipline largely agreed that technological unemployment would not be an enduring problem. In the first few decades of the 19th century, several prominent political economists did, however, argue against the optimistic view, claiming that innovation could cause long-term unemployment. These included Sismondi, Malthus, J S Mill, and from 1821, David Ricardo himself. As Ricardo was arguably the most respected political economist of his age, his view posed a challenge to others in the discipline. The first major economist to respond was Jean-Baptiste Say, who argued that no one would introduce machinery if they were going to reduce the amount of product, and that as Say's law states that supply creates its own demand, any displaced workers would automatically find work elsewhere once the market had had time to adjust. Ramsay McCulloch expanded and formalised Say's optimistic views on technological unemployment, and was supported by others such as Charles Babbage, Nassau Senior and many other lesser known political economists. Towards the middle of the 19th century, Karl Marx joined the debates. Building on the work of Ricardo and Mill, Marx went much further, presenting a deeply pessimistic view of technological unemployment; his views attracted many followers and founded an enduring school of thought but mainstream economics was not dramatically changed. By the 1870s, at least in Great Britain, technological unemployment faded both as a popular concern and as an issue for academic debate. It had become increasingly apparent that innovation was increasing prosperity for all sections of British society, including the working class. As the classical school of thought gave way to neoclassical economics, mainstream thinking was tightened to take into account and refute the pessimistic arguments of Mill and Ricardo.
20th century
For the first two decades of the 20th century, mass unemployment was not the major problem it had been in the first half of the 19th century. While the Marxist school and a few other thinkers continued to challenge the optimistic view, technological unemployment was not a significant concern for mainstream economic thinking until the mid to late 1920s. In the 1920s mass unemployment re-emerged as a pressing issue within Europe. At this time the U.S. was generally more prosperous, but even there urban unemployment had begun to increase from 1927. Rural American workers had been suffering job losses from the start of the 1920s; many had been displaced by improved agricultural technology, such as the tractor. The centre of gravity for economic debates had by this time moved from Great Britain to the United States, and it was here that the 20th century's two great periods of debate over technological unemployment largely occurred.
The peak periods for the two debates were in the 1930s and the 1960s. According to economic historian Gregory R Woirol, the two episodes share several similarities. In both cases academic debates were preceded by an outbreak of popular concern, sparked by recent rises in unemployment. In both cases the debates were not conclusively settled, but faded away as unemployment was reduced by an outbreak of war – World War II for the debate of the 1930s, and the Vietnam War for the 1960s episode. In both cases, the debates were conducted within the prevailing paradigm at the time, with little reference to earlier thought. In the 1930s, optimists based their arguments largely on neo-classical beliefs in the self-correcting power of markets to reduce any short-term unemployment via compensation effects. In the 1960s, belief in compensation effects was less strong, but the mainstream Keynesian economists of the time largely believed government intervention would be able to counter any persistent technological unemployment that was not cleared by market forces. Another similarity was the publication of a major Federal study towards the end of each episode, which broadly found that long-term technological unemployment was not occurring (though the studies did agree innovation was a major factor in the short term displacement of workers, and advised government action to provide assistance).
As the golden age of capitalism came to a close in the 1970s, unemployment once again rose, and this time generally remained relatively high for the rest of the century, across most advanced economies. Several economists once again argued that this may be due to innovation, with perhaps the most prominent being Paul Samuelson. Overall, the closing decades of the 20th century saw most concern expressed over technological unemployment in Europe, though there were several examples in the U.S. A number of popular works warning of technological unemployment were also published. These included James S. Albus's 1976 book titled Peoples' Capitalism: The Economics of the Robot Revolution; David F. Noble with works published in 1984 and 1993; Jeremy Rifkin and his 1995 book The End of Work; and the 1996 book The Global Trap. Yet for the most part, other than during the periods of intense debate in the 1930s and 60s, the consensus in the 20th century among both professional economists and the general public remained that technology does not cause long-term joblessness.
21st century
Opinions
The general consensus that innovation does not cause long-term unemployment held strong for the first decade of the 21st century although it continued to be challenged by a number of academic works, and by popular works such as Marshall Brain's Robotic Nation and Martin Ford's The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future.
Since the publication of their 2011 book Race Against the Machine, MIT professors Andrew McAfee and Erik Brynjolfsson have been prominent among those raising concern about technological unemployment. The two professors remain relatively optimistic, however, stating "the key to winning the race is not to compete against machines but to compete with machines".
Concern about technological unemployment grew in 2013 due in part to a number of studies predicting substantially increased technological unemployment in forthcoming decades and empirical evidence that, in certain sectors, employment is falling worldwide despite rising output, thus discounting globalization and offshoring as the only causes of increasing unemployment.
In 2013, professor Nick Bloom of Stanford University stated there had recently been a major change of heart concerning technological unemployment among his fellow economists. In 2014 the Financial Times reported that the impact of innovation on jobs has been a dominant theme in recent economic discussion. According to the academic and former politician Michael Ignatieff writing in 2014, questions concerning the effects of technological change have been "haunting democratic politics everywhere". Concerns have included evidence showing worldwide falls in employment across sectors such as manufacturing; falls in pay for low and medium skilled workers stretching back several decades even as productivity continues to rise; the increase in often precarious platform mediated employment; and the occurrence of "jobless recoveries" after recent recessions. The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and even low level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.
Former U.S. Treasury Secretary and Harvard economics professor Lawrence Summers stated in 2014 that he no longer believed automation would always create new jobs and that "This isn't some hypothetical future possibility. This is something that's emerging before us right now." Summers noted that already, more labor sectors were losing jobs than creating new ones. While himself doubtful about technological unemployment, professor Mark MacCarthy stated in the fall of 2014 that it is now the "prevailing opinion" that the era of technological unemployment has arrived.
At the 2014 Davos meeting, Thomas Friedman reported that the link between technology and unemployment seemed to have been the dominant theme of that year's discussions. A survey at Davos 2014 found that 80% of 147 respondents agreed that technology was driving jobless growth. At the 2015 Davos, Gillian Tett found that almost all delegates attending a discussion on inequality and technology expected an increase in inequality over the next five years, and gives the reason for this as the technological displacement of jobs. 2015 saw Martin Ford win the Financial Times and McKinsey Business Book of the Year Award for his Rise of the Robots: Technology and the Threat of a Jobless Future, and saw the first world summit on technological unemployment, held in New York. In late 2015, further warnings of potential worsening for technological unemployment came from Andy Haldane, the Bank of England's chief economist, and from Ignazio Visco, the governor of the Bank of Italy. In an October 2016 interview, US President Barack Obama said that due to the growth of artificial intelligence, society would be debating "unconditional free money for everyone" within 10 to 20 years. In 2019, computer scientist and artificial intelligence expert Stuart J. Russell stated that "in the long run nearly all current jobs will go away, so we need fairly radical policy changes to prepare for a very different future economy." In a book he authored, Russell claims that "One rapidly emerging picture is that of an economy where far fewer people work because work is unnecessary." However, he predicted that employment in healthcare, home care, and construction would increase.
Other economists have argued that long-term technological unemployment is unlikely. In 2014, Pew Research canvassed 1,896 technology professionals and economists and found a split of opinion: 48% of respondents believed that new technologies would displace more jobs than they would create by the year 2025, while 52% maintained that they would not. Economics professor Bruce Chapman from Australian National University has advised that studies such as Frey and Osborne's tend to overstate the probability of future job losses, as they don't account for new employment likely to be created, due to technology, in what are currently unknown areas. Moreover, small and mid-sized businesses have created a large number of new jobs around the world where entrepreneurs and investors have the freedom to create and grow businesses, which matters all the more as new technologies emerge every day. These new businesses require large numbers of workers, which can improve the overall employment situation by replacing jobs that were previously lost.
General public surveys have often found an expectation that automation would impact jobs widely, but not the jobs held by those particular people surveyed.
Studies
A number of studies have predicted that automation will take a large proportion of jobs in the future, but estimates of the level of unemployment this will cause vary. Research by Carl Benedikt Frey and Michael Osborne of the Oxford Martin School showed that employees engaged in "tasks following well-defined procedures that can easily be performed by sophisticated algorithms" are at risk of displacement. The study, published in 2013, shows that automation can affect both skilled and unskilled work and both high and low-paying occupations; however, low-paid physical occupations are most at risk. It estimated that 47% of US jobs were at high risk of automation. In 2014, the economic think tank Bruegel released a study, based on the Frey and Osborne approach, claiming that across the European Union's 28 member states, 54% of jobs were at risk of automation. The countries where jobs were least vulnerable to automation were Sweden, with 46.69% of jobs vulnerable, the UK at 47.17%, the Netherlands at 49.50%, and France and Denmark, both at 49.54%. The countries where jobs were found to be most vulnerable were Romania at 61.93%, Portugal at 58.94%, Croatia at 57.9%, and Bulgaria at 56.56%. A 2015 report by the Taub Center found that 41% of jobs in Israel were at risk of being automated within the next two decades. In January 2016, a joint study by the Oxford Martin School and Citibank, based on previous studies on automation and data from the World Bank, found that the risk of automation in developing countries was much higher than in developed countries. It found that 77% of jobs in China, 69% of jobs in India, 85% of jobs in Ethiopia, and 55% of jobs in Uzbekistan were at risk of automation. The World Bank similarly employed the methodology of Frey and Osborne. A 2016 study by the International Labour Organization found 74% of salaried electrical & electronics industry positions in Thailand, 75% of salaried electrical & electronics industry positions in Vietnam, 63% of salaried electrical & electronics industry positions in Indonesia, and 81% of salaried electrical & electronics industry positions in the Philippines were at high risk of automation. A 2016 United Nations report stated that 75% of jobs in the developing world were at risk of automation, and predicted that more jobs might be lost when corporations stop outsourcing to developing countries after automation in industrialized countries makes it less lucrative to outsource to countries with lower labor costs.
The Council of Economic Advisers, a US government agency tasked with providing economic research for the White House, in the 2016 Economic Report of the President, used the data from the Frey and Osborne study to estimate that 83% of jobs with an hourly wage below $20, 31% of jobs with an hourly wage between $20 and $40, and 4% of jobs with an hourly wage above $40 were at risk of automation. A 2016 study by Ryerson University (now Toronto Metropolitan University) found that 42% of jobs in Canada were at risk of automation, dividing them into two categories - "high risk" jobs and "low risk" jobs. High risk jobs were mainly lower-income jobs that required lower education levels than average. Low risk jobs were on average more skilled positions. The report found a 70% chance that high risk jobs and a 30% chance that low risk jobs would be affected by automation in the next 10–20 years. A 2017 study by PricewaterhouseCoopers found that up to 38% of jobs in the US, 35% of jobs in Germany, 30% of jobs in the UK, and 21% of jobs in Japan were at high risk of being automated by the early 2030s. A 2017 study by Ball State University found about half of American jobs were at risk of automation, many of them low-income jobs. A September 2017 report by McKinsey & Company found that as of 2015, 478 billion out of 749 billion working hours per year dedicated to manufacturing, or $2.7 trillion out of $5.1 trillion in labor, were already automatable. In low-skill areas, 82% of labor in apparel goods, 80% of agriculture processing, 76% of food manufacturing, and 60% of beverage manufacturing were subject to automation. In mid-skill areas, 72% of basic materials production and 70% of furniture manufacturing was automatable. In high-skill areas, 52% of aerospace and defense labor and 50% of advanced electronics labor could be automated. In October 2017, a survey of information technology decision makers in the US and UK found that a majority believed that most business processes could be automated by 2022. On average, they said that 59% of business processes were subject to automation.
A November 2017 report by the McKinsey Global Institute that analyzed around 800 occupations in 46 countries estimated that between 400 million and 800 million jobs could be lost due to robotic automation by 2030. It estimated that jobs were more at risk in developed countries than developing countries due to a greater availability of capital to invest in automation. Job losses and downward mobility blamed on automation have been cited as one of many factors in the resurgence of nationalist and protectionist politics in the US, UK and France, among other countries.
However, not all recent empirical studies have found evidence to support the idea that automation will cause widespread unemployment. A study released in 2015, examining the impact of industrial robots in 17 countries between 1993 and 2007, found no overall reduction in employment was caused by the robots, and that there was a slight increase in overall wages. According to a study published in McKinsey Quarterly in 2015 the impact of computerization in most cases is not replacement of employees but automation of portions of the tasks they perform. A 2016 OECD study found that among the 21 OECD countries surveyed, on average only 9% of jobs were in foreseeable danger of automation, but this varied greatly among countries: for example in South Korea the figure of at-risk jobs was 6% while in Austria it was 12%. In contrast to other studies, the OECD study does not base its assessment solely on the tasks that a job entails, but also includes demographic variables, including sex, education and age. It is not clear, however, why a job should be more or less automatable just because it is performed by a woman. In 2017, Forrester estimated that automation would result in a net loss of about 7% of jobs in the US by 2027, replacing 17% of jobs while creating new jobs equivalent to 10% of the workforce. Another study argued that the risk of US jobs to automation had been overestimated due to factors such as the heterogeneity of tasks within occupations and the adaptability of jobs being neglected. The study found that once this was taken into account, the number of occupations at risk to automation in the US drops, ceteris paribus, from 38% to 9%. A 2017 study on the effect of automation on Germany found no evidence that automation caused total job losses, but that it does affect the jobs people are employed in; losses in the industrial sector due to automation were offset by gains in the service sector. Manufacturing workers were also not at risk from automation and were in fact more likely to remain employed, though not necessarily doing the same tasks. However, automation did result in a decrease in labour's income share as it raised productivity but not wages.
A 2018 Brookings Institution study that analyzed 28 industries in 18 OECD countries from 1970 to 2018 found that automation was responsible for holding down wages. Although it concluded that automation did not reduce the overall number of jobs available and even increased them, it found that from the 1970s to the 2010s, it had reduced the share of human labor in the value added to the work, and thus had helped to slow wage growth. In April 2018, Adair Turner, former Chairman of the Financial Services Authority and head of the Institute for New Economic Thinking, stated that it would already be possible to automate 50% of jobs with current technology, and that it will be possible to automate all jobs by 2060.
Premature deindustrialization
Premature deindustrialization occurs when developing nations deindustrialize without first becoming rich, as happened with the advanced economies. The concept was popularized by Dani Rodrik in 2013, who went on to publish several papers showing the growing empirical evidence for the phenomenon. Premature deindustrialization adds to concern over technological unemployment for developing countries, as the traditional compensation effects that advanced economy workers enjoyed, such as being able to get well paid work in the service sector after losing their factory jobs, may not be available.
Some commentators, such as Carl Benedikt Frey, argue that with the right responses, the negative effects of further automation on workers in developing economies can still be avoided.
Artificial intelligence
Since about 2017, a new wave of concern over technological unemployment has become prominent, this time over the effects of artificial intelligence (AI).
Commentators including Calum Chace and Daniel Hulme have warned that if unchecked, AI threatens to cause an "economic singularity", with job churn too rapid for humans to adapt to, leading to widespread technological unemployment. However, they also advise that with the right responses by business leaders, policy makers and society, the impact of AI could be a net positive for workers.
Morgan R. Frank et al. caution that there are several barriers preventing researchers from making accurate predictions of the effects AI will have on future job markets. Marian Krakovsky has argued that the jobs most likely to be completely replaced by AI are in middle-class areas, such as professional services. Often, the practical solution is to find another job, but workers may not have the qualifications for high-level jobs and so must drop to lower level jobs. However, Krakovsky (2018) predicts that AI will largely take the route of "complementing people" rather than "replicating people", suggesting that the goal of those implementing AI is to improve the lives of workers, not to replace them. Studies have also shown that rather than solely destroying jobs, AI can also create work, albeit low-skill jobs to train AI in low-income countries.
Following Russian president Vladimir Putin's 2017 statement that whichever country first achieves mastery in AI "will become the ruler of the world", various national and supranational governments have announced AI strategies. Concerns on not falling behind in the AI arms race have been more prominent than worries over AI's potential to cause unemployment. Several strategies suggest that achieving a leading role in AI should help their citizens get more rewarding jobs. Finland has aimed to help the citizens of other EU nations acquire the skills they need to compete in the post-AI jobs market, making a free course on "The Elements of AI" available in multiple European languages.
Oracle CEO Mark Hurd predicted that AI "will actually create more jobs, not less jobs" as humans will be needed to manage AI systems.
Martin Ford argues that many jobs are routine, repetitive and (to an AI) predictable; Ford warns that these jobs may be automated in the next couple of decades, and that many of the new jobs may not be "accessible to people with average capability", even with retraining.
Certain digital technologies are predicted to result in more job losses than others. For example, in recent years, the adoption of modern robotics has led to net employment growth. However, many businesses anticipate that automation, or employing robots, would result in job losses in the future. This is especially true for companies in Central and Eastern Europe.
Other digital technologies, such as platforms or big data, are projected to have a more neutral impact on employment.
Issues within the debates
Long-term effects on employment
Participants in the technological unemployment debates agree that temporary job losses can result from technological innovation. Similarly, there is no dispute that innovation sometimes has positive effects on workers. Disagreement focuses on whether it is possible for innovation to have a lasting negative impact on overall employment. Levels of persistent unemployment can be quantified empirically, but the causes are subject to debate. Optimists accept short term unemployment may be caused by innovation, yet claim that after a while, compensation effects will always create at least as many jobs as were originally destroyed. While this optimistic view has been continually challenged, it was dominant among mainstream economists for most of the 19th and 20th centuries. For example, labor economists Jacob Mincer and Stephan Danninger developed an empirical study using data from the Panel Study of Income Dynamics, and found that although in the short run, technological progress seems to have unclear effects on aggregate unemployment, it reduces unemployment in the long run. When they include a 5-year lag, however, the evidence supporting a short-run employment effect of technology seems to disappear as well, suggesting that technological unemployment "appears to be a myth". Other studies, on the other hand, suggest that the labour-market effects of technologies such as industrial robots strongly depend on domestic institutional context.
The concept of structural unemployment, a lasting level of joblessness that does not disappear even at the high point of the business cycle, became popular in the 1960s. For pessimists, technological unemployment is one of the factors driving the wider phenomena of structural unemployment. Since the 1980s, even optimistic economists have increasingly accepted that structural unemployment has indeed risen in advanced economies, but they have tended to attribute this to globalisation and offshoring rather than technological change. Others claim a chief cause of the lasting increase in unemployment has been the reluctance of governments to pursue expansionary policies since the displacement of Keynesianism that occurred in the 1970s and early 80s. In the 21st century, and especially since 2013, pessimists have been arguing with increasing frequency that lasting worldwide technological unemployment is a growing threat.
Compensation effects
Compensation effects are labour-friendly consequences of innovation which "compensate" workers for job losses initially caused by new technology. In the 1820s, several compensation effects were described by Jean-Baptiste Say in response to Ricardo's statement that long-term technological unemployment could occur. Soon after, a whole system of effects was developed by Ramsay McCulloch. The system was labelled "compensation theory" by Karl Marx, who criticized its ideas, arguing that none of the effects were guaranteed to operate. Disagreement over the effectiveness of compensation effects has remained a central part of academic debates on technological unemployment ever since.
Compensation effects include:
By new machines. (The labour needed to build the new equipment that the innovation requires.)
By new investments. (Enabled by the cost savings and therefore increased profits from the new technology.)
By changes in wages. (In cases where unemployment does occur, this can cause a lowering of wages, thus allowing more workers to be re-employed at the now lower cost. On the other hand, sometimes workers will enjoy wage increases as their profitability rises. This leads to increased income and therefore increased spending, which in turn encourages job creation.)
By lower prices. (Which then lead to more demand, and therefore more employment.) Lower prices can also help offset wage cuts, as cheaper goods will increase workers' buying power.
By new products. (Where innovation directly creates new jobs.)
The "by new machines" effect is now rarely discussed by economists; it is often accepted that Marx successfully refuted it. Even pessimists often concede that product innovation associated with the "by new products" effect can sometimes have a positive effect on employment. An important distinction can be drawn between 'process' and 'product' innovations. Evidence from Latin America seems to suggest that product innovation significantly contributes to the employment growth at the firm level, more so than process innovation. The extent to which the other effects are successful in compensating the workforce for job losses has been extensively debated throughout the history of modern economics; the issue is still not resolved. One such effect that potentially complements the compensation effect is job multiplier. According to research developed by Enrico Moretti, with each additional skilled job created in high tech industries in a given city, more than two jobs are created in the non-tradable sector. His findings suggest that technological growth and the resulting job-creation in high-tech industries might have a more significant spillover effect than anticipated. Evidence from Europe also supports such a job multiplier effect, showing local high-tech jobs could create five additional low-tech jobs.
Many economists pessimistic about technological unemployment accept that compensation effects did largely operate as the optimists claimed through most of the 19th and 20th century. Yet they hold that the advent of computerisation means that compensation effects have become less effective. An early example of this argument was made by Wassily Leontief in 1983. He conceded that after some disruption, the advance of mechanization during the Industrial Revolution increased the demand for labour as well as increasing pay due to effects that flow from increased productivity. While early machines lowered the demand for muscle power, they were unintelligent and needed large numbers of human operators to remain productive. Yet since the introduction of computers into the workplace, there is now less need not just for muscle power but also for human brain power. Hence even as productivity continues to rise, the lower demand for human labour may mean less pay and employment.
Luddite fallacy
The term "Luddite fallacy" is sometimes used to express the view that those concerned about long-term technological unemployment are committing a fallacy, as they fail to account for compensation effects. People who use the term typically expect that technological progress will have no long-term impact on employment levels, and eventually will raise wages for all workers, because progress helps to increase the overall wealth of society. The term is originating on from the Luddites, members of an early 19th century English anti-textile-machinery organisation. During the 20th century and the first decade of the 21st century, the dominant view among economists has been that belief in long-term technological unemployment was indeed a fallacy. More recently, there has been increased support for the view that the benefits of automation are not equally distributed.
There are two different theories for why long-term difficulty could develop.
Traditionally ascribed to the Luddites (accurately or not), that there is a finite amount of work available and if machines do it, there can be none left for humans. Economists may call this the lump of labour fallacy, arguing that in reality no such limitation exists.
A long-term difficulty can arise that has nothing to do with any lump of labour. In this view, the amount of work that can exist is infinite, but
machines can do most of the "easy" work that requires less skill, talent, knowledge, or insight
the definition of what is "easy" expands as information technology progresses, and
the work that lies beyond "easy" may require greater brainpower than most people have.
This second view is supported by many modern advocates of the possibility of long-term, systemic technological unemployment.
Skill levels and technological unemployment
A frequent view among those discussing the effect of innovation on the labour market has been that it mainly hurts those with low skills, while often benefiting skilled workers. According to scholars such as Lawrence F. Katz, this may have been true for much of the twentieth century, yet in the 19th century, innovations in the workplace largely displaced costly skilled artisans, and generally benefited the low skilled. While 21st century innovation has been replacing some unskilled work, other low skilled occupations remain resistant to automation, while white collar work requiring intermediate skills is increasingly being performed by autonomous computer programs.
Some recent studies however, such as a 2015 paper by Georg Graetz and Guy Michaels, found that at least in the area they studied – the impact of industrial robots – innovation is boosting pay for highly skilled workers while having a more negative impact on those with low to medium skills. A 2015 report by Carl Benedikt Frey, Michael Osborne and Citi Research agreed that innovation had been disruptive mostly to middle-skilled jobs, yet predicted that in the next ten years the impact of automation would fall most heavily on those with low skills.
Geoffrey Colvin at Forbes argued that predictions on the kind of work a computer will never be able to do have proven inaccurate. A better approach to anticipate the skills on which humans will provide value would be to find out activities where we will insist that humans remain accountable for important decisions, such as with judges, CEOs, bus drivers and government leaders, or where human nature can only be satisfied by deep interpersonal connections, even if those tasks could be automated.
In contrast, others see even skilled human laborers being made obsolete. Oxford academics Carl Benedikt Frey and Michael A Osborne have predicted computerization could make nearly half of jobs redundant; of the 702 professions assessed, they found education and income to be strongly negatively correlated with a job's susceptibility to automation, with office jobs and service work among the more at-risk occupations. In 2012 co-founder of Sun Microsystems Vinod Khosla predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.
The issue of redundant jobs is examined in a 2019 paper by Natalya Kozlova, according to which over 50% of workers in Russia perform work that requires low levels of education and could be replaced by digital technologies. Only 13% of those workers have an education that exceeds the level of the intelligent computer systems present today or expected within the following decade.
Empirical findings
There has been a significant amount of empirical research that attempts to quantify the impact of technological unemployment, mainly at the microeconomic level. Most existing firm-level research has found a labor-friendly nature of technological innovations. For example, German economists Stefan Lachenmaier and Horst Rottmann find that both product and process innovation have a positive effect on employment. They also find that process innovation has a more significant job creation effect than product innovation. This result is supported by evidence in the United States as well, which shows that manufacturing firm innovations have a positive effect on the total number of jobs, not just limited to firm-specific behavior.
At the industry level, however, researchers have found mixed results with regard to the employment effect of technological changes. A 2017 study on manufacturing and service sectors in 11 European countries suggests that positive employment effects of technological innovations only exist in the medium- and high-tech sectors. There also seems to be a negative correlation between employment and capital formation, which suggests that technological progress could potentially be labor-saving given that process innovation is often incorporated in investment.
Limited macroeconomic analysis has been done to study the relationship between technological shocks and unemployment. The small amount of existing research, however, suggests mixed results. Italian economist Marco Vivarelli finds that the labor-saving effect of process innovation appears to have affected the Italian economy more negatively than the United States. On the other hand, the job creating effect of product innovation could only be observed in the United States, not Italy. Another study in 2013 finds a more transitory, rather than permanent, unemployment effect of technological change.
Measures of technological innovation
There have been four main approaches that attempt to capture and document technological innovation quantitatively. The first one, proposed by Jordi Gali in 1999 and further developed by Neville Francis and Valerie A. Ramey in 2005, is to use long-run restrictions in a vector autoregression (VAR) to identify technological shocks, assuming that only technology affects long-run productivity.
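As a rough illustration of this first approach, the sketch below fits a reduced-form VAR and applies a Blanchard–Quah-style long-run restriction, so that only the first (technology) shock can move the first variable in the long run. The bivariate productivity/hours system and the synthetic data are assumptions made purely for the example; this is not Galí's dataset or code.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
# Synthetic stationary growth rates: [productivity growth, hours growth]
data = 0.01 * rng.standard_normal((500, 2))

res = VAR(data).fit(2)                   # reduced-form VAR with 2 lags
A = np.eye(res.neqs) - sum(res.coefs)    # I - A1 - A2
Psi1 = np.linalg.inv(A)                  # long-run reduced-form impact matrix
lr_cov = Psi1 @ res.sigma_u @ Psi1.T     # long-run covariance of the system
F = np.linalg.cholesky(lr_cov)           # lower triangular: shock 2 has zero
                                         # long-run effect on productivity
B0 = A @ F                               # structural impact matrix (u_t = B0 @ eps_t)
print("long-run impact matrix:\n", F)
```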
The second approach is from Susanto Basu, John Fernald and Miles Kimball. They create a measure of aggregate technology change with augmented Solow residuals, controlling for aggregate, non-technological effects such as non-constant returns and imperfect competition.
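The unaugmented Solow residual that this second approach starts from is simply output growth minus share-weighted input growth; the augmented measure then controls for utilization, non-constant returns and imperfect competition. A minimal sketch of the plain residual, with invented growth rates:

```python
def solow_residual(dy, dk, dl, alpha=0.33):
    """Plain Solow residual in log-growth form: output growth minus
    capital growth weighted by capital's income share (alpha) and
    labour growth weighted by labour's share (1 - alpha)."""
    return dy - alpha * dk - (1.0 - alpha) * dl

# 3% output growth, 2% capital growth, 1% labour growth (illustrative)
print(f"TFP growth: {solow_residual(0.03, 0.02, 0.01):.4f}")  # 0.0167
```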
The third method, initially developed by John Shea in 1999, takes a more direct approach and employs observable indicators such as research and development (R&D) spending, and number of patent applications. This measure of technological innovation is widely used in empirical research, since it does not rely on the assumption that only technology affects long-run productivity, and fairly accurately captures output variation based on input variation. However, there are limitations with direct measures such as R&D. For example, since R&D only measures the input in innovation, the output is unlikely to be perfectly correlated with the input. In addition, R&D fails to capture the indeterminate lag between developing a new product or service, and bringing it to market.
The fourth approach, constructed by Michelle Alexopoulos, looks at the number of new titles published in the fields of technology and computer science to reflect technological progress, which she found to be consistent with R&D expenditure data. Compared with R&D, this indicator captures the lag between changes in technology.
Solutions
Preventing net job losses
Banning/refusing innovation
Historically, innovations were sometimes banned due to concerns about their impact on employment. Since the development of modern economics, however, this option has generally not even been considered as a solution, at least not for the advanced economies. Even commentators who are pessimistic about long-term technological unemployment invariably consider innovation to be an overall benefit to society, with J. S. Mill being perhaps the only prominent western political economist to have suggested prohibiting the use of technology as a possible solution to unemployment.
Gandhian economics called for a delay in the uptake of labour saving machines until unemployment was alleviated, however this advice was largely rejected by Nehru who was to become prime minister once India achieved its independence. The policy of slowing the introduction of innovation so as to avoid technological unemployment was, however, implemented in the 20th century within China under Mao's administration.
Shorter working hours
In 1870, the average American worker clocked up about 75 hours per week. Just prior to World War II working hours had fallen to about 42 per week, and the fall was similar in other advanced economies. According to Wassily Leontief, this was a voluntary increase in technological unemployment. The reduction in working hours helped share out available work, and was favoured by workers who were happy to reduce hours to gain extra leisure, as innovation was at the time generally helping to increase their rates of pay.
Further reductions in working hours have been proposed as a possible solution to unemployment by economists including John R. Commons, Lord Keynes and Luigi Pasinetti. Yet once working hours have reached about 40 hours per week, workers have been less enthusiastic about further reductions, both to prevent loss of income and as many value engaging in work for its own sake. Generally, 20th-century economists had argued against further reductions as a solution to unemployment, saying it reflects a lump of labour fallacy.
In 2014, Google's co-founder, Larry Page, suggested a four-day workweek, so as technology continues to displace jobs, more people can find employment.
Public works
Programmes of public works have traditionally been used as a way for governments to directly boost employment, though this has often been opposed by some, but not all, conservatives. Jean-Baptiste Say, although generally associated with free market economics, advised that public works could be a solution to technological unemployment. Some commentators, such as professor Mathew Forstater, have advised that public works and guaranteed jobs in the public sector may be the ideal solution to technological unemployment, as unlike welfare or guaranteed income schemes they provide people with the social recognition and meaningful engagement that comes with work.
For less developed economies, public works may be an easier solution to administer than universal welfare programmes. A partial exception is spending on infrastructure, which has been recommended as a solution to technological unemployment even by economists previously associated with a neoliberal agenda, such as Larry Summers.
Education
Improved availability of quality education, including skills training for adults, is a solution that in principle at least is not opposed by any side of the political spectrum, and is welcomed even by those who are optimistic about the long-term employment effects of technology. Improved education paid for by government tends to be especially popular with industry. However, several academics have argued that improved education alone will not be sufficient to solve technological unemployment, pointing to recent declines in the demand for many intermediate skills, and suggesting that not everyone is capable of becoming proficient in the most advanced skills. Kim Taipale has said that "The era of bell curve distributions that supported a bulging social middle class is over... Education per se is not going to make up the difference.", while back in 2011 Paul Krugman argued that better education would be an insufficient solution to technological unemployment.
Living with technological unemployment
Welfare payments
The use of various forms of subsidies has often been accepted as a solution to technological unemployment even by conservatives and by those who are optimistic about the long-term effect on jobs. Welfare programmes have historically tended to be more durable once established, compared with other solutions to unemployment such as directly creating jobs with public works. Even Ramsay McCulloch, the first person to create a formal system describing compensation effects, along with most other classical economists, advocated government aid for those suffering from technological unemployment, as they understood that market adjustment to new technology was not instantaneous and that those displaced by labour-saving technology would not always be able to immediately obtain alternative employment through their own efforts.
Basic income
Several commentators have argued that traditional forms of welfare payment may be inadequate as a response to the future challenges posed by technological unemployment, and have suggested a basic income as an alternative. People advocating some form of basic income as a solution to technological unemployment include Martin Ford, Erik Brynjolfsson, Robert Reich, Andrew Yang, Elon Musk, Zoltan Istvan, and Guy Standing. Reich has gone as far as to say the introduction of a basic income, perhaps implemented as a negative income tax, is "almost inevitable", while Standing has said he considers that a basic income is becoming "politically essential".
Since late 2015, new basic income pilots have been announced in Finland, the Netherlands, and Canada. Further recent advocacy for basic income has arisen from a number of technology entrepreneurs, the most prominent being Sam Altman, president of Y Combinator.
Skepticism about basic income is found on both the right and the left, and proposals for different forms of it have come from all segments of the spectrum. For example, while the best-known proposed forms (with taxation and distribution) are usually thought of as left-leaning ideas that right-leaning people try to defend against, other forms have been proposed even by libertarians, such as von Hayek and Friedman. In the United States, President Richard Nixon's Family Assistance Plan (FAP) of 1969, which had much in common with basic income, passed in the House but was defeated in the Senate.
One objection to basic income is that it could be a disincentive to work, but evidence from older pilots in India, Africa, and Canada indicates that this does not happen and that a basic income encourages low-level entrepreneurship and more productive, collaborative work. Another objection is that funding it sustainably is a huge challenge. While new revenue-raising ideas have been proposed such as Martin Ford's wage recapture tax, how to fund a generous basic income remains a debated question, and skeptics have dismissed it as utopian. Even from a progressive viewpoint, there are concerns that a basic income set too low may not help the economically vulnerable, especially if financed largely from cuts to other forms of welfare.
To better address both the funding concerns and concerns about government control, one alternative model is that the cost and control would be distributed across the private sector instead of the public sector. Companies across the economy would be required to employ humans, but the job descriptions would be left to private innovation, and individuals would have to compete to be hired and retained. This would be a for-profit sector analog of basic income, that is, a market-based form of basic income. It differs from a job guarantee in that the government is not the employer (rather, companies are) and there is no aspect of having employees who "cannot be fired", a problem that interferes with economic dynamism. The economic salvation in this model is not that every individual is guaranteed a job, but rather just that enough jobs exist that massive unemployment is avoided and employment is no longer solely the privilege of only the very smartest or highly trained 20% of the population. Another option for a market-based form of basic income has been proposed by the Center for Economic and Social Justice (CESJ) as part of "a Just Third Way" (a Third Way with greater justice) through widely distributed power and liberty. Called the Capital Homestead Act, it is reminiscent of James S. Albus's Peoples' Capitalism in that money creation and securities ownership are widely and directly distributed to individuals rather than flowing through, or being concentrated in, centralized or elite mechanisms.
Broadening the ownership of technological assets
Several solutions have been proposed which do not fall easily into the traditional left-right political spectrum. This includes broadening the ownership of robots and other productive capital assets. Enlarging the ownership of technologies has been advocated by people including James S. Albus, John Lanchester, Richard B. Freeman, and Noah Smith.
Jaron Lanier has proposed a somewhat similar solution: a mechanism where ordinary people receive "nano payments" for the big data they generate by their regular surfing and other aspects of their online presence.
Structural changes towards a post-scarcity economy
The Zeitgeist Movement (TZM), The Venus Project (TVP), as well as various individuals and organizations, propose structural changes towards a form of post-scarcity economy in which people are 'freed' from their automatable, monotonous jobs, instead of 'losing' their jobs. In the system proposed by TZM, all jobs are either automated, abolished for bringing no true value to society (such as ordinary advertising), rationalized by more efficient, sustainable and open processes and collaboration, or carried out based on altruism and social relevance, as opposed to compulsion or monetary gain. The movement also speculates that the free time made available to people will permit a renaissance of creativity, invention, community and social capital, as well as reducing stress.
Other approaches
The threat of technological unemployment has occasionally been used by free market economists as a justification for supply side reforms, to make it easier for employers to hire and fire workers. Conversely, it has also been used as a reason to justify an increase in employee protection.
Economists including Larry Summers have advised that a package of measures may be needed. Summers advised vigorous cooperative efforts to address the "myriad devices" – such as tax havens, bank secrecy, money laundering, and regulatory arbitrage – which enable the holders of great wealth to avoid paying taxes, and to make it more difficult to accumulate great fortunes without requiring "great social contributions" in return. He suggested more vigorous enforcement of anti-monopoly laws; reductions in "excessive" protection for intellectual property; greater encouragement of profit-sharing schemes that may benefit workers and give them a stake in wealth accumulation; strengthening of collective bargaining arrangements; improvements in corporate governance; strengthening of financial regulation to eliminate subsidies to financial activity; easing of land-use restrictions that may cause estates to keep rising in value; better training for young people and retraining for displaced workers; and increased public and private investment in infrastructure development, such as energy production and transportation.
Michael Spence has advised that responding to the future impact of technology will require a detailed understanding of the global forces and flows technology has set in motion. Adapting to them "will require shifts in mindsets, policies, investments (especially in human capital), and quite possibly models of employment and distribution".
See also
Notes
References
Citations
Sources
Further reading
John Maynard Keynes, The Economic Possibilities of our Grandchildren (1930)
E McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2018) ssrn.com, part 2(2)
Ross, Alec (2016), The Industries of the Future, USA: Simon & Schuster.
Impact of automation
Technological change
Unemployment | Technological unemployment | [
"Engineering"
] | 11,222 | [
"Impact of automation",
"Automation"
] |
32,040,610 | https://en.wikipedia.org/wiki/Tosyl%20azide | Tosyl azide is a reagent used in organic synthesis.
Uses
Tosyl azide is used for the introduction of azide and diazo functional groups. It is also used as a nitrene source and as a substrate for [3+2] cycloaddition reactions.
Preparation
Tosyl azide can be prepared by the reaction of tosyl chloride with sodium azide in aqueous acetone.
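Schematically (writing Ts for the tosyl group, CH3C6H4SO2–), the preparation is a simple substitution:
TsCl + NaN3 -> TsN3 + NaCl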
Safety
Tosyl azide is one of the most stable azide compounds but is still regarded as a potential explosive and should be carefully stored, while particular caution is vital for all reactions in which it is heated at or above 100 °C. The initial temperature of the explosive decomposition is about 120 °C.
See also
Diphenylphosphoryl azide
Trifluoromethanesulfonyl azide
References
Azido compounds
Reagents for organic chemistry
p-Tosyl compounds | Tosyl azide | [
"Chemistry"
] | 192 | [
"Reagents for organic chemistry"
] |
32,043,063 | https://en.wikipedia.org/wiki/Sodium%20tert-butoxide | Sodium tert-butoxide (or sodium t-butoxide) is a chemical compound with the formula (CH3)3CONa (abbr. NaOtBu). It is a strong, non-nucleophilic base. It is flammable and moisture sensitive. It is sometimes written in the chemical literature as sodium t-butoxide. It is similar in reactivity to the more common potassium tert-butoxide.
The compound can be produced by treating tert-butyl alcohol with sodium hydride.
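Written out, this deprotonation releases hydrogen gas:
(CH3)3COH + NaH -> (CH3)3CONa + H2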
Reactions
One application for sodium tert-butoxide is as a non-nucleophilic base. It has been widely used in the Buchwald–Hartwig amination, as in this typical example:
Sodium tert-butoxide is used to prepare tert-butoxide complexes. For example, hexa(tert-butoxy)ditungsten(III) is obtained by a salt metathesis reaction from a ditungsten heptachloride:
NaW2Cl7(THF)5 + 6 NaOBu-t → W2(OBu-t)6 + 7 NaCl + 5 THF
Structure
Sodium tert-butoxide forms clusters in the solid state, both hexamers and nonamers.
Related compounds
Potassium tert-butoxide
Lithium tert-butoxide
Sodium tert-amyloxide (), a highly soluble analogue of sodium tert-butoxide
References
Alkoxides
Reagents for organic chemistry
Non-nucleophilic bases
Tert-butyl compounds
Sodium compounds | Sodium tert-butoxide | [
"Chemistry"
] | 331 | [
"Non-nucleophilic bases",
"Alkoxides",
"Functional groups",
"Reagents for organic chemistry",
"Bases (chemistry)"
] |
32,043,083 | https://en.wikipedia.org/wiki/IEEE%20Magnetics%20Letters | IEEE Magnetics Letters is a peer-reviewed scientific journal that was started in January 2010. It covers the physics and engineering of magnetism, magnetic materials, applied magnetics, design and application of magnetic devices, biomagnetics, magneto-electronics, and spin electronics. It publishes short articles of up to five pages in length and is a hybrid open access journal. The editor-in-chief is Massimiliano d'Aquino (University of Naples Federico II).
References
External links
Engineering journals
English-language journals
Hybrid open access journals
Magnetics Letters
Materials science journals
Academic journals established in 2010
Physics journals | IEEE Magnetics Letters | [
"Materials_science",
"Engineering"
] | 125 | [
"Materials science journals",
"Materials science"
] |
32,044,869 | https://en.wikipedia.org/wiki/Weapon%20target%20assignment%20problem | The weapon target assignment problem (WTA) is a class of combinatorial optimization problems present in the fields of optimization and operations research. It consists of finding an optimal assignment of a set of weapons of various types to a set of targets in order to maximize the total expected damage done to the opponent.
The basic problem is as follows:
There are a number of weapons and a number of targets. The weapons are of type i = 1, ..., m. There are W_i available weapons of type i. Similarly, there are n targets, indexed j = 1, ..., n, each with a value of V_j. Any of the weapons can be assigned to any target. Each weapon type has a certain probability of destroying each target, given by p_ij.
Notice that as opposed to the classic assignment problem or the generalized assignment problem, more than one agent (i.e., weapon) can be assigned to each task (i.e., target) and not all targets are required to have weapons assigned. Thus, we see that the WTA allows one to formulate optimal assignment problems wherein tasks require cooperation among agents. Additionally, it provides the ability to model probabilistic completion of tasks in addition to costs.
Both static and dynamic versions of WTA can be considered. In the static case, the weapons are assigned to targets once. The dynamic case involves many rounds of assignment where the state of the system after each exchange of fire (round) is considered in the next round. While the majority of work has been done on the static WTA problem, recently the dynamic WTA problem has received more attention.
In spite of the name, there are nonmilitary applications of the WTA. The main one is to search for a lost object or person by heterogeneous assets such as dogs, aircraft, walkers, etc. The problem is to assign the assets to a partition of the space in which the object is located to minimize the probability of not finding the object. The "value" of each element of the partition is the probability that the object is located there.
Formal mathematical definition
The weapon target assignment problem is often formulated as the following nonlinear integer programming problem:

$$\min \sum_{j=1}^{n} V_j \prod_{i=1}^{m} q_{ij}^{x_{ij}}$$

subject to the constraints

$$\sum_{j=1}^{n} x_{ij} \le W_i \quad \text{for } i = 1, \ldots, m,$$
$$x_{ij} \ge 0 \text{ and integer for all } i, j.$$

Here the variable $x_{ij}$ represents the number of weapons of type $i$ assigned to target $j$, and $q_{ij} = 1 - p_{ij}$ is the probability of survival. The first constraint requires that the number of weapons of each type assigned does not exceed the number available. The second constraint is the integrality constraint.
Notice that minimizing the expected survival value is the same as maximizing the expected damage.
Algorithms and generalizations
An exact solution can be found using branch and bound techniques which utilize relaxation (approximation). Many heuristic algorithms have been proposed which provide near-optimal solutions in polynomial time.
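As an illustration of the greedy family of heuristics, the following Python sketch (an illustration added here, with function and variable names of our own choosing) implements the classic maximum-marginal-return rule: repeatedly assign one remaining weapon to the target where it most reduces the expected surviving value. Applied to the worked example in the next section, this sketch returns a total expected survival value of 4.95.

def greedy_wta(values, p, stock):
    # values[j]: target value V_j
    # p[w][j]: probability that one weapon of type w destroys target j
    # stock[w]: number of available weapons of type w
    surviving = list(values)                 # V_j times the running survival probability
    assignment = {w: [0] * len(values) for w in stock}
    remaining = dict(stock)
    while any(remaining.values()):
        # pick the (weapon type, target) pair with the largest marginal reduction
        w, j = max(((w, j) for w in remaining if remaining[w] > 0
                    for j in range(len(values))),
                   key=lambda wj: surviving[wj[1]] * p[wj[0]][wj[1]])
        surviving[j] *= 1.0 - p[w][j]
        assignment[w][j] += 1
        remaining[w] -= 1
    return assignment, sum(surviving)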
Example
A commander has 5 tanks, 2 aircraft, and 1 sea vessel and is told to engage 3 targets with values 5, 10, and 20. Each weapon type has the following success probabilities against each target:
{| class="wikitable"
|-
! Weapon Type !! Target 1 !! Target 2 !! Target 3
|-
| Tank || 0.3 || 0.2 || 0.5
|-
| Aircraft || 0.1 || 0.6 || 0.5
|-
| Sea Vessel || 0.4 || 0.5 || 0.4
|}
One feasible solution is to assign the sea vessel and one aircraft to the highest valued target (#3). This results in an expected survival value of 20(1 − 0.5)(1 − 0.4) = 6. One could then assign the remaining aircraft and 2 tanks to target #2, resulting in an expected survival value of 10(1 − 0.6)(1 − 0.2)² = 2.56. Finally, the remaining 3 tanks are assigned to target #1, which has an expected survival value of 5(1 − 0.3)³ = 1.715. Thus, we have a total expected survival value of 6 + 2.56 + 1.715 = 10.275. Note that a better solution can be achieved by assigning 3 tanks to target #1, 2 tanks and the sea vessel to target #2, and 2 aircraft to target #3, giving an expected survival value of 1.715 + 3.2 + 5 = 9.915.
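The arithmetic above can be reproduced with a short Python sketch (an illustration added here, not part of the original example; it simply evaluates the objective function from the formulation above for the two assignments just described):

values = [5, 10, 20]                       # target values V_j
p = {                                      # kill probabilities p_ij from the table
    "tank":     [0.3, 0.2, 0.5],
    "aircraft": [0.1, 0.6, 0.5],
    "vessel":   [0.4, 0.5, 0.4],
}

def expected_survival(x):
    # x[w][j]: number of weapons of type w assigned to target j
    total = 0.0
    for j, v in enumerate(values):
        q = 1.0
        for w, counts in x.items():
            q *= (1.0 - p[w][j]) ** counts[j]
        total += v * q
    return total

first  = {"tank": [3, 2, 0], "aircraft": [0, 1, 1], "vessel": [0, 0, 1]}
better = {"tank": [3, 2, 0], "aircraft": [0, 0, 2], "vessel": [0, 1, 0]}
print(expected_survival(first))   # 10.275
print(expected_survival(better))  # 9.915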
See also
Auction algorithm
Closure problem
Generalized assignment problem
Linear bottleneck assignment problem
Quadratic assignment problem
Stable marriage problem
References
Further reading
Combinatorial optimization
Matching (graph theory)
Combat modeling | Weapon target assignment problem | [
"Mathematics"
] | 814 | [
"Applied mathematics",
"Graph theory",
"Mathematical relations",
"Matching (graph theory)",
"Combat modeling"
] |
32,045,633 | https://en.wikipedia.org/wiki/Superdense%20carbon%20allotropes | Superdense carbon allotropes are proposed configurations of carbon atoms that result in a stable material with a higher density than diamond. Few hypothetical carbon allotropes denser than diamond are known. All these allotropes can be divided at two groups: the first are hypothetically stable at ambient conditions; the second are high-pressure carbon allotropes which become quasi-stable only at high pressure.
Ambient conditions
According to the SACADA database, the first group comprises the structures called hP3, tI12, st12, r8, I41/a, P41212, m32, m32*, t32, t32*, H-carbon and uni. Among them, st12 carbon was proposed as far back as 1987 in the work of R. Biswas et al.
High-pressure carbon
The following allotropes belong to the second group: MP8, OP8, SC4, BC-8 and (9,0). These are hypothetically quasi-stable at high pressure. BC-8 carbon is not only a superdense allotrope but also one of the oldest hypothetical carbon structures; it was initially proposed in 1984 in the work of R. Biswas et al. The MP8 structure, proposed in the work of J. Sun et al., is almost two times denser than diamond; its density is as high as 7.06 g/cm3, the highest value reported so far.
Band gaps
The hypothetical superdense carbon allotropes differ from one another in their band gaps. For example, SC4 is supposed to be a metallic allotrope, while st12, m32, m32*, t32, and t32* have band gaps larger than 5.0 eV.
Carbon tetrahedra
These new materials would have structures based on carbon tetrahedra, and represent the densest of such structures. On the opposite end of the density spectrum is a recently theorized tetrahedral structure called T-carbon. This is obtained by replacing carbon atoms in diamond with carbon tetrahedra. In contrast to superdense allotropes, T-carbon would have very low density and hardness.
References
External links
SACADA - Samara Carbon Allotrope Database
Allotropes of carbon | Superdense carbon allotropes | [
"Chemistry"
] | 478 | [
"Allotropes of carbon",
"Allotropes"
] |
28,249,611 | https://en.wikipedia.org/wiki/Sarit%20Kraus | Sarit Kraus (; born 1960) is a professor of computer science at the Bar-Ilan University in Israel. She was named the 2020-2021 ACM Athena Lecturer in recognition of her contributions to artificial intelligence, notably to multiagent systems, human-agent interaction, autonomous agents and non-monotonic reasoning, as well as her leadership in these fields.
Biography
Sarit Kraus was born in Jerusalem, Israel. She completed her Ph.D. in Computer Science at Hebrew University in 1989 under the supervision of Prof. Daniel Lehmann. She is married to Prof. Yitzchak Kraus and has five children.
Academic career
Kraus has made highly influential contributions to numerous subfields, most notably to multiagent systems (including people and robots) and non-monotonic reasoning. One of her important contributions is to strategic negotiation. Her work in this area is among the first to integrate game theory with artificial intelligence. Furthermore, she started new research on automated agents that negotiate with people, and established that these agents must be evaluated via experiments with humans. In particular, she developed Diplomat, the first automated agent that negotiated proficiently with people. This was followed by other agents that interact well with people by integrating a qualitative decision-making approach with machine learning tools, to face the challenge of people being boundedly rational. Based on Kraus's work, others have begun to develop automated agents that negotiate and interact with people.
Consequently, Kraus's work has become the gold standard for research in negotiation, both among automated agents and between agents and humans. This work has provoked the curiosity of other communities and was published in journals of political science, psychology and economics.
Another influential contribution of Kraus is in introducing a dimension of individualism into the multi-agent field by developing protocols and strategies for cooperation among self-interested agents including the formation of coalitions. This view differed radically from the fully cooperative agents approach, commonly held then by the multi-agent community (then called Distributed Artificial Intelligence). Individualism is necessary for reliably constraining the behaviour in open environments, such as electronic marketplaces.
Together with Barbara J. Grosz of Harvard, Kraus developed a reference theory for collaborative planning (a TeamWork model) called SharedPlans, which provides specification for the design of collaboration-capable agents and a framework for identifying and investigating fundamental questions about collaboration. It specifies the minimal conditions for a group of agents to have a joint goal, the group and individual decision making procedures that are required, the way the agents' mental states and plans can evolve over time and other various important relationships among the agents, e.g., teammates, subcontractors, etc. Given the extensiveness of SharedPlans and its rigorous specifications, it has been the basis for many other works and was widely adopted in other fields (e.g. robotics or human-machine interaction).
Kraus is also highly recognized for her contribution to the area of Non-Monotonic Reasoning. She is the first author of one of the most influential papers in the area (KLM). Within the mainstream logic community, “KLM” semantics have had probably the greatest impact.
According to her DBLP entry, Kraus has 131 collaborators from all around the world and from different disciplines. She is the author of a monograph on negotiations and co-author of two additional books.
Kraus's solutions have enriched the research community, and they also bear practical fruit. Her research has induced the design and construction of real-world systems, which have transformed concepts from academia into reality. Kraus, together with Tambe and Ordonez from USC, developed an innovative approach of randomized policies for security applications. The innovative algorithm, which combines game theory and optimization methods, improves the state of the art in security of robotics and multi-agent systems, and has been used in practice at Los Angeles International Airport since 2007. Her seminal research in the area of formal models of collaboration is used in industrial cutting-edge simulation technology and team-support tools. Her work in developing Sheba's virtual speech therapist is currently being used for treatment by several Israeli HMOs. The Colored Trails game environment that she developed together with Grosz of Harvard provides a platform for researchers to conduct decision-making studies and is now used extensively by researchers from dozens of universities, as well as for training astronauts. Additional recent projects of hers include building systems that negotiate and argue proficiently with people: her research on culture-sensitive agents has resulted in the development of numerous agents for cross-culture collaboration, with provable success in interaction with hundreds of people in America, the Far East and the Middle East – all believing that they interacted with a person, not recognizing that it was actually an agent. Her work on virtual humans has led to the development of a system for the Israeli police for training law enforcement officials to interview witnesses and suspects. Here, virtual psychological models of the suspect were developed, leading to diversified answers by the virtual suspect.
Recently, she has developed an intelligent agent which supports an operator that works with a team of low-cost autonomous robots. Finally, together with the Israeli GM center, a persuasion system has been developed that generates advice for drivers regarding various different decisions involving conflicting goals.
Awards
1995 IJCAI-95 Computers and Thought Award. The award is given by the IJCAI organization every two years to an "outstanding young scientist"
2002 AAAI Fellow
2007 ACM/SIGART Autonomous Agents Research Award. The award is given by ACM SIGART, in collaboration with IFAAMAS, for excellence in research in the area of autonomous agents
2007 IFAAMAS Influential Paper Award with Barbara Grosz (joint winner)
2008 ECCAI Fellow
2009 Special commendations from the city of Los Angeles for the creation of the ARMOR security scheduling system
2010 “Women of the year” of Emuna
2010 EMET prize
2012 elected to Academia Europaea
2014 IFAAMAS Influential Paper Award with Onn Shehory
2014 ACM Fellow. "For contributions to artificial intelligence, including multi-agent systems, human-agent interaction and non-monotonic reasoning"
2020 ACM 2020-2021 ACM Athena Lecturer. "For foundational contributions to artificial intelligence, notably to multi-agent systems, human-agent interaction, autonomous agents and non-monotonic reasoning, and exemplary service and leadership in these fields."
2023 IJCAI Award for Research Excellence
References
External links
at the Bar-Ilan University.
1960 births
Artificial intelligence researchers
Academic staff of Bar-Ilan University
Israeli computer scientists
Israeli Orthodox Jews
Living people
Israeli women computer scientists
Fellows of the Association for the Advancement of Artificial Intelligence
2014 fellows of the Association for Computing Machinery
Game theorists
Hebrew University of Jerusalem School of Computer Science & Engineering alumni
Members of Academia Europaea
Jewish women scientists | Sarit Kraus | [
"Mathematics"
] | 1,389 | [
"Game theorists",
"Game theory"
] |
28,249,680 | https://en.wikipedia.org/wiki/Vitamin%20A2 | Vitamin A2 is a subcategory of vitamin A.
As with all vitamin A forms, A2 can exist as an aldehyde, dehydroretinal (3,4-dehydroretinal); an alcohol, 3,4-dehydroretinol (vitamin A2 alcohol); or an acid, 3,4-dehydroretinoic acid (vitamin A2 acid). Many cold-blooded vertebrates use the aldehyde form in their visual systems to obtain a red-shifted spectral sensitivity.
Human skin naturally contains the alcohol form. In humans, CYP27C1 converts ordinary A1 (all-trans retinoids) to A2. The enzyme also converts 11-cis-retinal.
Vitamin A2 was first identified by Richard Alan Morton using newly-developed absorption spectroscopy in 1941.
References
Vitamin A
Carotenoids | Vitamin A2 | [
"Chemistry",
"Biology"
] | 188 | [
"Biomarkers",
"Vitamin A",
"Biomolecules",
"Carotenoids"
] |
28,252,413 | https://en.wikipedia.org/wiki/Micro-mechanics%20of%20failure | The theory of micro-mechanics of failure aims to explain the failure of continuous fiber reinforced composites by micro-scale analysis of stresses within each constituent material (such as fiber and matrix), and of the stresses at the interfaces between those constituents, calculated from the macro stresses at the ply level.
As a completely mechanics-based failure theory, the theory is expected to provide more accurate analyses than those obtained with phenomenological models such as Tsai-Wu and Hashin failure criteria, being able to distinguish the critical constituent in the critical ply in a composite laminate.
Basic concepts
The basic concept of the micro-mechanics of failure (MMF) theory is to perform a hierarchy of micromechanical analyses, starting from mechanical behavior of constituents (the fiber, the matrix, and the interface), then going on to the mechanical behavior of a ply, of a laminate, and eventually of an entire structure.
At the constituent level, three elements are required to fully characterize each constituent:
The constitutive relation, which describes the transient, or time-independent, response of the constituent to external mechanical as well as hygrothermal loadings;
The master curve, which describes the time-dependent behavior of the constituent under creep or fatigue loadings;
The failure criterion, which describes conditions that cause failure of the constituent.
The constituents and a unidirectional lamina are linked via a proper micromechanical model, so that ply properties can be derived from constituent properties, and on the other hand, micro stresses at the constituent level can be calculated from macro stresses at the ply level.
Unit cell model
Starting from the constituent level, it is necessary to devise a proper method to organize all three constituents such that the microstructure of a UD lamina is well-described. In reality, all fibers in a UD ply are aligned longitudinally; however, in the cross-sectional view, the distribution of fibers is random, and there is no distinguishable regular pattern in which fibers are arrayed. To avoid such a complication cause by the random arrangement of fibers, an idealization of the fiber arrangement in a UD lamina is performed, and the result is the regular fiber packing pattern. Two regular fiber packing patterns are considered: the square array and the hexagonal array. Either array can be viewed as a repetition of a single element, named unit cell or representative volume element (RVE), which consists of all three constituents. With periodical boundary conditions applied, a unit cell is able to respond to external loadings in the same way that the whole array does. Therefore, a unit cell model is sufficient in representing the microstructure of a UD ply.
Stress amplification factor (SAF)
Stress distribution at the laminate level due to external loadings applied to the structure can be acquired using finite element analysis (FEA). Stresses at the ply level can be obtained through transformation of laminate stresses from the laminate coordinate system to the ply coordinate system. To further calculate micro stresses at the constituent level, the unit cell model is employed. Micro stresses at any point within fiber/matrix, and micro surface tractions at any interfacial point, are related to ply stresses as well as temperature increment through:

$$\sigma_\alpha = M_\alpha \bar{\sigma} + A_\alpha \Delta T, \qquad \alpha = f, m, i$$

Here $\sigma_f$, $\sigma_m$, and $\sigma_i$ are column vectors with 6, 6, and 3 components, respectively. Subscripts serve as indications of constituents, i.e. $f$ for fiber, $m$ for matrix, and $i$ for interface. $M_\alpha$ and $A_\alpha$ are respectively called stress amplification factors (SAF) for macro stresses $\bar{\sigma}$ and for temperature increment $\Delta T$. For a micro point in fiber or matrix, $M_\alpha$ is a 6×6 matrix while $A_\alpha$ has the dimension of 6×1; for an interfacial point, the respective dimensions of $M_\alpha$ and $A_\alpha$ are 3×6 and 3×1. The value of each single term in the SAF for a micro material point is determined through FEA of the unit cell model under given macroscopic loading conditions. The definition of SAF is valid not only for constituents having linear elastic behavior and constant coefficients of thermal expansion (CTE), but also for those possessing complex constitutive relations and variable CTEs.
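In code, applying the SAF at a given micro point is a single affine map. The following Python/NumPy sketch (illustrative only: the numerical entries of the SAF matrices must first be extracted from the unit-cell FEA, and the names are our own) performs the conversion from ply-level stresses to micro stresses or interfacial tractions:

import numpy as np

def micro_stress(M, A, macro_stress, delta_T):
    # M: SAF for macro stresses, shape (6, 6) for fiber/matrix points
    #    or (3, 6) for interfacial points
    # A: SAF for temperature increment, shape (6,) or (3,)
    # macro_stress: the 6 ply-level stress components
    M = np.asarray(M, dtype=float)
    A = np.asarray(A, dtype=float)
    macro = np.asarray(macro_stress, dtype=float)
    return M @ macro + A * delta_T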
Constituent failure criteria
Fiber failure criterion
Fiber is taken as transversely isotropic, and there are two alternative failure criteria for it: a simple maximum stress criterion,

$$-X_C < \sigma_1 < X_T,$$

and a quadratic failure criterion extended from the Tsai-Wu failure criterion:

$$F_1\sigma_1 + F_2(\sigma_2 + \sigma_3) + F_{11}\sigma_1^2 + F_{22}(\sigma_2^2 + \sigma_3^2) + 2F_{12}\sigma_1(\sigma_2 + \sigma_3) + 2F_{23}\sigma_2\sigma_3 + F_{44}\sigma_4^2 + F_{66}(\sigma_5^2 + \sigma_6^2) = 1.$$

The coefficients involved in the quadratic failure criterion are defined as follows:

$$F_1 = \frac{1}{X_T} - \frac{1}{X_C}, \quad F_2 = \frac{1}{Y_T} - \frac{1}{Y_C}, \quad F_{11} = \frac{1}{X_T X_C}, \quad F_{22} = \frac{1}{Y_T Y_C}, \quad F_{44} = \frac{1}{S_{23}^2}, \quad F_{66} = \frac{1}{S_{12}^2},$$

with the interaction coefficients commonly taken as $F_{12} = -\tfrac{1}{2}\sqrt{F_{11}F_{22}}$ and, from transverse isotropy, $F_{23} = F_{22} - F_{44}/2$, where $X_T$, $X_C$, $Y_T$, $Y_C$, $S_{23}$, and $S_{12}$ denote longitudinal tensile, longitudinal compressive, transverse tensile, transverse compressive, transverse (or through-thickness) shear, and in-plane shear strength of the fiber, respectively.
Stresses used in the two preceding criteria should be micro stresses in the fiber, expressed in a coordinate system in which the 1-direction signifies the longitudinal direction of the fiber.
Matrix failure criterion
The polymeric matrix is assumed to be isotropic and exhibits a higher strength under uniaxial compression than under uniaxial tension. A modified version of the von Mises failure criterion suggested by Christensen is adopted for the matrix:

$$\frac{\sigma_{vM}^2}{T_m C_m} + \left(\frac{1}{T_m} - \frac{1}{C_m}\right) I_1 \le 1.$$

Here $T_m$ and $C_m$ represent matrix tensile and compressive strength, respectively; whereas $\sigma_{vM}$ and $I_1$ are the von Mises equivalent stress and the first stress invariant of the micro stresses at a point within the matrix, respectively.
Interface failure criterion
The fiber-matrix interface features traction-separation behavior, and the failure criterion dedicated to it takes the following form:

$$\left(\frac{\langle t_n \rangle}{Y_n}\right)^2 + \left(\frac{t_s}{Y_s}\right)^2 \ge 1,$$

where $t_n$ and $t_s$ are normal (perpendicular to the interface) and shear (tangential to the interface) interfacial tractions, with $Y_n$ and $Y_s$ being their corresponding strengths. The angle brackets (Macaulay brackets) imply that a pure compressive normal traction does not contribute to interface failure.
Further extension of MMF
Hashin’s Failure Criteria
These are interacting failure criteria where more than one stress component is used to evaluate the different failure modes. These criteria were originally developed for unidirectional polymeric composites, and hence applications to other types of laminates and non-polymeric composites involve significant approximations. Usually Hashin criteria are implemented within a two-dimensional classical lamination approach for point stress calculations, with ply discounting as the material degradation model. Failure indices for Hashin criteria are related to fibre and matrix failures and involve four failure modes. The criteria are extended to three-dimensional problems, where the maximum stress criteria are used for the transverse normal stress component.
The failure modes included in Hashin's criteria are as follows.
Tensile fibre failure for σ11 ≥ 0: (σ11/XT)² + (σ12² + σ13²)/S12² = 1
Compressive fibre failure for σ11 < 0: (σ11/XC)² = 1
Tensile matrix failure for σ22 + σ33 > 0: (σ22 + σ33)²/YT² + (σ23² − σ22σ33)/S23² + (σ12² + σ13²)/S12² = 1
Compressive matrix failure for σ22 + σ33 < 0: [(YC/(2S23))² − 1](σ22 + σ33)/YC + (σ22 + σ33)²/(4S23²) + (σ23² − σ22σ33)/S23² + (σ12² + σ13²)/S12² = 1
Interlaminar tensile failure for σ33 > 0: (σ33/ZT)² = 1
Interlaminar compression failure for σ33 < 0: (σ33/ZC)² = 1
where, σij denote the stress components and the tensile and compressive allowable strengths for lamina are denoted by subscripts T and C, respectively. XT, YT, ZT denotes the allowable tensile strengths in three respective material directions. Similarly, XC, YC, ZC denotes the allowable compressive strengths in three respective material directions. Further, S12, S13 and S23 denote allowable shear strengths in the respective principal material directions.
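Translated directly into code, the six modes above can be evaluated as failure indices (failure in a mode when its index reaches 1). This Python sketch follows the formulas as given above; sign conventions and the exact form of each mode vary between implementations, so it should be treated as illustrative rather than definitive:

def hashin_3d_indices(s11, s22, s33, s12, s13, s23,
                      XT, XC, YT, YC, ZT, ZC, S12, S23):
    # Returns a dict of failure indices; a mode fails when its index >= 1.
    f = {}
    if s11 >= 0:
        f["fibre_tension"] = (s11 / XT) ** 2 + (s12 ** 2 + s13 ** 2) / S12 ** 2
    else:
        f["fibre_compression"] = (s11 / XC) ** 2
    st = s22 + s33
    shear = (s23 ** 2 - s22 * s33) / S23 ** 2 + (s12 ** 2 + s13 ** 2) / S12 ** 2
    if st > 0:
        f["matrix_tension"] = st ** 2 / YT ** 2 + shear
    else:
        f["matrix_compression"] = (((YC / (2 * S23)) ** 2 - 1) * st / YC
                                   + st ** 2 / (4 * S23 ** 2) + shear)
    if s33 > 0:
        f["interlaminar_tension"] = (s33 / ZT) ** 2
    else:
        f["interlaminar_compression"] = (s33 / ZC) ** 2
    return f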
Endeavors have been made to incorporate MMF with multiple progressive damage models and fatigue models for strength and life prediction of composite structures subjected to static or dynamic loadings.
See also
Composite material
Strength of materials
Material failure theory
Tsai-Wu failure criterion
Christensen failure criterion
References
Mechanics
Solid mechanics
Mechanical failure | Micro-mechanics of failure | [
"Physics",
"Materials_science",
"Engineering"
] | 1,561 | [
"Solid mechanics",
"Materials science",
"Mechanics",
"Mechanical engineering",
"Mechanical failure"
] |
28,252,852 | https://en.wikipedia.org/wiki/National%20Synchrotron%20Light%20Source%20II | The National Synchrotron Light Source II (NSLS-II) at Brookhaven National Laboratory (BNL) in Upton, New York is a national user research facility funded primarily by the U.S. Department of Energy's (DOE) Office of Science. NSLS-II is a synchrotron light source, designed to produce X-rays 10,000 times brighter than BNL's original light source, the National Synchrotron Light Source (NSLS). NSLS-II supports research in energy security, advanced materials synthesis and manufacturing, environment, and human health.
Users and partners
Users
In order to use the NSLS-II, researchers submit a peer-reviewed proposal. In the first five months of 2023, NSLS-II served over 1,200 researchers from academic, industrial, and government laboratories worldwide.
Partners
NSLS-II has partnerships with public and private institutions that have joined efforts to fund the construction and operation of some of its beamlines. Its partners include BNL's Center for Functional Nanomaterials and the National Institute of Standards and Technology.
Beamlines
NSLS-II currently has 29 beamlines (experimental stations) open for operations. When the facility is complete, NSLS-II is expected to "be capable of supporting some 58 beamlines in total."
The beamlines at NSLS-II are grouped into five science programs: hard X-ray scattering & spectroscopy, imaging and microscopy, structural biology, soft X-ray scattering and spectroscopy, and complex scattering. These programs group beamlines together that offer similar types of research techniques for studying the behavior and structure of matter.
Hard X-ray scattering & spectroscopy
6-BM: Materials Measurement (BMM)
7-BM: Quick X-ray Absorption and Scattering (QAS)
8-ID: Inner Shell Spectroscopy (ISS)
27-ID: High Energy X-ray Diffraction (HEX) (under construction)
28-ID-1: Pair Distribution Function (PDF)
28-ID-2: X-Ray Powder Diffraction (XPD)
Imaging and microscopy
3-ID: Hard X-ray Nanoprobe (HXN)
4-BM: X-ray Fluorescence Microprobe (XFM)
5-ID: Submicron Resolution X-ray Spectroscopy (SRX)
8-BM: Tender Energy X-ray Absorption Spectroscopy (TES)
9-ID: Coherent Diffraction Imaging (CDI) (under construction)
18-ID: Full-Field X-ray Imaging (FXI)
Structural biology
16-ID: Life Science X-ray Scattering (LIX)
17-ID-1: Highly Automated Macromolecular Crystallography Beamline (AMX)
17-ID-2: Frontier Microfocusing Macromolecular Crystallography (FMX)
17-BM: X-ray Footprinting of Biological Materials (XFP)
19-ID: Biological Microdiffraction Facility (NYX)
Soft X-ray scattering & spectroscopy
2-ID: Soft Inelastic X-ray Scattering (SIX)
7-ID-1: Spectroscopy Soft and Tender 1 (SST-1)
7-ID-2: Spectroscopy Soft and Tender 2 (SST-2)
21-ID-1: Electron Spectro-Microscopy ARPES (ESM-ARPES)
21-ID-2: Electron Spectro-Microscopy XPEEM (ESM-XPEEM)
22-IR-1: Frontier Synchrotron Infrared Spectroscopy (FIS)
22-IR-2: Magnetospectroscopy, Ellipsomentry and Time-Resolved Optical Spectroscopies (MET)
23-ID-1: Coherent Soft X-ray Scattering (CSX)
23-ID-2: In situ and Operando Soft X-Ray Spectroscopy (IOS)
29-ID-1: Soft X-ray Nanoprobe (SXN) (under construction)
29-ID-2: NanoARPES and NanoRIXS (ARI) (under construction)
Complex scattering
4-ID: Integrated In-situ and Resonant Hard X-ray Studies (ISR)
10-ID: Inelastic X-ray Scattering (IXS)
11-ID: Coherent Hard X-ray Scattering (CHX)
11-BM: Complex Materials Scattering (CMS)
12-ID: Soft Matter Interfaces (SMI)
Storage ring parameters
NSLS-II is a medium energy (3.0 GeV) electron storage ring designed to deliver photons with high average spectral brightness exceeding 10^21 ph/s in the 2–10 keV energy range and a flux density exceeding 10^15 ph/s in all spectral ranges. This performance requires the storage ring to support a very high-current electron beam (up to 500 mA) with a very small horizontal (down to 0.5 nm-rad) and vertical (8 pm-rad) emittance. The electron beam is stable in its position (<10% of its size), angle (<10% of its divergence), dimensions (<10%), and intensity (±0.5% variation).
Storage ring lattice
The NSLS-II storage ring lattice consists of 30 double-bend achromat (DBA) cells that can accommodate at least 58 beamlines for experiments, distributed by type of source as follows:
15 low-beta ID straights for undulators or superconducting wigglers
12 high-beta ID straights for either undulators or damping wigglers
31 BM ports providing broadband sources covering the IR, VUV, and soft X-ray ranges. Any of these ports can alternatively be replaced by a 3PW port covering the hard X-ray range.
4 BM ports on large gap (90 mm) dipoles for very far-IR
Radiation sources
Continuing the tradition established by the NSLS, NSLS-II radiation sources span a very wide spectral range, from the far infrared (down to 0.1 eV) to the very hard X-ray region (>300 keV). This is achieved by a combination of bending magnets, three-pole wigglers, and insertion device (ID) sources.
History
Construction of NSLS-II began in 2009 and was completed in 2014. NSLS-II saw first light in October 2014. The facility cost $912,000,000 to build, and the project received the DOE's Secretary's Award of Excellence. Torcon Inc., headquartered in New Jersey, was the general contractor selected by the DOE for the project.
References
External links
BNL: National Synchrotron Light Source II (NSLS-II)
BNL Photon Sciences: About NSLS-II
Brookhaven National Laboratory – a passion for discovery
Lightsources.org
Synchrotron radiation facilities
Brookhaven National Laboratory
2015 establishments in New York (state) | National Synchrotron Light Source II | [
"Materials_science"
] | 1,426 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
28,253,247 | https://en.wikipedia.org/wiki/13%20Things%20That%20Don%27t%20Make%20Sense | 13 Things That Don't Make Sense is a non-fiction book by the British writer Michael Brooks, published in both the UK and the US during 2008.
The British subtitle is "The Most Intriguing Scientific Mysteries of Our Time" while the American is "The Most Baffling..." (see image).
Based on an article Brooks wrote for New Scientist in March 2005, the book, aimed at the general reader rather than the science community, contains discussion and description of a number of unresolved issues in science. It is a literary effort to discuss some of the inexplicable anomalies that, after centuries of inquiry, science still cannot completely comprehend.
Chapter 1
The Missing Universe. This chapter deals with astronomy and theoretical physics and the ultimate fate of the universe, in particular the search for understanding of dark matter and dark energy and includes discussion of:
The work of astronomers Vesto Slipher and then Edwin Hubble in demonstrating the universe is expanding;
Vera Rubin's investigation of galaxy rotation curves that suggest something other than gravity is preventing galaxies from spinning apart, which led to the revival of unobserved "dark matter" theory;
Experimental efforts to discover dark matter, including the search for the hypothetical neutralino and other weakly interacting massive particles);
The studies of supernovae at Lawrence Berkeley National Laboratory and Harvard University (under Robert Kirshner) that point to an accelerating universe powered by "dark energy", possibly vacuum energy;
The assertion that the proposed modified Newtonian dynamics hypothesis and the accelerating universe disprove the dark matter theory.
Chapter 2
The Pioneer Anomaly. This discusses the Pioneer 10 and Pioneer 11 space probes, which appear to be veering off course and drifting towards the Sun. At the time the book was written, there was growing speculation as to whether this phenomenon could be explained by a yet-undetermined fault in the probes' systems or whether it was an unidentified effect of gravity. The lead investigator into the progress of the probes is physicist Slava Turyshev of the NASA Jet Propulsion Laboratory in California, who is analysing the data of the probes' launch and progress and "reflying" the missions as computer simulations to try to find a solution to the mystery.
However, in 2012, after the book was published, Turyshev was able to give an explanation to the Pioneer Anomaly.
Chapter 3
Varying Constants. This chapter discusses the reliability of some physical constants, quantities or values that are held to be always fixed. One of these, the fine-structure constant, which calculates the behaviour and amount of energy transmitted in subatomic interactions from light reflection and refraction to nuclear fusion, has been called into question by physicist John Webb of the University of New South Wales, who may have identified differences in the behaviour of light from quasars and light sources today. According to Webb's observations, quasar light appears to refract different shades of colour from light waves emitted today. Brooks also discusses the Oklo natural nuclear fission reactor, in which the natural conditions in caves in Gabon 2 billion years ago caused the uranium there to react. It may be that the amount of energy released was different from today. Both sets of data are subject to ongoing investigation and debate but, Brooks suggests, may indicate that the behaviour of matter and energy can vary radically and essentially as the conditions of the universe change through time.
Chapter 4
Cold Fusion. A review of efforts to create nuclear energy at room temperature using hydrogen that is embedded in a metal crystal lattice. Theoretically, this should not happen, because nuclear fusion requires a huge activation energy to get it started. The effect was first reported by chemists Martin Fleischmann and Stanley Pons in 1989, but attempts to reproduce it over the ensuing months were mostly unsuccessful. Cold fusion research was discredited, and articles on the subject became difficult to publish. But according to the book, a scattering of scientists around the world continue to report positive results, with multiple, independent verifications.
Chapter 5
Life. This chapter describes efforts to define life and how it emerged from inanimate matter (abiogenesis) and even recreate artificial life including: the Miller–Urey experiment by chemists Stanley Miller and Harold Urey at the University of Chicago in 1953 to spark life into a mixture of chemicals by using an electrical charge; Steen Rasmussen's work at the Los Alamos National Laboratory to implant primitive DNA, peptide nucleic acid, into soap molecules and heat them up; and the work of the Institute for Complex Adaptive Matter at the University of California.
Chapter 6
Viking. A discussion of the experiments by engineer Gilbert Levin to search for life on Mars in the 1970s as part of the Viking program. Levin's Labeled Release experiment appeared to conclusively show that life does exist on Mars, but as his results were not supported by the other three Viking biological experiments, they were called into question and eventually not accepted by NASA, which instead hypothesized that the gases observed being generated may not have been a product of living metabolism but of a chemical reaction of hydrogen peroxide. Brooks goes into detail on some of Levin's other experiments and also describes how NASA's subsequent missions to Mars have focused on the geology and climate of the planet rather than looking for life on the planet. (Several missions are searching for water and geological conditions which could support life on Mars currently or in the past.)
Chapter 7
The Wow! Signal. Brooks discusses whether or not the signal spotted by astronomer Jerry R. Ehman at the Big Ear radio telescope of Ohio State University in 1977 was a genuine indication of intelligent life in outer space. This was a remarkably clear signal and Big Ear was the largest and longest running SETI (Search for Extra-Terrestrial Intelligence) radio-telescope project in the world. Brooks goes on to discuss the abandonment of NASA's Microwave Observing Program after government funding was stopped by the efforts of senator Richard Bryan of Nevada. There is no public funding for similar observations today while the SETI Institute, which continues NASA's work, is funded by private donation, as are a number of other initiatives.
Chapter 8
A Giant Virus. Brooks describes the huge and highly resistant Mimivirus found in Bradford, England in 1992 and whether this challenges the traditional view of viruses being inanimate chemicals rather than living things. Mimivirus is not only much larger than most viruses but it also has a much more complex genetic structure. The discovery of Mimivirus has given weight to the theories of microbiologist Philip Bell and others that viral infection was indeed the reason for the emergence from primitive life forms of complex cell structures based on a cell nucleus. (See viral eukaryogenesis.) Study of the behaviour and structure of viruses is ongoing.
Chapter 9
Death. Beginning with the example of Blanding's turtle and certain species of fish, amphibians and reptiles that do not age as they grow older, Brooks discusses theories and research into the evolution of ageing. These include the studies of Peter Medawar and George C. Williams in the 1950s and Thomas Johnson, David Friedman and Cynthia Kenyon in the 1980s claiming that ageing is a genetic process that has evolved as organisms select genes that help them to grow and reproduce over ones that help them to thrive in later life. Brooks also talks about Leonard Hayflick, as well as others, who have observed that cells in culture will at a fixed point in time stop reproducing and die as their DNA eventually becomes corrupted by continuous division, a mechanical process at cell level rather than part of a creature's genetic code.
Chapter 10
Sex. This chapter is a discussion of theories of the evolution of sexual reproduction. The provided explanation is that although asexual reproduction is much easier and more efficient for an organism it is less common than sexual reproduction because having two parents allows species to adapt and evolve more easily to survive in changing environments. Brooks discusses efforts to prove this by laboratory experiment and goes on to discuss alternative theories including the work of Joan Roughgarden of Stanford University who proposes that sexual reproduction, rather than being driven by Charles Darwin's sexual selection in individuals, is a mechanism for the survival of social groupings, which most higher species depend on for survival.
Chapter 11
Free Will. Discusses the experimental investigations into the Neuroscience of free will by Benjamin Libet of the University of California, San Francisco and others, which show that the brain seems to commit to certain decisions before the person becomes aware of having made them and discusses the implications of these findings on our conception of free will.
Chapter 12
The Placebo Effect. This is a discussion of the role of the placebo in modern medicine, including examples such as Diazepam, which, Brooks claims, in some situations appears to work only if the patient knows they are taking it. Brooks describes research into prescription behaviour which appears to show that use of placebos is commonplace. He describes the paper by Asbjørn Hrobjartsson and Peter C. Gøtzsche in the New England Journal of Medicine that challenges use of placebos entirely, and the work of others towards an understanding of the mechanism of the effect.
Chapter 13
Homeopathy. Brooks discusses the work of researcher Madeleine Ennis involving a homeopathic solution which once contained histamine but was diluted to the point where no histamine remained. Brooks conjectures that the results might be explained by some previously unknown property of water. Brooks supports the investigation of documented anomalies even though he is critical of the practice of homeopathy in general, as are many of the scientists he cites, such as Martin Chaplin of South Bank University.
References
Further reading
Chapter 1
External links
13 more things that don't make sense at New Scientist
2008 non-fiction books
Popular science books
Profile Books books
Cold fusion
Dark matter
Free will
Homeopathy
Unexplained phenomena | 13 Things That Don't Make Sense | [
"Physics",
"Chemistry",
"Astronomy"
] | 1,999 | [
"Dark matter",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Unsolved problems in physics",
"Exotic matter",
"Cold fusion",
"Nuclear physics",
"Nuclear fusion",
"Physics beyond the Standard Model",
"Matter"
] |
28,254,396 | https://en.wikipedia.org/wiki/Chemical%20bath%20deposition | Chemical Bath Deposition, also called Chemical Solution Deposition and CBD, is a method of thin-film deposition (solids forming from a solution or gas), using an aqueous precursor solution. Chemical Bath Deposition typically forms films using heterogeneous nucleation (deposition or adsorption of aqueous ions onto a solid substrate), to form homogeneous thin films of metal chalcogenides (mostly oxides, sulfides, and selenides) and many less common ionic compounds. Chemical Bath Deposition produces films reliably, using a simple process with little infrastructure, at low temperature (<100˚C), and at low cost. Furthermore, Chemical Bath Deposition can be employed for large-area batch processing or continuous deposition. Films produced by CBD are often used in semiconductors, photovoltaic cells, and supercapacitors, and there is increasing interest in using Chemical Bath Deposition to create nanomaterials.
Uses
Chemical Bath Deposition is useful in industrial applications because it is extremely cheap, simple, and reliable compared to other methods of thin-film deposition, requiring only aqueous solution at (relatively) low temperatures and minimal infrastructure. The Chemical Bath Deposition process can easily be scaled up to large-area batch processing or continuous deposition.
Chemical Bath Deposition forms small crystals, which are less useful for semiconductors than the larger crystals created by other methods of thin-film deposition, but are more useful for nanomaterials. However, films formed by Chemical Bath Deposition often have better photovoltaic properties (electronic band gap) than films of the same substance formed by other methods.
Historical Uses
Chemical Bath Deposition has a long history but until recently was an uncommon method of thin-film deposition.
In 1865, Justus Liebig published an article describing the use of Chemical Bath Deposition to silver mirrors (to affix a reflective layer of silver to the back of glass to form a mirror), though in the modern day electroplating and vacuum deposition are more common.
Around WWII, lead sulfide (PbS) and lead selenide (PbSe) CBD films are thought to have been used in infrared detectors. These films are photoconductive when formed by Chemical Bath Deposition.
Chemical Bath Deposition has a long history in forming thin films used in semiconductors as well. However the small size of deposited crystals is not ideal for semiconductors and Chemical Bath Deposition is rarely used to manufacture semiconductors in the modern day.
Photovoltaics
Photovoltaic cells are the most common use of films deposited by Chemical Bath Deposition because many films have better photovoltaic properties when deposited via CBD than when deposited by other methods. This is because thin films formed by Chemical Bath Deposition exhibit greater size quantization, and therefore smaller crystals and a greater optical band gap, than thin films formed by other methods. These improved photovoltaic properties are why Cadmium Sulfide (CdS), a thin film common in photovoltaic cells, is the substance most commonly deposited by CBD and the substance most commonly investigated in CBD research papers.
Chemical Bath Deposition is also used to deposit buffer layers in photovoltaic cells because CBD does not damage the substrate.
Optics
Chemical Bath Deposition films can be made to absorb certain wavelengths and reflect or transmit others as desired, because films formed by Chemical Bath Deposition have an electronic band gap which can be precisely controlled. This selective transmission has possible applications in anti-reflection and anti-dazzling coatings, thermal-control window coatings, solar thermal applications, optical filters, polarizers, total reflectors, poultry protection and warming coatings, light emitting diodes, solar cell fabrication, and varistors.
Nanomaterials
Chemical Bath Deposition or electroless deposition has great applications in the field of nanomaterials, because the small crystal size enables formation on the nanometer scale, because the properties and nanostructure of Chemical Bath Deposition films can be precisely controlled, and because the uniform thickness, composition, and geometry of films deposited by Chemical Bath Deposition allows the film to retain the structure of the substrate. The low cost and high reliability of Chemical Bath Deposition even on the nanometer scale is unlike any other thin-film deposition technique. Chemical bath deposition can be used to produce polycrystalline and epitaxial films, porous networks, nanorods, superlattices, and composites.
Process
Chemical Bath Deposition relies on creating a solution such that deposition (changing from an aqueous to a solid substance) will only occur on the substrate, using the method below:
Metal salts and (usually) chalcogenide precursors are added to water to form an aqueous solution containing the metal ions and chalcogenide ions which will form the compound to be deposited.
Temperature, pH, and concentration of salts are adjusted until the solution is in metastable supersaturation, that is until the ions are ready to deposit but can’t overcome the thermodynamic barrier to nucleation (forming solid crystals and precipitating out of the solution).
A substrate is introduced, which acts as a catalyst to nucleation, and the precursor ions adhere onto the substrate, forming a thin crystalline film by one of the two methods described below.
That is, the solution is in a state where the precursor ions or colloidal particles are ‘sticky’, but can’t 'stick' to each other. When the substrate is introduced, the precursor ions or particles stick to it and aqueous ions stick to solid ions, forming a solid compound—depositing to form crystalline films.
The pH, temperature, and composition of the film affect crystal size, and can be used to control the rate of formation and the structure of the film. Other factors affecting crystal size include agitation, illumination, and the thickness of the film upon which the crystal is deposited. Agitating the solution prevents the deposition of suspended colloidal crystals, creating a smoother and more homogenous film with a higher band gap energy. Agitation also affects the formation speed and the temperature at which formation occurs, and can alter the structure of the crystals deposited.
Unlike most other deposition processes, Chemical Bath Deposition tends to create a film of uniform thickness, composition, and geometry (lateral homogeneity) even on irregular (patterned or shaped) substrates because it, unlike other methods of deposition, is governed by surface chemistry. Ions adhere to all exposed surfaces of the substrate and crystals grow from those ions.
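As a numerical illustration of the metastable window described above, the following sketch checks whether a bath is supersaturated with respect to CdS: deposition can proceed once the ionic product exceeds the solubility product, while staying close enough to the threshold to avoid bulk precipitation. This is a minimal sketch; the Ksp value and the ion concentrations are illustrative assumptions, not process data.

```python
# Minimal sketch: the bath is supersaturated when the ionic product
# [Cd2+][S2-] exceeds the solubility product Ksp (ratio > 1).
# The Ksp value and concentrations below are illustrative assumptions.
K_SP_CDS = 1e-27  # illustrative solubility product for CdS, (mol/L)^2

def supersaturation_ratio(cd_conc, s_conc, ksp=K_SP_CDS):
    """Ratio of the ionic product [Cd2+][S2-] to Ksp."""
    return (cd_conc * s_conc) / ksp

ratio = supersaturation_ratio(cd_conc=1e-12, s_conc=1e-14)  # mol/L each
print(f"ion product / Ksp = {ratio:.1f}")                   # 10.0
print("supersaturated" if ratio > 1 else "undersaturated")
```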
Ion-By-Ion Mechanism
In ion-by-ion deposition, aqueous precursor ions react directly to form the thin film.
The conditions are controlled so that few hydroxide ions form, in order to prevent deposition away from the substrate and the precipitation of insoluble metal hydroxide. Sometimes a complexing agent is used to prevent the formation of metal hydroxide. The metal salt and the chalcogenide salt dissociate to form precursor metal cations and chalcogenide anions, which are attracted to and adhere to the substrate by Van der Waals forces. Ions adhere to the substrate, and aqueous ions attach to the growing crystals, forming larger crystals. Thus, this method of deposition results in larger and less uniform crystals than the hydroxide-cluster mechanism.
An example of the reaction, depositing Cadmium Sulfide, is shown below:
Cd^2+ (aq) + S^2- (aq) -> CdS (s) (deposition)
Hydroxide-Cluster Mechanism
Hydroxide-Cluster deposition occurs when hydroxide ions are present in the solution and usually results in smaller and more uniform crystals than ion-by-ion deposition.
When hydroxide ions are present in the solution in quantity, metal hydroxide clusters form. The hydroxide ions act as ligands to the metal cations, forming insoluble colloidal clusters which are both dispersed throughout the solution and deposited onto the substrate. These clusters are attracted to the substrate by Van der Waals forces. The chalcogenide anions react with the metal hydroxide clusters, both dispersed and deposited, to form metal chalcogenide crystals. These crystals form the thin film, which has a crystallite-like structure. In essence, the hydroxide ions act as intermediaries between the metal ions and the chalcogenide ions. Because each hydroxide cluster is a nucleation site, this deposition method usually results in smaller and more uniform crystals than ion-by-ion deposition.
An example of the chemical reaction, depositing Cadmium Sulfide, is shown below:
nCd^2+ (aq) + 2nOH^- (aq) -> [Cd(OH)2]n (s) (formation of cadmium hydroxide cluster)
[Cd(OH)2]n (s) + nS^2- (aq) -> nCdS (s) + 2nOH^- (aq) (replacement reaction)
Substrate
Unlike other methods of thin-film deposition, almost any substrate which is chemically stable in the aqueous solution can theoretically be used in Chemical Bath Deposition. The desired properties of the film usually dictate the choice of substrate; for example, when light transparency is desired various types of glass are used, and in photovoltaic applications CuInSe2 is commonly used. Substrates can also be patterned with monolayers to direct the formation and structure of the thin films. Substrates such as carbonized melamine foam (CFM) and acrylic acid (AA) hydrogels have also been used for some specialized applications.
Thin film deposition
References
Chemistry
Nanomaterials
Materials science | Chemical bath deposition | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,972 | [
"Applied and interdisciplinary physics",
"Thin film deposition",
"Coatings",
"Thin films",
"Materials science",
"nan",
"Nanotechnology",
"Planes (geometry)",
"Nanomaterials",
"Solid state engineering"
] |
28,256,863 | https://en.wikipedia.org/wiki/Relative%20growth%20rate | Relative growth rate (RGR) is growth rate relative to size - that is, a rate of growth per unit time, as a proportion of its size at that moment in time. It is also called the exponential growth rate, or the continuous growth rate.
Rationale
RGR is a concept relevant in cases where the increase in a state variable over time is proportional to the value of that state variable at the beginning of a time period. In terms of differential equations, if S is the current size, and dS/dt its growth rate, then relative growth rate is

RGR = (1/S)(dS/dt).

If the RGR is constant, i.e.,

(1/S)(dS/dt) = k,

a solution to this equation is

S(t) = S0·exp(k·t)
Where:
S(t) is the final size at time t.
S0 is the initial size.
k is the relative growth rate.
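A one-line check that this solution satisfies the defining equation (a sketch in standard calculus notation):

```latex
\frac{1}{S}\frac{dS}{dt}
  = \frac{1}{S_0 e^{kt}} \cdot \frac{d}{dt}\left(S_0 e^{kt}\right)
  = \frac{k\, S_0 e^{kt}}{S_0 e^{kt}} = k .
```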
A closely related concept is doubling time.
Calculations
In the simplest case of observations at two time points, RGR is calculated using the following equation:

RGR = (ln(S2) − ln(S1)) / (t2 − t1),
where:
ln = natural logarithm
t1 = time one (e.g. in days)
t2 = time two (e.g. in days)
S1 = size at time one
S2 = size at time two
When calculating or discussing relative growth rate, it is important to pay attention to the units of time being considered.
For example, if an initial population of S0 bacteria doubles every twenty minutes, then after n twenty-minute intervals the population is given by solving the equation:

S(n) = S0·2^n,

where n is the number of twenty-minute intervals that have passed. However, we usually prefer to measure time in hours or minutes, and it is not difficult to change the units of time. For example, since 1 hour is 3 twenty-minute intervals, the population in one hour is S(3) = 8·S0. The hourly growth factor is 8, which means that for every 1 at the beginning of the hour, there are 8 by the end. Indeed,

S(t) = S0·e^(t·ln(8)),

where t is measured in hours, and the relative growth rate may be expressed as ln(2) ≈ 0.69, or approximately 69% per twenty minutes, and as 3·ln(2) = ln(8) ≈ 2.08, or approximately 208% per hour.
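The two-point formula and the unit conversions above translate directly into code. The following is a minimal sketch (function and variable names are illustrative, not from the cited literature):

```python
import math

def rgr(s1, s2, t1, t2):
    """Two-point relative growth rate: (ln(S2) - ln(S1)) / (t2 - t1)."""
    return (math.log(s2) - math.log(s1)) / (t2 - t1)

# Bacteria doubling every twenty minutes, time in 20-minute intervals:
k_interval = rgr(s1=1.0, s2=2.0, t1=0.0, t2=1.0)  # ln(2) per interval
k_hour = 3 * k_interval                            # 3 intervals per hour

print(f"{k_interval:.3f} per twenty minutes (~69%)")       # 0.693
print(f"{k_hour:.3f} per hour (~208%)")                    # 2.079
print(f"doubling time: {math.log(2) / k_hour:.3f} hours")  # 0.333 h = 20 min
```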
RGR of plants
In plant physiology, RGR is widely used to quantify the speed of plant growth. It is part of a set of equations and conceptual models that are commonly referred to as Plant growth analysis, and is further discussed in that section.
See also
Doubling time
Plant growth analysis
References
Plant physiology
Temporal rates | Relative growth rate | [
"Physics",
"Biology"
] | 464 | [
"Temporal quantities",
"Plant physiology",
"Physical quantities",
"Plants",
"Temporal rates"
] |
22,047,573 | https://en.wikipedia.org/wiki/122%20iron%20arsenide | The 122 iron arsenide unconventional superconductors are part of a new class of iron-based superconductors. They form in the tetragonal I4/mmm, ThCr2Si2 type, crystal structure. The shorthand name "122" comes from their stoichiometry; the 122s have the chemical formula AEFe2Pn2, where AE stands for alkaline earth metal (Ca, Ba, Sr or Eu) and Pn is pnictide (As, P, etc.). These materials become superconducting under pressure and also upon doping. The maximum superconducting transition temperature found to date is 38 K in Ba0.6K0.4Fe2As2. The microscopic description of superconductivity in the 122s is as yet unclear.
Overview
Ever since the discovery of high-temperature (High Tc) superconductivity in the cuprate materials, scientists have worked tirelessly to understand the microscopic mechanisms responsible for its emergence. To this day, no theory can fully explain the high-temperature superconductivity and unconventional (non-s-wave) pairing state found in these materials. However, the interest of the scientific community in understanding the pairing glue for unconventional superconductors—those in which the glue is electronic, i.e. cannot be attributed to the phonon-induced interactions between electrons responsible for conventional BCS theory s-wave superconductivity—has recently been expanded by the discovery of high temperature superconductivity (up to Tc = 55 K) in the doped oxypnictide (1111) superconductors with the chemical composition XOFeAs, where X = La, Ce, Pr, Nd, Sm, Gd, Tb, or Dy. The 122s contain the same iron-arsenide planes as the oxypnictides, but are much easier to synthesize in the form of large single crystals.
There are two different ways in which superconductivity was achieved in the 122s. One method is the application of pressure to the undoped parent compounds. The second is the introduction of other elements (dopants) into the crystal structure in very specific ratios. There are two doping schemes: The first type of doping involves the introduction of holes into the barium or strontium varieties; hole doping refers to the substitution of one ion for another with fewer electrons. Superconducting transition temperatures as high as 38 K have been reported upon substitution of the 40% of the Ba2+ or Sr2+ ions with K+. The second doping method is to directly dope the iron-arsenide layer by replacing iron with cobalt. Superconducting transition temperatures up to ~20 K have been observed in this case.
Unlike the oxypnictides, large single crystals of the 122s can be easily synthesized by using the flux method. The behavior of these materials is interesting in that superconductivity exists alongside antiferromagnetism. Various studies including electrical resistivity, magnetic susceptibility, specific heat, NMR, neutron scattering, X-ray diffraction, Mössbauer spectroscopy, and quantum oscillations have been performed for the undoped parent compounds, as well as the superconducting versions.
Synthesis
One of the important qualities of the 122s is their ease of synthesis; it is possible to grow large single crystals, up to ~5×5×0.5 mm, using the flux method. In a nutshell, the flux method uses some solvent in which the starting materials for a chemical reaction are able to dissolve and eventually crystallize into the desired compound. Two standard methods show up in the literature, each using a different flux. The first method employs tin, while the second uses the binary metallic compound FeAs (iron arsenide).
Structural and magnetic phase transition
The 122s form in the I4/mmm tetragonal structure. For example, the tetragonal unit cell of SrFe2As2, at room temperature, has lattice parameters a = b = 3.9243 Å and c = 12.3644 Å. The planar geometry is reminiscent of the cuprate high-Tc superconductors in which the Cu-O layers are believed to support superconductivity.
These materials undergo a first-order structural phase transition into the Fmmm orthorhombic structure below some characteristic temperature T0 that is compound specific. NMR experiments on the CaFe2As2 show that there is a first-order antiferromagnetic magnetic phase transition at the same temperature; in contrast, the antiferromagnetic transition occurs at a lower temperature in the 1111s. The high temperature magnetic state is paramagnetic, while the low temperature state is an antiferromagnetic state known as a spin-density-wave.
Superconductivity
Superconductivity has been observed in the 122s up to a current maximum Tc of 38 K in Ba1−xKxFe2As2 with x ≈ 0.4. Resistivity and magnetic susceptibility measurements have confirmed the bulk nature of the observed superconducting transition. The onset of superconductivity is correlated with the loss of the spin-density-wave state.
The Tc of 38 K in Ba1−xKxFe2As2 (x ≈ 0.4) superconductor shows the inverse iron isotope effect.
Other compounds with 122 structure
In addition to the iron arsenides, the 122 crystal structure plays an important role for other material systems as well. Three famous examples from the field of heavy fermions are CeCu2Si2 (the "first unconventional superconductor", discovered in 1978),
URu2Si2 (which is also a heavy fermion superconductor but is the focus of active present research due to the so-called "hidden-order phase" below 17.5 K),
and YbRh2Si2 (one of the prime examples of quantum criticality).
References
Superconductors
Correlated electrons
High-temperature superconductors
Iron compounds
Arsenides | 122 iron arsenide | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,280 | [
"Superconductors",
"Superconductivity",
"Condensed matter physics",
"Correlated electrons"
] |
22,051,259 | https://en.wikipedia.org/wiki/Mechanical%20alloying | Mechanical alloying (MA) is a solid-state and powder processing technique involving repeated cold welding, fracturing, and re-welding of blended powder particles in a high-energy ball mill to produce a homogeneous material. Originally developed to produce oxide-dispersion strengthened (ODS) nickel- and iron-base superalloys for applications in the aerospace industry, MA has now been shown to be capable of synthesizing a variety of equilibrium and non-equilibrium alloy phases starting from blended elemental or pre-alloyed powders. The non-equilibrium phases synthesized include supersaturated solid solutions, metastable crystalline and quasicrystalline phases, nanostructures, and amorphous alloys. The method is sometimes classified as a surface severe plastic deformation method used to achieve nanomaterials.
Metal mixes
Mechanical alloying is akin to metal powder processing, where metals may be mixed to produce superalloys. Mechanical alloying occurs in three steps. First, the alloy materials are combined in a ball mill and ground to a fine powder. A hot isostatic pressing (HIP) process is then applied to simultaneously compress and sinter the powder. A final heat treatment stage helps remove existing internal stresses produced during any cold compaction which may have been used. This produces an alloy suitable for high heat turbine blades and aerospace components. In combination with powder spheroidization, this technology can be used to rapidly develop new alloy powders for additive manufacturing.
Design
Design parameters include type of mill, milling container, milling speed, milling time, type, size, and size distribution of the grinding medium, ball-to-powder weight ratio, extent of filling the vial, milling atmosphere, process control agent, temperature of milling, and the reactivity of the species.
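As a simple illustration of how such a parameter set might be recorded and used, the sketch below defines a hypothetical milling-run record and derives the ball-to-powder weight ratio (BPR) from the charge. The field names and values are assumptions for the example, not a standard specification.

```python
from dataclasses import dataclass

@dataclass
class MillingRun:
    """Hypothetical record of mechanical-alloying design parameters."""
    mill_type: str
    ball_mass_g: float
    powder_mass_g: float
    speed_rpm: float
    time_h: float
    atmosphere: str
    process_control_agent: str

    @property
    def ball_to_powder_ratio(self) -> float:
        # BPR: the ball-to-powder weight ratio named in the text above.
        return self.ball_mass_g / self.powder_mass_g

run = MillingRun(mill_type="planetary", ball_mass_g=100.0, powder_mass_g=10.0,
                 speed_rpm=300.0, time_h=20.0, atmosphere="argon",
                 process_control_agent="stearic acid")
print(f"BPR = {run.ball_to_powder_ratio:.0f}:1")  # 10:1, a common choice
```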
Process
The process of mechanical alloying involves the production of composite powder particles by:
Using a high energy mill to favor plastic deformation required for cold welding and reduce the process times
Using a mixture of elemental and master alloy powders (the latter to reduce the activity of the element, since it is known that the activity in an alloy or a compound could be orders of magnitude less than in a pure metal)
Eliminating the use of surface-active agents which would produce fine pyrophoric powder as well as contaminate the powder
Relying on a constant interplay between welding and fracturing to yield a powder with a refined internal structure, typical of very fine powders normally produced, but having an overall particle size which is relatively coarse, and therefore stable.
Milling
During high-energy milling the powder particles are repeatedly flattened, cold welded, fractured and rewelded. Whenever two steel balls collide, some powder is trapped between them. Typically, around 1000 particles with an aggregate weight of about 0.2 mg are trapped during each collision. The force of the impact plastically deforms the powder particles, leading to work hardening and fracture. The new surfaces thus created enable the particles to weld together; this leads to an increase in particle size. Since in the early stages of milling the particles are soft (when using either a ductile-ductile or a ductile-brittle material combination), their tendency to weld together and form large particles is high. A broad range of particle sizes develops, with some particles up to three times larger than the starting particles. The composite particles at this stage have a characteristic layered structure consisting of various combinations of the starting constituents. With continued deformation, particles become work hardened and fracture by a fatigue failure mechanism and/or by the fragmentation of fragile flakes.
References
Bhadeshia, H. K. D. H. Recrystallisation of practical mechanically alloyed iron-based and nickel-base superalloys, Mater. Sci. Eng. A223, 64-77 (1997)
P. R. Soni, Mechanical Alloying: Fundamentals and Applications, Cambridge Int Science Publishing, 2000 - Science - 151 pages.
External links
Mechanical alloying, comprehensive information from University of Cambridge.
Alloys
Metallurgy
Metallurgical processes
Welding | Mechanical alloying | [
"Chemistry",
"Materials_science",
"Engineering"
] | 826 | [
"Welding",
"Metallurgy",
"Metallurgical processes",
"Materials science",
"Chemical mixtures",
"Alloys",
"nan",
"Mechanical engineering"
] |
22,052,160 | https://en.wikipedia.org/wiki/Enilconazole | Enilconazole (synonyms imazalil, chloramizole) is a fungicide widely used in agriculture, particularly in the growing of citrus fruits. Trade names include Freshgard, Fungaflor, and Nuzone.
Enilconazole is also used in veterinary medicine as a topical antimycotic.
History
In 1983, enilconazole was first introduced by Janssen Pharmaceutica and it has since consistently been registered as an antifungal postharvest agent. Shortly after its introduction, enilconazole was used for seed treatment in 1984 and later used in chicken hatcheries in 1990. Like any fungicide, it was used to protect crops from becoming diseased and unable to yield a profitable harvest. Today, it continues to be utilized as an agricultural aid for its contribution to maintaining crop integrity and production output.
Use on crops
Enilconazole is found on a wide variety of fruits and vegetables, but it is primarily used on tubers for storage. Common fungi that are attracted to tubers are Fusarium spp, Phoma spp, and Helminthosporium solani which depreciate the crop quality. In 1984, when enilconazole was initially used for seed treatment, barley was a main target to mitigate crop loss due to disease.
In addition, the antifungal agent is commonly used on citrus fruits.
In the EU its use as a fungicide is permitted within some limits and imported fruits may contain limited amounts.
Hazards
In 1999, based on studies in rodents, enilconazole was identified as "likely to be carcinogenic in humans" under The Environmental Protection Agency's Draft Guidelines for Carcinogenic Assessment. However, because pesticide residues are well below the concentrations associated with risk, the lifetime cancer risk estimate associated with citrus fruit contamination was valued as insignificant.
The EPA has established an equivalent toxicity level for human exposure at 6.1 x 10−2 mg/kg/day. This level placed it in Categories II, II, and IV for oral, dermal, and inhalation toxicity, respectively. For irritation, it is in Category I for the eyes (highly irritating), but not for the skin. As for oral toxicity, when the fungicide is transferred via food into the body, it must be metabolized before it can do any damage.
Under California's Proposition 65, enilconazole is listed as "known to the State to cause cancer".
The EPA determined there is no substantial risk of enilconazole toxicity through food and water exposure. Enilconazole has a very minute degree of mobility, so its level of drinking water contamination is quite low. The estimated environmental concentration (EEC) found the levels to be 0.072 ppb for surface water, which is much less than the 500 ppb comparison level for drinking water.
References
External links
Imazalil page at EnvironmentalChemistry.com
Diagram showing metabolism of enilconazole
Pesticide Residues in Food
Allyl compounds
Aromatase inhibitors
Chloroarenes
Ethers
Fungicides
Imidazole antifungals
Lanosterol 14α-demethylase inhibitors
Veterinary drugs | Enilconazole | [
"Chemistry",
"Biology"
] | 660 | [
"Fungicides",
"Functional groups",
"Organic compounds",
"Ethers",
"Biocides"
] |
22,055,207 | https://en.wikipedia.org/wiki/Anion-exchange%20membrane | An anion exchange membrane (AEM) is a semipermeable membrane generally made from ionomers and designed to conduct anions but reject gases such as oxygen or hydrogen.
Applications
Anion exchange membranes are used in electrolytic cells and fuel cells to separate reactants present around the two electrodes while transporting the anions essential for the cell operation. An important example is the hydroxide anion exchange membrane used to separate the electrodes of a direct methanol fuel cell (DMFC) or direct-ethanol fuel cell (DEFC).
Poly(fluorenyl-co-aryl piperidinium) (PFAP)-based anion exchange materials (electrolyte membrane and electrode binder) with high ion conductivity and durability under alkaline conditions have been demonstrated for use in extracting hydrogen from water. Performance was 7.68 A/cm2 at 2 V, some 6x the performance of existing materials. Its yield is about 1.2 times that of commercial proton-exchange membrane technology (6 A/cm2), and it does not require the use of expensive rare-earth elements. The system works by increasing the specific surface area.
See also
Alkaline anion-exchange membrane fuel cells
Alkaline fuel cell
Anion exchange membrane electrolysis
Artificial membrane
Gas diffusion electrode
Glossary of fuel cell terms
Ion exchange
Ion-exchange membranes
Proton-exchange membrane
References
Fuel cells
Electrochemistry
Polymers
Hydrogen technologies
Membrane technology | Anion-exchange membrane | [
"Chemistry",
"Materials_science"
] | 302 | [
"Separation processes",
"Electrochemistry",
"Membrane technology",
"Polymer chemistry",
"Electrochemistry stubs",
"Polymers",
"Physical chemistry stubs"
] |
22,056,752 | https://en.wikipedia.org/wiki/Quantum%20phases | Quantum phases are quantum states of matter at zero temperature. Even at zero temperature a quantum-mechanical system has quantum fluctuations and therefore can still support phase transitions. As a physical parameter is varied, quantum fluctuations can drive a phase transition into a different phase of matter. An example of a canonical quantum phase transition is the well-studied Superconductor Insulator Transition in disordered thin films which separates two quantum phases having different symmetries. Quantum magnets provide another example of QPT. The discovery of new quantum phases is a pursuit of many scientists. These phases of matter exhibit properties and symmetries which can potentially be exploited for technological purposes and the benefit of mankind.
The difference between quantum and classical states of matter is that, classically, the phase a material exhibits ultimately depends on a change in temperature and/or density or some other macroscopic property of the material, whereas quantum phases can change at zero temperature in response to a change in a different kind of order parameter – temperature does not have to change. This order parameter is a parameter in the Hamiltonian of the system, unlike in the classical case. The order parameter plays a role in quantum phases analogous to its role in classical phases. Some quantum phases are the result of a superposition of many other quantum phases.
See also
Quantum phase transition
Classical phase transitions
Quantum critical point
References
Condensed matter physics | Quantum phases | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 275 | [
"Quantum phases",
"Materials science stubs",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Condensed matter stubs",
"Matter"
] |
31,051,559 | https://en.wikipedia.org/wiki/Legendre%20pseudospectral%20method | The Legendre pseudospectral method for optimal control problems is based on Legendre polynomials. It is part of the larger theory of pseudospectral optimal control, a term coined by Ross. A basic version of the Legendre pseudospectral method was originally proposed by Elnagar and his coworkers in 1995. Since then, Ross, Fahroo and their coworkers have extended, generalized and applied the method for a large range of problems. An application that has received wide publicity is the use of their method for generating real time trajectories for the International Space Station.
Fundamentals
There are three basic types of Legendre pseudospectral methods:
One based on Gauss-Lobatto points
First proposed by Elnagar et al. and subsequently extended by Fahroo and Ross to incorporate the covector mapping theorem.
Forms the basis for solving general nonlinear finite-horizon optimal control problems.
Incorporated in several software products
DIDO, OTIS, PSOPT
One based on Gauss-Radau points
First proposed by Fahroo and Ross and subsequently extended (by Fahroo and Ross) to incorporate a covector mapping theorem.
Forms the basis for solving general nonlinear infinite-horizon optimal control problems.
Forms the basis for solving general nonlinear finite-horizon problems with one free endpoint.
One based on Gauss points
First proposed by Reddien
Forms the basis for solving finite-horizon problems with free endpoints
Incorporated in several software products
GPOPS, PROPT
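All three variants collocate the optimal control problem at Gauss-type quadrature points. As a minimal illustration of that first ingredient, the sketch below computes Legendre–Gauss–Lobatto nodes and quadrature weights using the standard Newton iteration on the Legendre recurrence; it is illustrative only and is not taken from any of the software packages named here.

```python
# Sketch: Legendre-Gauss-Lobatto (LGL) nodes and quadrature weights on
# [-1, 1], found as the roots of (1 - x^2) P_N'(x) by Newton iteration.
import numpy as np

def lgl_nodes_weights(N):
    """Return the N+1 LGL nodes (ascending) and quadrature weights."""
    x = np.cos(np.pi * np.arange(N + 1) / N)   # Chebyshev initial guess
    P = np.zeros((N + 1, N + 1))               # P[:, k] holds P_k(x)
    x_old = 2.0 * np.ones_like(x)
    while np.max(np.abs(x - x_old)) > 1e-14:
        x_old = x
        P[:, 0] = 1.0
        P[:, 1] = x_old
        for k in range(2, N + 1):              # Bonnet recurrence
            P[:, k] = ((2*k - 1) * x_old * P[:, k-1]
                       - (k - 1) * P[:, k-2]) / k
        # Newton step for the roots of x*P_N(x) - P_{N-1}(x),
        # whose derivative simplifies to (N+1)*P_N(x).
        x = x_old - (x_old * P[:, N] - P[:, N-1]) / ((N + 1) * P[:, N])
    w = 2.0 / (N * (N + 1) * P[:, N] ** 2)     # LGL quadrature weights
    idx = np.argsort(x)
    return x[idx], w[idx]

x, w = lgl_nodes_weights(8)
print(np.sum(w))        # 2.0, the length of [-1, 1]
print(np.dot(w, x**8))  # ~2/9, exact: LGL is exact to degree 2N - 1
```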
Software
The first software to implement the Legendre pseudospectral method was DIDO in 2001. Subsequently, the method was incorporated in the NASA code OTIS. Years later, many other software products emerged at an increasing pace, such as PSOPT, PROPT and GPOPS.
Flight implementations
The Legendre pseudospectral method (based on Gauss-Lobatto points) has been implemented in flight by NASA on several spacecraft through the use of the software, DIDO. The first flight implementation was on November 5, 2006, when NASA used DIDO to maneuver the International Space Station to perform the Zero Propellant Maneuver. The Zero Propellant Maneuver was discovered by Nazareth Bedrossian using DIDO.
See also
DIDO
Chebyshev pseudospectral method
Ross–Fahroo pseudospectral methods
Ross–Fahroo lemma
Covector mapping principle
References
Optimal control
Numerical analysis
Control theory | Legendre pseudospectral method | [
"Mathematics"
] | 495 | [
"Applied mathematics",
"Control theory",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Approximations",
"Dynamical systems"
] |
31,055,418 | https://en.wikipedia.org/wiki/Derwent%20Drug%20File | Derwent Drug File, formerly known as Ringdoc, is an information monitoring, abstracting and documentation service, specifically designed to meet the information needs of people requiring information on pharmaceuticals. The Derwent Drug File provides all relevant and important information for the whole life-cycle of a drug, from drug design to use, and concentrates information about the drug itself and its use. The online file contains over 1.5 million records from 1964 to present.
References
Pharmaceutical industry | Derwent Drug File | [
"Chemistry",
"Biology"
] | 102 | [
"Pharmaceutical industry",
"Pharmacology",
"Life sciences industry"
] |
31,056,363 | https://en.wikipedia.org/wiki/Comcast%20Corp.%20v.%20FCC | Comcast Corp. v. FCC, 600 F.3d 642 (D.C. Cir., 2010), is a case at the United States Court of Appeals for the District of Columbia holding that the Federal Communications Commission (FCC) does not have ancillary jurisdiction over the content delivery choices of Internet service providers, under the language of the Communications Act of 1934. In so holding, the Court vacated a 2008 order issued by the FCC that asserted jurisdiction over network management policies and censured Comcast from interfering with its subscribers' use of peer-to-peer software. The case has been regarded as an important precedent on whether the FCC can regulate network neutrality.
Background
In 2007, several subscribers of Comcast's high-speed Internet service discovered that Comcast was interfering with their use of peer-to-peer networking applications, particularly BitTorrent. On behalf of users, the non-profit advocacy organizations Free Press and Public Knowledge filed a complaint with the FCC and claimed that such actions by an ISP ignored traditional network neutrality principles. The complaint stated that Comcast's actions violated the FCC Internet Policy Statement of 2005, particularly the statement's principle that "consumers are entitled to access the lawful Internet content of their choice... [and] to run applications and use services of their choice". Comcast defended its interference with consumers' peer-to-peer programs as necessary to manage scarce network capacity.
Following the complaint, the FCC issued an order censuring Comcast from interfering with subscribers' use of peer-to-peer software. This was the FCC's second attempt to enforce its network neutrality policy. The order began with the FCC stating that it had jurisdiction over Comcast's network management practices under the Communications Act of 1934, which granted the FCC the power to "perform any and all acts, make such rules and regulations, and issue such orders, not inconsistent with [the Act], as may be necessary in the execution of its functions". This general high-level power is known as ancillary jurisdiction. Next, the FCC ruled that Comcast impeded consumers' ability to access content and use applications of their choice. Additionally, because other options were available for Comcast to manage its network capacity without discriminating against peer-to-peer programs, the FCC found that this method of bandwidth management violated federal policy.
Comcast initially complied with the order, but requested judicial review of the FCC's 2008 policy statement at the United States Court of Appeals for the District of Columbia.
Circuit court ruling
The Circuit Court held that the FCC failed to argue convincingly that its sanction against Comcast, which in turn was a regulation of the content delivery choices of an Internet service provider, could be justified as part of the ancillary jurisdiction allowed under the 1934 Communications Act. The Court relied on a two-part test for ancillary authority, laid out in the precedent American Library Association v. FCC: The FCC may exercise ancillary authority only if "(1) the Commission's general jurisdictional grant under Title I [of the Communications Act] covers the regulated subject and (2) the regulations are reasonably ancillary to the Commission's effective performance of its statutorily mandated responsibilities."
Although Comcast conceded that the FCC satisfied the first prong of that test, the court ruled that the FCC failed to satisfy the second prong. The FCC could not show that its action of barring Comcast from interfering with its customers' use of particular web services was reasonably ancillary to the effective performance of its statutorily-mandated authority. Instead, the FCC relied on a Congressional statement of policy and various provisions of the Communications Act, neither of which the Court found created "statutorily mandated responsibilities." Additionally, the court noted that if it accepted the FCC's argument, it would "virtually free the Commission from its congressional tether," thereby providing the Commission unbounded authority to impose regulations on Internet service providers.
Impact and subsequent events
After the ruling by the Circuit Court, Comcast stated that "We are gratified by the Court's decision today to vacate the previous FCC's order. [...] Comcast remains committed to the FCC's existing open Internet principles, and we will continue to work constructively with this FCC as it determines how best to increase broadband adoption and preserve an open and vibrant Internet." The Commission naturally had the opposite view, stating that "Today's court decision invalidated the prior Commission's approach to preserving an open Internet. But the Court in no way disagreed with the importance of preserving a free and open Internet; nor did it close the door to other methods for achieving this important end."
While the Circuit Court found that the FCC lacked the power to enforce these network neutrality rules as a matter of ancillary jurisdiction, it hinted that it would accept separate jurisdictional arguments under other provisions of the 1934 Communications Act or the 1996 Telecommunications Act. This prompted the FCC to establish new rules regarding internet regulations in 2010. Because of the ruling in this case, those new rules were presented in reference to other provisions of the statutes, mostly Section 706 of the 1996 Act, as well as other types of ancillary authority via Titles II and VI of the Act. The updated rules were released in December 2010 as the FCC Open Internet Order of 2010. These rules would forbid cable broadband and DSL Internet service providers from blocking or slowing online applications. They would also prohibit mobile carriers from blocking VoIP applications such as Skype or blocking websites in their entirety. These mobile restrictions were fewer than those imposed on cable and DSL.
The industry was unhappy with those new rules as well, with Verizon taking the lead in another court challenge just one month later. This led to the Circuit Court case Verizon Communications Inc. v. FCC in 2014, with a charge that the FCC had again surpassed its regulatory authority.
References
See also
Susan Crawford, Comcast v. FCC - "Ancillary Jurisdiction" Has to Be Ancillary to Something (2010)
William McQuillen and Todd Shields, "Comcast Wins in Case on FCC Net Neutrality Powers" (2010).
Fred von Lohmann, "Court Rejects FCC Authority Over the Internet" (2010).
Abigail Phillips, "FCC "Ancillary" Authority to Regulate the Internet? Don't Count on It" (2011).
Federal Communications Commission litigation
2010 in United States case law
Net neutrality
Computer case law
United States Court of Appeals for the District of Columbia Circuit cases | Comcast Corp. v. FCC | [
"Engineering"
] | 1,432 | [
"Net neutrality",
"Computer networks engineering"
] |
31,058,611 | https://en.wikipedia.org/wiki/Hollywood%20%28database%29 | Hollywood is an RNA splicing database containing data for the splicing of orthologous genes in different species.
See also
Alternative splicing
EDAS
AspicDB
References
External links
http://hollywood.mit.edu
Genetics databases
Gene expression
Spliceosome
RNA splicing | Hollywood (database) | [
"Chemistry",
"Biology"
] | 64 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
31,058,693 | https://en.wikipedia.org/wiki/Intronerator | The Intronerator is a database of alternatively spliced genes and a database of introns for Caenorhabditis elegans.
See also
Alternative splicing
AspicDB
EDAS
Hollywood (database)
List of biological databases
References
External links
A working copy of the Intronerator no longer exists as of 2003. Equivalent functions can be performed with the U.C. Santa Cruz genome browser:
http://genome.ucsc.edu/cgi-bin/hgTracks?clade=worm&organism=C._elegans
Biological databases
Gene expression
Spliceosome
RNA splicing | Intronerator | [
"Chemistry",
"Biology"
] | 131 | [
"Gene expression",
"Bioinformatics",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Biological databases"
] |
31,058,755 | https://en.wikipedia.org/wiki/ProSAS | ProSAS is a database describing the effects of splicing on the structure of a protein.
See also
Alternative splicing
Protein structure
References
External links
http://www.bio.ifi.lmu.de/ProSAS
Protein databases
Gene expression
Protein structure
RNA splicing
Spliceosome | ProSAS | [
"Chemistry",
"Biology"
] | 64 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Structural biology",
"Molecular biology",
"Biochemistry",
"Protein structure"
] |
31,060,586 | https://en.wikipedia.org/wiki/Mapping%20class%20group%20of%20a%20surface | In mathematics, and more precisely in topology, the mapping class group of a surface, sometimes called the modular group or Teichmüller modular group, is the group of homeomorphisms of the surface viewed up to continuous (in the compact-open topology) deformation. It is of fundamental importance for the study of 3-manifolds via their embedded surfaces and is also studied in algebraic geometry in relation to moduli problems for curves.
The mapping class group can be defined for arbitrary manifolds (indeed, for arbitrary topological spaces) but the 2-dimensional setting is the most studied in group theory.
The mapping class group of surfaces are related to various other groups, in particular braid groups and outer automorphism groups.
History
The mapping class group appeared in the first half of the twentieth century. Its origins lie in the study of the topology of hyperbolic surfaces, and especially in the study of the intersections of closed curves on these surfaces. The earliest contributors were Max Dehn and Jakob Nielsen: Dehn proved finite generation of the group, and Nielsen gave a classification of mapping classes and proved that all automorphisms of the fundamental group of a surface can be represented by homeomorphisms (the Dehn–Nielsen–Baer theorem).
The Dehn–Nielsen theory was reinterpreted in the mid-seventies by Thurston who gave the subject a more geometric flavour and used this work to great effect in his program for the study of three-manifolds.
More recently the mapping class group has been by itself a central topic in geometric group theory, where it provides a testing ground for various conjectures and techniques.
Definition and examples
Mapping class group of orientable surfaces
Let S be a connected, closed, orientable surface and Homeo^+(S) the group of orientation-preserving, or positive, homeomorphisms of S. This group has a natural topology, the compact-open topology. It can be defined easily by a distance function: if we are given a metric d on S inducing its topology then the function defined by

δ(f, g) = sup_{x in S} d(f(x), g(x))

is a distance inducing the compact-open topology on Homeo^+(S). The connected component of the identity for this topology is denoted Homeo_0(S). By definition it is equal to the homeomorphisms of S which are isotopic to the identity. It is a normal subgroup of the group of positive homeomorphisms, and the mapping class group of S is the group

Mod(S) = Homeo^+(S) / Homeo_0(S).

This is a countable group.
If we modify the definition to include all homeomorphisms we obtain the extended mapping class group Mod^±(S), which contains the mapping class group as a subgroup of index 2.
This definition can also be made in the differentiable category: if we replace all instances of "homeomorphism" above with "diffeomorphism" we obtain the same group, that is the inclusion Diff^+(S) ⊂ Homeo^+(S) induces an isomorphism between the quotients by their respective identity components.
The mapping class groups of the sphere and the torus
Suppose that S is the unit sphere in R^3. Then any homeomorphism of S is isotopic to either the identity or to the restriction to S of the symmetry in the plane z = 0. The latter is not orientation-preserving and we see that the mapping class group of the sphere is trivial, and its extended mapping class group is Z/2Z, the cyclic group of order 2.
The mapping class group of the torus T^2 = R^2/Z^2 is naturally identified with the modular group SL(2, Z). It is easy to construct a morphism Φ : SL(2, Z) → Mod(T^2): every A in SL(2, Z) induces a diffeomorphism of T^2 via x ↦ Ax. The action of diffeomorphisms on the first homology group H1(T^2) ≅ Z^2 gives a left-inverse Ψ to the morphism Φ (proving in particular that it is injective) and it can be checked that Ψ is injective, so that Φ and Ψ are inverse isomorphisms between SL(2, Z) and Mod(T^2). In the same way, the extended mapping class group of T^2 is GL(2, Z).
Mapping class group of surfaces with boundary and punctures
In the case where S is a compact surface with a non-empty boundary ∂S the definition of the mapping class group needs to be more precise. The group Homeo^+(S, ∂S) of homeomorphisms relative to the boundary is the subgroup of those which restrict to the identity on the boundary, and the subgroup Homeo_0(S, ∂S) is the connected component of the identity. The mapping class group is then defined as

Mod(S) = Homeo^+(S, ∂S) / Homeo_0(S, ∂S).
A surface with punctures is a compact surface with a finite number of points removed ("punctures"). The mapping class group of such a surface is defined as above (note that the mapping classes are allowed to permute the punctures, but not the boundary components).
Mapping class group of an annulus
Any annulus is homeomorphic to the subset A = {z in C : 1 ≤ |z| ≤ 2} of the complex plane. One can define a diffeomorphism τ of A by the following formula:

τ(z) = z e^{2iπ|z|},

which is the identity on both boundary components |z| = 1 and |z| = 2. The mapping class group of A is then generated by the class of τ.
Braid groups and mapping class groups
Braid groups can be defined as the mapping class groups of a disc with punctures. More precisely, the braid group on n strands is naturally isomorphic to the mapping class group of a disc with n punctures.
The Dehn–Nielsen–Baer theorem
If S is closed and φ is a homeomorphism of S then we can define an automorphism φ_* of the fundamental group π1(S, x0) as follows: fix a path γ between x0 and φ(x0) and for a loop α based at x0 representing an element of π1(S, x0) define φ_*(α) to be the element of the fundamental group associated to the loop obtained by following γ, then φ ∘ α, then the inverse of γ. This automorphism depends on the choice of γ, but only up to conjugation. Thus we get a well-defined map from Homeo(S) to the outer automorphism group Out(π1(S)). This map is a morphism and its kernel is exactly the subgroup Homeo_0(S). The Dehn–Nielsen–Baer theorem states that it is in addition surjective. In particular, it implies that:
The extended mapping class group Mod^±(S) is isomorphic to the outer automorphism group Out(π1(S)).
The image of the mapping class group Mod(S) is an index 2 subgroup of the outer automorphism group, which can be characterised by its action on homology.
The conclusion of the theorem does not hold when S has a non-empty boundary (except in a finite number of cases). In this case the fundamental group is a free group Fn and the outer automorphism group Out(Fn) is strictly larger than the image of the mapping class group via the morphism defined in the previous paragraph. The image is exactly those outer automorphisms which preserve each conjugacy class in the fundamental group corresponding to a boundary component.
The Birman exact sequence
This is an exact sequence relating the mapping class group of surfaces with the same genus and boundary but a different number of punctures. It is a fundamental tool which allows to use recursive arguments in the study of mapping class groups. It was proven by Joan Birman in 1969. The exact statement is as follows.
Let S be a compact surface and x a point of S. There is an exact sequence

1 → π1(S, x) → Mod(S ∖ {x}) → Mod(S) → 1.

In the case where S itself has punctures the mapping class group Mod(S ∖ {x}) must be replaced by the finite-index subgroup of mapping classes fixing x.
Elements of the mapping class group
Dehn twists
If c is an oriented simple closed curve on S and one chooses a closed tubular neighbourhood N of c then there is a homeomorphism f from N to the canonical annulus A defined above, sending c to a circle with the counterclockwise orientation. This is used to define a homeomorphism τ_c of S as follows: on S ∖ N it is the identity, and on N it is equal to f^{-1} ∘ τ ∘ f. The class of τ_c in the mapping class group Mod(S) does not depend on the choice of f made above, and the resulting element is called the Dehn twist about c. If c is not null-homotopic this mapping class is nontrivial, and more generally the Dehn twists defined by two non-homotopic curves are distinct elements in the mapping class group.
In the mapping class group of the torus identified with SL(2, Z) the Dehn twists correspond to unipotent matrices. For example, the matrix

(1 1)
(0 1)

corresponds to the Dehn twist about a horizontal curve in the torus.
The Nielsen–Thurston classification
There is a classification of the mapping classes on a surface, originally due to Nielsen and rediscovered by Thurston, which can be stated as follows. An element φ in Mod(S) is either:
of finite order (i.e. there exists n > 0 such that φ^n is the identity),
reducible: there exists a set of disjoint closed curves on S which is preserved by the action of φ;
or pseudo-Anosov.
The main content of the theorem is that a mapping class which is neither of finite order nor reducible must be pseudo-Anosov, which can be defined explicitly by dynamical properties.
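On the torus, where the mapping class group is identified with SL(2, Z) as above, the trichotomy can be read off the trace of the matrix: a non-central matrix of trace t is of finite order when |t| < 2, reducible when |t| = 2, and Anosov (the torus analogue of pseudo-Anosov) when |t| > 2. A minimal sketch of this classification (illustrative only):

```python
import numpy as np

def classify_torus_class(A):
    """Nielsen-Thurston type of the torus class given by A in SL(2, Z)."""
    A = np.asarray(A, dtype=int)
    assert round(np.linalg.det(A)) == 1, "matrix must be in SL(2, Z)"
    if np.array_equal(A, np.eye(2, dtype=int)) or \
       np.array_equal(A, -np.eye(2, dtype=int)):
        return "finite order"        # +/- identity: order 1 or 2
    t = abs(int(np.trace(A)))
    if t < 2:
        return "finite order"        # elliptic
    if t == 2:
        return "reducible"           # parabolic: preserves a curve class
    return "Anosov"                  # torus analogue of pseudo-Anosov

S = np.array([[0, -1], [1, 0]])      # a quarter turn, order 4
T = np.array([[1, 1], [0, 1]])       # Dehn twist about a horizontal curve
A = np.array([[2, 1], [1, 1]])       # trace 3

print(classify_torus_class(S))       # finite order
print(classify_torus_class(T))       # reducible
print(classify_torus_class(A))       # Anosov
```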
Pseudo-Anosov diffeomorphisms
The study of pseudo-Anosov diffeomorphisms of a surface is fundamental. They are the most interesting diffeomorphisms, since finite-order mapping classes are isotopic to isometries and thus well understood, and the study of reducible classes indeed essentially reduces to the study of mapping classes on smaller surfaces which may themselves be either finite order or pseudo-Anosov.
Pseudo-Anosov mapping classes are "generic" in the mapping class group in various ways. For example, a random walk on the mapping class group will end on a pseudo-Anosov element with a probability tending to 1 as the number of steps grows.
Actions of the mapping class group
Action on Teichmüller space
Given a punctured surface S (usually without boundary) the Teichmüller space T(S) is the space of marked complex (equivalently, conformal or complete hyperbolic) structures on S. These are represented by pairs (X, f) where X is a Riemann surface and f : S → X a homeomorphism, modulo a suitable equivalence relation. There is an obvious action of the group Homeo^+(S) on such pairs, which descends to an action of Mod(S) on Teichmüller space.
This action has many interesting properties; for example it is properly discontinuous (though not free). It is compatible with various geometric structures (metric or complex) with which T(S) can be endowed. In particular, the Teichmüller metric can be used to establish some large-scale properties of the mapping class group, for example that the maximal quasi-isometrically embedded flats in Mod(S) are of dimension 3g − 3 + p (where g is the genus and p the number of punctures).
The action extends to the Thurston boundary of Teichmüller space, and the Nielsen-Thurston classification of mapping classes can be seen in the dynamical properties of the action on Teichmüller space together with its Thurston boundary. Namely:
Finite-order elements fix a point inside Teichmüller space (more concretely this means that any mapping class of finite order in Mod(S) can be realised as an isometry for some hyperbolic metric on S);
Pseudo-Anosov classes fix the two points on the boundary corresponding to their stable and unstable foliation and the action is minimal (has a dense orbit) on the boundary;
Reducible classes do not act minimally on the boundary.
Action on the curve complex
The curve complex of a surface S is a complex whose vertices are isotopy classes of simple closed curves on S. The action of the mapping class group Mod(S) on the vertices carries over to the full complex. The action is not properly discontinuous (the stabiliser of a simple closed curve is an infinite group).
This action, together with combinatorial and geometric properties of the curve complex, can be used to prove various properties of the mapping class group. In particular, it explains some of the hyperbolic properties of the mapping class group: while as mentioned in the previous section the mapping class group is not a hyperbolic group it has some properties reminiscent of those.
Other complexes with a mapping class group action
Pants complex
The pants complex of a compact surface S is a complex whose vertices are the pants decompositions of S (isotopy classes of maximal systems of disjoint simple closed curves). The action of Mod(S) extends to an action on this complex. This complex is quasi-isometric to Teichmüller space endowed with the Weil–Petersson metric.
Markings complex
The stabilisers of the mapping class group's action on the curve and pants complexes are quite large. The markings complex is a complex whose vertices are markings of S, which are acted upon by, and have trivial stabilisers in, the mapping class group Mod(S). It is (in opposition to the curve or pants complex) a locally finite complex which is quasi-isometric to the mapping class group.
A marking is determined by a pants decomposition α1, ..., α_{3g−3} and a collection of transverse curves β1, ..., β_{3g−3} such that every one of the βi intersects at most one of the αj, and this "minimally" (this is a technical condition which can be stated as follows: if αi and βi are contained in a subsurface homeomorphic to a torus then they intersect once, and if the surface is a four-holed sphere they intersect twice). Two distinct markings are joined by an edge if they differ by an "elementary move", and the full complex is obtained by adding all possible higher-dimensional simplices.
Generators and relations for mapping class groups
The Dehn–Lickorish theorem
The mapping class group is generated by the subset of Dehn twists about all simple closed curves on the surface. The Dehn–Lickorish theorem states that it is sufficient to select a finite number of those to generate the mapping class group. This generalises the fact that SL(2, Z) is generated by the matrices

(1 1)       (1 0)
(0 1)  and  (1 1).
In particular, the mapping class group of a surface is a finitely generated group.
The smallest number of Dehn twists that can generate the mapping class group of a closed surface of genus g ≥ 2 is 2g + 1; this was proven later by Humphries.
Finite presentability
It is possible to prove that all relations between the Dehn twists in a generating set for the mapping class group can be written as combinations of a finite number among them. This means that the mapping class group of a surface is a finitely presented group.
One way to prove this theorem is to deduce it from the properties of the action of the mapping class group on the pants complex: the stabiliser of a vertex is seen to be finitely presented, and the action is cofinite. Since the complex is connected and simply connected it follows that the mapping class group must be finitely presented. There are other ways of getting finite presentations, but in practice the only one to yield explicit relations for all genera is that described in this paragraph with a slightly different complex instead of the curve complex, called the cut system complex.
An example of a relation between Dehn twists occurring in this presentation is the lantern relation.
Other systems of generators
There are other interesting systems of generators for the mapping class group besides Dehn twists. For example, Mod(S) can be generated by two elements or by involutions.
Cohomology of the mapping class group
If S is a surface of genus g with b boundary components and n punctures then the virtual cohomological dimension of Mod(S) is finite; for a closed surface of genus g ≥ 2 it is equal to 4g − 5, and for a surface of genus g > 0 with n > 0 punctures it is equal to 4g − 4 + n.
The first homology of the mapping class group is finite and it follows that the first cohomology group is finite as well.
Subgroups of the mapping class groups
The Torelli subgroup
As singular homology is functorial, the mapping class group acts by automorphisms on the first homology group H1(S, Z). This is a free abelian group of rank 2g if S is closed of genus g. This action thus gives a linear representation Mod(S) → GL(2g, Z).
This map is in fact a surjection with image equal to the integer points Sp(2g, Z) of the symplectic group. This comes from the fact that the intersection number of closed curves induces a symplectic form on the first homology, which is preserved by the action of the mapping class group. The surjectivity is proven by showing that the images of Dehn twists generate Sp(2g, Z).
The kernel of the morphism Mod(S) → Sp(2g, Z) is called the Torelli group of S. It is a finitely generated, torsion-free subgroup and its study is of fundamental importance for its bearing on both the structure of the mapping class group itself (since the arithmetic group Sp(2g, Z) is comparatively very well understood, a lot of facts about Mod(S) boil down to a statement about its Torelli subgroup) and applications to 3-dimensional topology and algebraic geometry.
Residual finiteness and finite-index subgroups
An example of application of the Torelli subgroup is the following result:
The mapping class group is residually finite.
The proof proceeds first by using residual finiteness of the linear group Sp(2g, Z), and then, for any nontrivial element of the Torelli group, constructing by geometric means subgroups of finite index which do not contain it.
An interesting class of finite-index subgroups is given by the kernels of the morphisms:

Φ_n : Mod(S) → Sp(2g, Z/nZ)

The kernel of Φ_n is usually called a congruence subgroup of Mod(S). It is a torsion-free group for all n ≥ 3 (this follows easily from a classical result of Minkowski on linear groups and the fact that the Torelli group is torsion-free).
Finite subgroups
The mapping class group has only finitely many isomorphism classes of finite subgroups, as follows from the fact that the finite-index congruence subgroup ker(Φ_3) is torsion-free, as discussed in the previous paragraph. Moreover, this also implies that any finite subgroup of Mod(S) is isomorphic to a subgroup of the finite group Sp(2g, Z/3Z).
A bound on the order of finite subgroups can also be obtained through geometric means. The solution to the Nielsen realisation problem implies that any such group is realised as the group of isometries of a hyperbolic surface of genus g. Hurwitz's bound then implies that the maximal order is equal to 84(g − 1).
General facts on subgroups
The mapping class groups satisfy the Tits alternative: that is, any subgroup of them either contains a non-abelian free subgroup or is virtually solvable (in fact abelian).
Any subgroup which is not reducible (that is, it does not preserve a set of isotopy classes of disjoint simple closed curves) must contain a pseudo-Anosov element.
Linear representations
It is an open question whether the mapping class group is a linear group or not. Besides the symplectic representation on homology explained above there are other interesting finite-dimensional linear representations arising from topological quantum field theory. The images of these representations are contained in arithmetic groups which are not symplectic, and this makes it possible to construct many more finite quotients of Mod(S).
In the other direction there are lower bounds for the dimension of a (putative) faithful representation.
Notes
Citations
Sources
Geometric group theory
Geometric topology | Mapping class group of a surface | [
"Physics",
"Mathematics"
] | 3,722 | [
"Geometric group theory",
"Group actions",
"Geometric topology",
"Topology",
"Symmetry"
] |
49,377,873 | https://en.wikipedia.org/wiki/Magnetic%20current | Magnetic current is, nominally, a current composed of moving magnetic monopoles. It has the unit volt. The usual symbol for magnetic current is k, which is analogous to i for electric current. Magnetic currents produce an electric field analogously to the production of a magnetic field by electric currents. Magnetic current density, which has the unit V/m2 (volt per square meter), is usually represented by the symbols M^t and M^i. The superscripts indicate total and impressed magnetic current density. The impressed currents are the energy sources. In many useful cases, a distribution of electric charge can be mathematically replaced by an equivalent distribution of magnetic current. This artifice can be used to simplify some electromagnetic field problems. It is possible to use both electric current densities and magnetic current densities in the same analysis.
The direction of the electric field produced by magnetic currents is determined by the left-hand rule (opposite direction as determined by the right-hand rule) as evidenced by the negative sign in the equation

∇ × E = −M^t
Magnetic displacement current
Magnetic displacement current or more properly the magnetic displacement current density is the familiar term ∂B/∂t. It is one component of M^t:

M^t = M^i + ∂B/∂t

where
M^t is the total magnetic current.
M^i is the impressed magnetic current (energy source).
Electric vector potential
The electric vector potential, F, is computed from the magnetic current density, M^i, in the same way that the magnetic vector potential, A, is computed from the electric current density. Examples of use include finite diameter wire antennas and transformers.

magnetic vector potential: A(r, t) = (μ0/4π) ∫_Ω J(r′, t_r)/|r − r′| dΩ′
electric vector potential: F(r, t) = (ε0/4π) ∫_Ω M^i(r′, t_r)/|r − r′| dΩ′

where F at point r and time t is calculated from magnetic currents at distant position r′ at an earlier time t_r. The location r′ is a source point within volume Ω that contains the magnetic current distribution. The integration variable, dΩ′, is a volume element around position r′. The earlier time t_r is called the retarded time, and calculated as

t_r = t − |r − r′|/c,

where c is the speed of light.
Retarded time accounts for the time required for electromagnetic effects to propagate from point r′ to point r.
Phasor form
When all the functions of time are sinusoids of the same frequency, the time domain equation can be replaced with a frequency domain equation. Retarded time is replaced with a phase term:

F(r) = (ε0/4π) ∫_Ω M^i(r′) e^{−jk|r − r′|}/|r − r′| dΩ′

where F and M^i are phasor quantities and k is the wave number.
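The phasor-form integral can be approximated numerically by summing over discrete volume elements. The sketch below is a naive Riemann-sum evaluation of the formula as reconstructed above; the element data and observation point are illustrative assumptions.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # permittivity of free space, F/m

def electric_vector_potential(r, elements, k):
    """F(r) as a discrete sum over volume elements of magnetic current.

    elements: iterable of (r_prime, M_vec, dV), where M_vec is a complex
    3-vector in V/m^2 on a volume element dV in m^3; k is the wave
    number in rad/m.
    """
    F = np.zeros(3, dtype=complex)
    for r_prime, M_vec, dV in elements:
        R = np.linalg.norm(r - r_prime)  # distance |r - r'|
        F += (EPS0 / (4 * np.pi)) * M_vec * np.exp(-1j * k * R) / R * dV
    return F

# One small element of z-directed magnetic current at the origin,
# observed 1 m away along the x-axis, at a wavelength of 1 m:
elements = [(np.zeros(3), np.array([0, 0, 1 + 0j]), 1e-9)]
print(electric_vector_potential(np.array([1.0, 0.0, 0.0]), elements,
                                k=2 * np.pi))
```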
Magnetic frill generator
A distribution of magnetic current, commonly called a magnetic frill generator, may be used to replace the driving source and feed line in the analysis of a finite diameter dipole antenna. The voltage source and feed line impedance are subsumed into the magnetic current density. In this case, the magnetic current density is concentrated in a two dimensional surface so the units of are volts per meter.
The inner radius of the frill, $a$, is the same as the radius of the dipole. The outer radius, $b$, is chosen so that
$$Z_{\text{L}} = \frac{\eta_0}{2\pi} \ln\left(\frac{b}{a}\right),$$
where
$Z_{\text{L}}$ = impedance of the feed transmission line (not shown).
$\eta_0$ = impedance of free space.
The equation is the same as the equation for the impedance of a coaxial cable. However, a coaxial cable feed line is not assumed and not required.
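The required outer radius follows directly from this impedance relation. A minimal Python sketch, assuming the relation above and a free-space impedance of about 376.73 Ω; the 50 Ω feed impedance and 1 mm dipole radius are illustrative assumptions only:

```python
import math

ETA_0 = 376.73  # impedance of free space, in ohms

def frill_outer_radius(a, z_feed):
    """Outer radius b of the magnetic frill for a dipole of radius a,
    inverting z_feed = (ETA_0 / (2 * pi)) * ln(b / a)."""
    return a * math.exp(2 * math.pi * z_feed / ETA_0)

a = 0.001                        # 1 mm dipole radius (illustrative)
b = frill_outer_radius(a, 50.0)  # 50 ohm feed line (illustrative)
print(f"b = {b * 1e3:.3f} mm (b/a = {b / a:.2f})")  # b/a is about 2.30
```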
The amplitude of the magnetic current density phasor is given by:
$$\vec{M} = \frac{V_{\text{s}}}{\rho \ln\left(\frac{b}{a}\right)}\,\hat{u}_\phi$$
with
$$a \le \rho \le b$$
where
$\rho$ = radial distance from the axis.
$a \le \rho \le b$.
$V_{\text{s}}$ = magnitude of the source voltage phasor driving the feed line.
See also
Surface equivalence principle
Notes
References
Electromagnetism
Antennas | Magnetic current | [
"Physics",
"Engineering"
] | 657 | [
"Electromagnetism",
"Antennas",
"Physical phenomena",
"Telecommunications engineering",
"Fundamental interactions"
] |
49,378,265 | https://en.wikipedia.org/wiki/Reynolds%20and%20Branson | Reynolds & Branson Leeds was a business based at 13 Briggate and 14 Commercial Street in Leeds, England. The business lasted from 1816 to 1972. Edward Matterson managed the company in 1822, and William West F.R.S. took over in 1833. The National Archives' records about the company include a day book, a sales ledger, and prescription books. The records were created by Reynolds & Branson Ltd. Reynolds & Branson was registered in July 1898 as a limited corporation with a capital of £34,000 in shares of £10 each by Messrs. R. Reynolds and F. W. Branson. No remuneration was given to Mr. R. Reynolds, but £700 per annum was given to each of the others. In 1890, Richard Reynolds's son, Richard Freshfield (Fred) Reynolds, was made a partner.
The firm was in the business of wholesale and retail for chemists and surgical instrument makers.
Origins
The original company can be traced back to 1816 (see Grace's Guide, which is the leading source of historical information on industry and manufacturing in Britain). Edward Matterson was a druggist who ran the firm after being employed by Allen and Hanburys. He was educated at Leeds Grammar School. In 1822 the company moved to 13 Briggate, Leeds. In 1833 William West F.R.S. took over the company after Matterson went bankrupt (see The bankrupt directory: being a complete register of all the bankruptcies, with their residences, trades, and dates when they appeared in the London gazette, from December 1820 to April 1843). In 1839 Thomas Harvey joined the business, when William West left the company to pursue analytical chemistry. The firm was then renamed Thomas Harvey. Harvey was born at Barnsley into a Quaker family. His father was a linen manufacturer. The second of five children, he was educated at Barnsley Grammar School in Yorkshire in 1812. From 1822 to 1825 Harvey studied at Ackworth and afterwards became a chemist's apprentice to David Doncaster of Sheffield. Upon Doncaster's death he trained at Thomas Southalls in Birmingham for eight years. In 1837 Harvey settled in Leeds as a chemist. He became an anti-slavery campaigner and philanthropist.
Richard Reynolds
In 1844 Richard Reynolds joined the company as an apprentice. He was born in 1829 and was the eldest son of an apothecary who died when the boy was only four years old. From 1850 to 1851 he attended the School of Pharmacy in London, where he took first prizes in chemistry, materia medica and botany in a contest held by the Pharmaceutical Society. He then went to Mr. Henry Deane at Clapham for two years and returned to the Leeds business. In 1854 Richard Reynolds joined Thomas Harvey as a partner and the company then became Harvey & Reynolds. In 1861 the firm was joined by a Mr. Fowler and became Harvey, Reynolds & Fowler. By 1864 Thomas Harvey had retired (as noted in 1884). At the age of 72, he undertook an arduous journey to Canada on a Quaker mission, but it exhausted him. He died on December 25 at his home at Ashwood, Headingley Lane. Mr. Haw then joined the business and the company became Haw & Reynolds. In 1867 the business was listed as Haw, Reynolds, & Co. In 1883 Frederick W. Branson joined the business. An 1884 advertisement listed the partnership between Reynolds & Branson (late Harvey, Reynolds & co).
1898–1914
The firm of Reynolds & Branson was registered in July 1898 as a limited corporation with a capital of £34,000 in shares of £10 each by Messrs. R. Reynolds and F. W. Branson. No remuneration was given to Mr. R. Reynolds, but £700 per annum was given to each of the others. In 1890, Richard Reynolds's son, Richard Freshfield (Fred) Reynolds, was made a partner. The firm was in the business of wholesale and retail chemists and surgical instrument makers.[5] Frederick W. Branson now focused on the development of scientific apparatus and chemical glassware for the business, and the company flourished under his management. Frederick Hartridge attended the University of Leeds in 1905 and then the School of the Pharmaceutical Society in London in 1909. In 1901, during the outbreak of lead poisoning at Morley, the company was called in, and Frederick W. Branson made recommendations which freed Morley from this scourge. In collaboration with A. F. Dimmock, M.D., he contributed to the British Medical Association meeting in 1903 a paper, "A new method for the determination of uric acid in urine" (Br. Med. J., 1903, 2, 585). For this process he devised a correction scale which was contributed to the British Pharmaceutical Conference in 1904. At the 1905 meeting of the British Medical Association a further paper by these two authors was read, "A rapid and simple process for the estimation of uric acid" (ibid., 1905, 2, 1104), in which uric acid was precipitated and the precipitate measured in a specially graduated tube. In 1914, in collaboration with Dr. Gordon Sharp, he contributed a paper to the British Pharmaceutical Conference on the activity of digitalis leaves and the stability and standardisation of tinctures of digitalis.
The war years
During the First World War, Branson actively pursued efforts to standardize the size and shape of chemical glassware. In 1916, he was elected as an inaugural member of the Society of Glass Technology. He organized research and published works on these topics, and sought to secure in Great Britain the manufacturing process for the glass required for the equipment of munition factories.[8][9] Branson contributed a paper on the composition of some types of chemical glassware to the Society of Chemical Industry (J. Soc. Chem. Ind., 1915, 34, 471) in collaboration with his son Frederick Hartridge, and a paper to the Transactions of the Society of Glass Technology (1919, 3, 249), "A proposed standard formula for a glass for lamp workers". Branson was chairman until his retirement in 1932. His son, Frederick Hartridge, an Associate of the Royal Institute of Chemistry (AIC), then became chairman and managing director of Reynolds & Branson.[5][7] He ran the company for 20 years, until his untimely death on 10 February 1952. Frederick Hartridge had appointed his three sons and his daughter as directors of Reynolds & Branson: Frederick Norman, the eldest son, who attended Ilkley Grammar, an all-boys school; Eileen, his only daughter; and Peter Orchard, director of Phospherade, the firm's mineral water company, who attended Roundhay, an all-boys grammar school.
During the Second World War, Peter served in the RAF with 54 Squadron (Spitfires) and in 1942 married Rita Blackburn. He went to Australia with 54 Squadron at the end of 1942, where he met Patricia A. Grant, his second wife, whom he married in Leeds in 1948. Peter emigrated to Australia in 1953, when his father died leaving the bulk of the business to his eldest brother, Frederick Norman. He set up his own pharmacy in Blackburn South in 1955 and later became a podiatrist, retiring at the age of 90. Richard Orchard, who also attended Roundhay grammar school, served in the Second World War as an RAF pilot and died on active service in 1945. The eldest son, Frederick Norman Branson, became chairman and managing director of Reynolds & Branson in 1953 and ran the company for almost 20 years, by the end of which it had a workforce of 150 people. In 1972 Frederick Norman Branson sold the business to Barclay, which later sold it to the asset strippers Slater & Walker.
Reynolds & Branson Chronological Time Frame
1822, Edward Matterson druggist, dealer in paint and colours, Location 12, Briggate, Leeds Baines's Directory and Gazette.
1829, Edward Matterson, druggist located 13 Briggate & 4 Blundel place. Pigot's Directory
1833, William West F.R.S. took over the company after Matterson went bankrupt,
1839, Thomas Harvey, chemist and druggist, 5 Commercial Street. Leeds.
1841, Thomas Harvey, chemist and druggist, 13 Briggate. Leeds.
1854, Reynolds returned to Leeds as partner with Harvey in the chemist business and the firm became Harvey & Reynolds.
1856, Harvey & Reynolds, chemist and druggist, 13 Briggate. Leeds.
1861, Harvey, Reynolds & Fowler. Chemist and druggist, 10 Briggate. Leeds.
1864, Haw & Reynolds. Chemist and druggist. Briggate. Leeds. as Thomas Harvey had retired.
1867, Haw, Reynolds, & Co. Chemist and druggist. Briggate. Leeds.
1872, Haw, Reynolds & Co. Chemist. 14 Commercial Street, and 13 Briggate. Leeds.
1886, Reynolds & Branson. Makers of the first short clinical thermometer.
1891, Reynolds & Branson, chemist and druggist, 13 Briggate. Leeds
1901, Reynolds & Branson, chemist and druggist, 13 Briggate. Leeds
1911, Reynolds & Branson, chemist and druggist, 13 Briggate. Leeds
X-ray pioneers
On 24 July 1896, Reynolds and Branson attended the Photographic Convention of the United Kingdom at Leeds. The firm was represented in various sessions. During the session on Orthochromatic Photography, Branson gave a presentation on X-ray apparatus that included a well-received demonstration, reported as follows:"... Mr. Branson, of Messrs. Reynolds and Branson, who had made a special study of X-ray work, gave a demonstration which for lucidity and completeness has rarely been equalled. In the course of his remarks he fully explained the construction and exhaustion of the tubes, and showed various forms and explained his method of making calcium tungstate, which was to mix solutions of sodium tungstate and calcium chloride, collect, wash, and dry the precipitate of calcium tungstate which was formed, and then to fuse this in a small muffle furnace at the temperature of the melting point of cast-iron, and reduce to small crystals in a mortar, mix with varnish, and coat a screen. With such a screen in contact with the plate he had been able to show osseous structure of the hand, measuring only one-hundredth of an inch, with an exposure of one minute. A comparison of the fluorescent appearance of the three salts, calcium tungstate, platinocyanide of barium, and platinocyanide of potassium, was shown, the first and last being the best for photographic work, as the fluorescence was blue, and the barium salt was most satisfactory for visual work, as the fluorescence was yellow."
At the same convention, during the session on Photography at the Seaside the firm displayed some of their product line that included X-ray apparatus, as follows:"Reynolds & Branson, of Commercial Street, Leeds, had a very high-class show, special prominence being given to apparatus for X-ray work. A case of lenses of all the leading makers, together with a very well-made photo-micrographic outfit, a cabinet of chemicals, another of cameras, and all the little odds and ends of apparatus, made up a very fine show."
Reynolds & Branson Trade Catalogues
Reynolds and Branson trade catalogues listed:
Reynolds and Branson, 1887. Handy Guide to Surgical Instruments and Appliances etc. Reynolds and Branson, 14 Commercial Street, Leeds. Gloucester: John Bellows. 1887. 246p.
Reynolds and Branson, 1890. Illustrated Catalogue of Chemical and Physical Apparatus, Pure Chemicals and Reagents. Reynolds and Branson, 14 Commercial Street, Leeds. 1890. 200p.
Reynolds and Branson, 1903. Catalogue of Special Preparations. Reynolds and Branson, 14 Commercial Street, Leeds. 1903. 64p.
Reynolds and Branson, 1907. Illustrated Catalogue of Optical Lanterns, Slides, Compressed Gases and Accessory Apparatus. Reynolds and Branson, 14 Commercial Street, Leeds. Leeds: McCorquodale & Co. 1907. 204p.
Reynolds and Branson, 1908. Illustrated Catalogue of Surgical Instruments and Appliances. Reynolds and Branson, 14 Commercial Street, Leeds. Leeds: Chorley & Pickersgill. 1908. 119p.
Reynolds and Branson, 1912. Catalogue of Special Preparations, Surgical Instruments, Trusses etc. Reynolds and Branson, 14 Commercial Street, Leeds. 1912. 84p.
Reynolds and Branson, 1912–1920. Catalogue of Laboratory Fittings and Furniture. Reynolds and Branson. 1912–1920. 29p.
Reynolds & Branson Patents
Patents include: #1120 in 1885, #16373 in 1893, #14102 in 1899.
Improvements in photographic ‘shutters’ for instantaneous photography. #1650. 1883.
Means or apparatus for measuring quantities of highly volatile liquids. No. 3490. 1904.
References
External links
Reynolds and Branson @Grace's Guide
Science Museum Group List of items manufactured by R&B
Oak Ridge museum, Electroscope manufactured by R&B
Camera manufactured by R&B
http://historiccamera.com/cgi-bin/librarium2/pm.cgi?action=app_display&app=datasheet&app_id=2517
http://www.thoresby.org.uk/content/people/harvey.php
http://www.huntsearch.gla.ac.uk/cgi-bin/foxweb/huntsearch/DetailedResults.fwx?collection=instruments&searchTerm=105190
http://collections.wakefield.gov.uk/photographs/index.asp?page=hitlist&mwsquery=(%7BCategory%7D%3D%7Bphotographs%7D)&mwsQueryTemplate=%5B%7Bcontrol%3DthemeList%7D%7Bindex%3DClassification%7D%7Brelation%3D%3D*%7D%5D&themeList=Community+Life+/+Health+and+Welfare+/+Workhouses+and+children%27s+homes
British companies established in 1816
British companies disestablished in 1972
1816 establishments in England
1972 disestablishments in England
Companies based in Leeds
Defunct manufacturing companies of the United Kingdom
Optical instruments
X-ray equipment manufacturers
X-ray pioneers
Microscopes
Microscope components
Scientific equipment | Reynolds and Branson | [
"Chemistry",
"Technology",
"Engineering"
] | 2,987 | [
"Microscopes",
"Measuring instruments",
"Microscopy"
] |
49,392,319 | https://en.wikipedia.org/wiki/Editas%20Medicine | Editas Medicine, Inc., (formerly Gengine, Inc.), is a clinical-stage biotechnology company which is developing therapies for rare diseases based on CRISPR gene editing technology. Editas headquarters is located in Cambridge, Massachusetts and has facilities in Boulder, Colorado.
History
Editas Medicine was originally founded with the name "Gengine, Inc." in September 2013 by Feng Zhang of the Broad Institute, Jennifer Doudna of the University of California, Berkeley, and George Church, David Liu, and J. Keith Joung of Harvard University, with funding from Third Rock Ventures, Polaris Partners and Flagship Ventures; the name was changed to the current "Editas Medicine" two months later. Doudna quit in June 2014 over legal differences concerning intellectual property of Cas9.
In August 2015, the company raised $120 million in Series B funding from Bill Gates and 13 other investors. It went public on 2 February 2016, via an initial public offering that raised $94 million.
The company entered into a strategic collaboration with Juno Therapeutics in 2015 to combine its CRISPR-Cas9 technology with Juno's experience in creating chimeric antigen receptor and high-affinity T cell receptor therapeutics to the end of developing cancer therapeutics. Juno was later acquired by Celgene, which was in turn acquired by Bristol Myers Squibb.
The company announced in 2015 that it was planning a clinical trial in 2017 using CRISPR gene editing techniques to treat Leber congenital amaurosis type 10 (LCA10), a rare genetic illness that causes blindness. On 30 November 2018, the FDA gave permission to start the trials, under the investigational name EDIT-101 (also known as AGN-151587). In September 2021, a statement from Editas claimed that preliminary results from clinical trials were promising and support clinical benefits of EDIT-101 treatment.
In March 2020, Editas, in partnership with Allergan, was the first to use CRISPR to try to edit DNA inside a person's body (in vivo). As part of the clinical trial, a patient who was nearly blind as a result of Leber's congenital amaurosis received an intravitreal injection containing a harmless virus carrying CRISPR gene-editing instructions. Five months later, Editas reworked its deal with Allergan's owner AbbVie and regained full rights to their range of eye disease treatment therapies, including EDIT-101 for the treatment of LCA10.
In 2019, the company was building new chemistry facilities in Boulder, Colorado.
Katrine Bosley was CEO until 2019, when she was replaced by board member Cynthia Collins. Collins was replaced in 2021 by James Mullen, who had been board chairman. Gilmore O'Neill, former CMO of Sarepta Therapeutics, became CEO on June 1, 2022, with Mullen staying on as executive chairman of the board.
Research
Editas works with two different CRISPR nucleases, Cas9 and Cas12a.
EDIT-101 is a CRISPR based gene therapy for treatment of Leber congenital amaurosis, which is currently in clinical trials.
EDIT-301 is an experimental potential treatment utilizing the firm's Cas12a editing technology for sickle cell disease and beta-thalassemia. In 2019 the firm reported early success in research on the drug. In December 2020, it filed an IND application for treatment of sickle cell disease. In January 2021, it said it had received clearance from the FDA for phase 1 safety studies.
References
2013 establishments in Massachusetts
2016 initial public offerings
Genomics companies
Pharmaceutical companies of the United States
Companies based in Cambridge, Massachusetts
Companies listed on the Nasdaq
Gene therapy
Health care companies based in Massachusetts
Pharmaceutical companies established in 2013
Biotechnology companies of the United States | Editas Medicine | [
"Engineering",
"Biology"
] | 775 | [
"Gene therapy",
"Genetic engineering"
] |
33,615,890 | https://en.wikipedia.org/wiki/Scottish%20Informatics%20and%20Computer%20Science%20Alliance | The Scottish Informatics and Computer Science Alliance (SICSA) is a "research pool" funded by the Scottish Funding Council. A research pool is a collaboration of Scottish university departments whose broad objective is to create a coherent research community that will improve the quality of research carried out in Scotland in the pool-related discipline.
SICSA's goals are to improve the quality of research in informatics and computer science across universities in Scotland, to promote the transfer of research results to benefit companies and the public sector in Scotland and to create a university community that represents all aspects of Scottish Informatics and Computer Science.
SICSA was launched in December 2008 and is funded with an award of £14.5 million from the Scottish Funding Council with SICSA member universities providing matching funding. It is managed by the SICSA Executive which is composed of: Director of Research (who is also the SICSA Director), the Director of the SICSA Graduate Academy, the Director of Knowledge Exchange, the Director of Education, the SICSA Executive Officer and the SICSA Executive Assistant.
Membership
SICSA has adopted an inclusive membership policy, and all universities in Scotland that have computer science or informatics departments/schools are eligible to be members of SICSA. The following departments/schools were SICSA members.
University of Aberdeen. Department of Computing Science
University of Abertay. School of Computing and Engineering Systems & Institute of Arts, Media and Computer Games
University of Dundee. School of Computing
University of Edinburgh. School of Informatics
Edinburgh Napier University. Institute for Informatics & Digital Innovation
University of Glasgow. School of Computing Science
Glasgow Caledonian University. School of Engineering and Computing
Heriot-Watt University. School of Mathematical and Computer Sciences
The Robert Gordon University. School of Computing
University of St Andrews. School of Computer Science
University of Stirling. Department of Computer Science and Mathematics
University of Strathclyde. Department of Computer and Information Sciences
University of the West of Scotland. School of Computing
University of the Highlands and Islands. School of Computing
Research themes
SICSA's research activities are organized around four broad research themes
Next-generation Internet
Complex Systems Engineering
Modelling and Abstraction
Multi-modal Interaction
Each research theme is managed by a theme leader and organizes workshops and events for the computer science research community across Scotland.
SICSA staff appointments
To help achieve its goal of improving research quality, SICSA has funded the appointment of 30 academic staff across Scottish universities. Appointments have been made in Edinburgh, Glasgow, St Andrews, Stirling, Strathclyde, Abertay and Aberdeen Universities.
The SICSA Graduate Academy
The SICSA Graduate Academy (SGA) is an international graduate school in informatics and computer science. It provides funding for PhD students, supports graduate training and summer schools and runs a Distinguished Visitor scheme which supports visits from distinguished computer scientists to Scotland. The SGA runs an annual conference for all PhD students working in computer science and informatics in Scotland.
SICSA Prize studentships
SICSA has made available 80 prize studentships for PhD study in Scotland to students from around the world, with the aim of attracting the research leaders of the future to work in Scotland.
SICSA summer schools
SICSA provides support for PhD students in Scotland to attend international summer schools and sponsors up to 3 international summer schools per year for PhD students in Scotland. Nine summer schools were sponsored at Scottish universities between June 2009 and August 2012.
SICSA Distinguished Visitor Scheme
The aim of the SICSA Distinguished Visitor scheme is to provide support for excellent researchers from around the world to interact with the Scottish Computer Science and Informatics research community.
Interaction with industry
SICSA interacts extensively with local industry in Scotland with a view to transferring technology and informing the industrial community of informatics and computer science research in Scotland. An annual "Demofest" is organized that showcases Scottish University research. Specific projects with industry are concerned with smart tourism and migrating high-value software services to the cloud.
In 2013, SICSA developed a portfolio of funding mechanisms which have the specific aim of increasing exchanges between academics and industry. These mechanisms include: SICSA Postgraduate Industry Internship Program; SICSA Early Career Industry Fellowship; SICSA Distinguished Industrial Visitors Fellowship; SICSA Elevate - Incubator Program; SICSA Proof of Concept Program; SICSA Team Based Industrial Placements Program.
Computer science education
Uniquely amongst the SFC research pools, SICSA has extended its remit to include education as well as research. The aim of this move is to allow SICSA to become the single representative body for all aspects of university informatics/computer science research and education in Scotland.
SICSA maintains information about undergraduate and postgraduate taught courses in Scotland for potential students and provides information for industry on opportunities for graduate recruitment.
References
External links
Official SICSA Web site
Academia in Scotland
Academic organisations based in the United Kingdom
College and university associations and consortia in the United Kingdom
Computer science education in the United Kingdom
Computer science organizations
Information science
Information technology organisations based in the United Kingdom
Research institutes in Scotland
2008 establishments in Scotland
Scientific organizations established in 2008 | Scottish Informatics and Computer Science Alliance | [
"Technology"
] | 1,004 | [
"Computer science",
"Computer science organizations"
] |
33,615,932 | https://en.wikipedia.org/wiki/C45H73NO15 | {{DISPLAYTITLE:C45H73NO15}}
The molecular formula C45H73NO15 (molar mass: 868.06 g/mol, exact mass: 867.4980 u) may refer to:
Solamargine
Solanine
Molecular formulas | C45H73NO15 | [
"Physics",
"Chemistry"
] | 64 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
33,617,190 | https://en.wikipedia.org/wiki/MiraDry | miraDry is a microwave-based medical device developed by Miramar Labs which is used in the treatment of axillary hyperhidrosis. It was approved by the United States Food and Drug Administration (FDA) in 2011 and was also approved in Europe. miraDry selectively destroys axillary sweat glands without affecting the superficial layers of the skin. In addition to sweat glands, miraDry destroys hair follicles in the axillary region regardless of hair color. It is about 72.5 to 90% effective and sweat is reduced by about 82% on average. It also reduces axillary hair by about 75%. The effects of miraDry are noticeable almost immediately and are long-lasting or permanent. A case of death due to miraDry caused by necrotizing fasciitis that was complicated by streptococcal toxic shock syndrome has been reported.
References
Medical devices
Microwave technology | MiraDry | [
"Biology"
] | 185 | [
"Medical devices",
"Medical technology"
] |
33,622,470 | https://en.wikipedia.org/wiki/List%20of%20valid%20argument%20forms | Of the many and varied argument forms that can possibly be constructed, only very few are valid argument forms. In order to evaluate these forms, statements are put into logical form. Logical form replaces any sentences or ideas with letters to remove any bias from content and allow one to evaluate the argument without any bias due to its subject matter.
Being a valid argument does not necessarily mean the conclusion will be true. It is valid because if the premises are true, then the conclusion has to be true. This can be proven for any valid argument form using a truth table which shows that there is no situation in which there are all true premises and a false conclusion.
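The truth-table test can be made mechanical. The following minimal Python sketch (an illustration, not part of the article) declares an argument form valid exactly when no assignment of truth values makes every premise true while the conclusion is false; the same function can be applied to any of the propositional forms listed below:

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """Brute-force truth-table test: valid iff no row has all
    premises true and the conclusion false."""
    for row in product([False, True], repeat=n_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # counterexample row found
    return True

implies = lambda p, q: (not p) or q

# Modus ponens: If A then B; A; therefore B.
print(is_valid([lambda a, b: implies(a, b), lambda a, b: a],
               lambda a, b: b, 2))  # True

# Affirming the consequent (invalid): If A then B; B; therefore A.
print(is_valid([lambda a, b: implies(a, b), lambda a, b: b],
               lambda a, b: a, 2))  # False
```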
Valid syllogistic forms
In syllogistic logic, there are 256 possible ways to construct categorical syllogisms using the A, E, I, and O statement forms in the square of opposition. Of the 256, only 24 are valid forms. Of the 24 valid forms, 15 are unconditionally valid, and 9 are conditionally valid.
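These counts can be checked mechanically. The Python sketch below (an illustration, not part of the article) uses the standard set semantics of the A, E, I and O statements; since their truth depends only on which of the eight Venn regions of the three terms are occupied, enumerating the 256 occupancy patterns decides validity exactly. Without existential import it reports the 15 unconditionally valid forms:

```python
from itertools import product

REGIONS = list(product([0, 1], repeat=3))  # membership flags for (S, M, P)
S, M, P = 0, 1, 2
FIGURES = {1: ((M, P), (S, M)), 2: ((P, M), (S, M)),
           3: ((M, P), (M, S)), 4: ((P, M), (M, S))}

def truth(form, x, y, occupied):
    """Truth of a categorical statement about terms x and y, given the
    set of occupied (non-empty) Venn regions."""
    x_not_y = any(r in occupied for r in REGIONS if r[x] and not r[y])
    x_and_y = any(r in occupied for r in REGIONS if r[x] and r[y])
    return {'A': not x_not_y,    # All x are y
            'E': not x_and_y,    # No x is y
            'I': x_and_y,        # Some x is y
            'O': x_not_y}[form]  # Some x is not y

def syllogism_valid(maj, mnr, con, fig):
    (a1, a2), (b1, b2) = FIGURES[fig]
    for bits in product([False, True], repeat=len(REGIONS)):
        occupied = {r for r, b in zip(REGIONS, bits) if b}
        if (truth(maj, a1, a2, occupied) and truth(mnr, b1, b2, occupied)
                and not truth(con, S, P, occupied)):
            return False  # counterexample model found
    return True

valid = [f"{maj}{mnr}{con}-{fig}"
         for maj, mnr, con in product('AEIO', repeat=3)
         for fig in FIGURES
         if syllogism_valid(maj, mnr, con, fig)]
print(len(valid), valid)  # 15 unconditionally valid forms
```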
Unconditionally valid
Conditionally valid
Valid propositional forms
The following is a list of some common valid argument forms in propositional logic. It is nowhere near exhaustive, and gives only a few examples of the better known valid argument forms.
Modus ponens
One valid argument form is known as modus ponens, not to be confused with modus tollens, which is another valid argument form with a like-sounding name and structure. Modus ponens (sometimes abbreviated as MP) says that if one thing is true, then another will be. It then states that the first is true. The conclusion is that the second thing is true. It is shown below in logical form.
If A, then B
A
Therefore B
Before being put into logical form the above statement could have been something like below.
If Kelly does not finish his homework, he will not go to class
Kelly did not finish his homework
Therefore, Kelly will not go to class
The first two statements are the premises while the third is the conclusion derived from them.
Modus tollens
Another form of argument is known as modus tollens (commonly abbreviated MT). In this form, you start with the same first premise as with modus ponens. However, the second part of the premise is denied, leading to the conclusion that the first part of the premise should be denied as well. It is shown below in logical form.
If A, then B
Not B
Therefore not A.
When modus tollens is used with actual content, it looks like below.
If the Saints win the Super Bowl, there will be a party in New Orleans that night
There was no party in New Orleans that night
Therefore, the Saints did not win the Super Bowl
Hypothetical syllogism
Much like modus ponens and modus tollens, hypothetical syllogism (sometimes abbreviated as HS) contains two premises and a conclusion. It is, however, slightly more complicated than the first two. In short, it states that if one thing happens, another will as well. If that second thing happens, a third will follow it. Therefore, if the first thing happens, it is inevitable that the third will too. It is shown below in logical form.
If A, then B
If B, then C
Therefore if A, then C
When put into words it looks like below.
If it rains today, I will wear my rain jacket
If I wear my rain jacket, I will keep dry
Therefore if it rains today, I will keep dry
Disjunctive syllogism
Disjunctive syllogism (sometimes abbreviated DS) has one of the same characteristics as modus tollens in that it contains a premise, then in a second premise it denies a statement, leading to the conclusion. In Disjunctive Syllogism, the first premise establishes two options. The second takes one away, so the conclusion states that the remaining one must be true. It is shown below in logical form.
Either A or B
Not A
Therefore B
When A and B are replaced with real life examples it looks like below.
Either you will see Joe in class today or he will oversleep
You did not see Joe in class today
Therefore Joe overslept
Disjunctive syllogism takes two options and narrows it down to one.
Constructive dilemma
Another valid form of argument is known as constructive dilemma or sometimes just 'dilemma'. It does not leave the user with one statement alone at the end of the argument; instead, it gives an option of two different statements. The first premise gives an option of two different statements. Then it states that if the first one happens, there will be a particular outcome and if the second happens, there will be a separate outcome. The conclusion is that either the first outcome or the second outcome will happen. The criticism of this form is that it does not give a definitive conclusion, just a statement of possibilities. When it is written in argument form it looks like below.
Either A or B
If A then C
If B then D
Therefore either C or D
When content is inserted in place of the letters, it looks like below.
Bill will either take the stairs or the elevator to his room
If he takes the stairs, he will be tired when he gets to his room
If he takes the elevator, he will miss the start of the football game on TV
Therefore Bill will either be tired when he gets to his room or he will miss the start of the football game
There is a slightly different version of the dilemma that uses negation rather than affirmation, known as destructive dilemma. When put in argumentative form it looks like below.
If A then C
If B then D
Not C or not D
Therefore not A or not B
See also
Semantic argument
References
Rules of inference
Valid argument forms
Arguments | List of valid argument forms | [
"Mathematics"
] | 1,183 | [
"Rules of inference",
"Proof theory"
] |
33,624,466 | https://en.wikipedia.org/wiki/Factory%20Instrumentation%20Protocol | The Factory Instrumentation Protocol or FIP is a standardized field bus protocol. Its most current definition can be found in the European Standard EN50170.
History
The FIP standard is based on a French initiative in 1982 to create a requirements analysis for a future field bus standard. The study led to the European Eureka initiative for a field bus standard in June 1986 that included 13 partners. The development group (réseaux locaux industriels) created the first proposal to be standardized in France. The name of the FIP field bus was originally an abbreviation of the French "Flux d'Information vers le Processus"; FIP was later given the English name "Factory Instrumentation Protocol" (some references also use the hybrid "Flux Information Protocol").
Based on the requirements study, other manufacturers created similar protocol definitions - starting in 1990 a number of partners from Japan and America merged with FIP into the WorldFIP standardization group (which later merged into the Fieldbus Foundation group). Along with the competing German Profibus, the field buses were submitted for European standardization by CENELEC in 1996. Along with other field bus standards these CENELEC standards were included in the international IEC 61158 and IEC 61784 standards by 1999, where FIP is listed as Communication Profile Family 5. Eventually FIP lost ground to Profibus, which came to dominate the European market in the following decade - the WorldFIP homepage has seen no press release since 2002 (with the US-based Fieldbus Foundation having taken the lead in ongoing development, although it promotes the H1 fieldbus for process automation).
The closest cousin of the FIP family can be found today in the Wire Train Bus for train coaches. However, a specific subset of WorldFIP - known as the FIPIO protocol - can be found widely in machine components.
Technology
There are three transmission speeds specified as 31.25 kbit/s, 1 Mbit/s and 2.5 Mbit/s for cable and optical fibre. There may be 255 stations per segment with an overall address range of 65536 communication ports. The messaging protocol uses synchronized access to the channel with messages of 128 bytes length.
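Ignoring framing and protocol overhead (an assumption for illustration; the actual standard adds per-frame overhead), the raw serialization time of a maximum-length 128-byte message at each specified rate is straightforward arithmetic, as in this Python sketch:

```python
RATES_BPS = {"31.25 kbit/s": 31_250,
             "1 Mbit/s": 1_000_000,
             "2.5 Mbit/s": 2_500_000}
MESSAGE_BITS = 128 * 8  # maximum message length of 128 bytes

for name, rate in RATES_BPS.items():
    print(f"{name}: {MESSAGE_BITS / rate * 1e3:.3f} ms per message")
# 31.25 kbit/s -> 32.768 ms; 1 Mbit/s -> 1.024 ms; 2.5 Mbit/s -> 0.410 ms
```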
External links
https://web.archive.org/web/20070428141019/http://www.iufmrese.cict.fr/catalogue/2001/bus_terrain/html/bus2.shtml - History (French)
Industrial computing
Serial buses
Industrial automation | Factory Instrumentation Protocol | [
"Technology",
"Engineering"
] | 514 | [
"Industrial automation",
"Industrial computing",
"Automation",
"Industrial engineering"
] |
33,625,523 | https://en.wikipedia.org/wiki/Pavonia%20%C3%97%20gledhillii | Pavonia × gledhillii is an evergreen flowering plant in the mallow family, Malvaceae.
Etymology
The generic name honours Spanish botanist José Antonio Pavón Jiménez (1754–1844). The epithet gledhillii comes from Dr. David Gledhill, curator of the University of Bristol Botanic Garden in 1989.
Description
Pavonia × gledhillii is a 19th-century hybrid of Pavonia makoyana E. Morren and Pavonia multiflora A. Juss., and is often wrongly identified as Pavonia multiflora.
This subshrub is intermediate between the two parent species in almost all respects, but it has nine to ten equal broad bracts and sub-entire leaf margins. It can reach a height of about . The unusual flowers are purple-grey, enclosed within a bright red calyx. The flowering period is late summer.
Gallery
References
M. Cheek – A New Name for a South American Pavonia (Malvaceae) – Kew Bulletin – Vol. 44, No. 1, 1989
External links
Hibisceae
Hybrid plants | Pavonia × gledhillii | [
"Biology"
] | 228 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
26,400,713 | https://en.wikipedia.org/wiki/Y-SNP | A Y-SNP is a single-nucleotide polymorphism on the Y chromosome. Y-SNPs are often used in paternal genealogical DNA testing.
SNP markers
A single nucleotide polymorphism (SNP) is a change to a single nucleotide in a DNA sequence. The relative mutation rate for an SNP is extremely low. This makes them ideal for marking the history of the human genetic tree. SNPs are named with a letter code and a number. The letter indicates the lab or research team that discovered the SNP. The number indicates the order in which it was discovered. For example, M173 is the 173rd SNP documented by the Human Population Genetics Laboratory at Stanford University, which uses the letter M.
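The naming convention can be illustrated with a short parser. This is a hypothetical sketch (the function and its one-entry lab-code table are illustrative assumptions, not an authoritative registry):

```python
import re

LAB_CODES = {"M": "Human Population Genetics Laboratory, Stanford"}

def parse_snp_name(name):
    """Split a SNP marker name into its lab letter code and the
    discovery-order number, e.g. 'M173' -> ('M', 173)."""
    match = re.fullmatch(r"([A-Za-z]+)(\d+)", name)
    if match is None:
        raise ValueError(f"not a letter-code/number SNP name: {name!r}")
    code, number = match.group(1), int(match.group(2))
    return code, number, LAB_CODES.get(code, "unknown lab")

print(parse_snp_name("M173"))
# ('M', 173, 'Human Population Genetics Laboratory, Stanford')
```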
See also
Mt-SNP
Short tandem repeat
Haplogroup
Haplotype
Genealogical DNA test
References
Single-nucleotide polymorphisms | Y-SNP | [
"Chemistry",
"Biology"
] | 184 | [
"Single-nucleotide polymorphisms",
"Biodiversity",
"Molecular biology"
] |
26,401,071 | https://en.wikipedia.org/wiki/International%20Journal%20of%20Applied%20Philosophy | The International Journal of Applied Philosophy is a biannual peer-reviewed academic journal that publishes philosophical examinations of practical problems. It was established in 1982, and contains original articles, reviews, and edited discussions of topics of general interest in ethics and applied philosophy. The journal is published by the Philosophy Documentation Center, and some articles are published in co-operation with the Association for Practical and Professional Ethics.
Subject coverage
The journal covers issues in business, education, the environment, government, health care, law, psychology, and science. Special issue topical coverage has included abortion, animal rights, gambling, lying, terrorism, torture, and the foreign policy of the United States.
Abstracting and indexing
The journal is abstracted and indexed in:
See also
List of ethics journals
List of philosophy journals
References
External links
Biannual journals
English-language journals
Philosophy journals
Academic journals established in 1982
Philosophy Documentation Center academic journals
Applied philosophy
Environmental humanities journals
Environmental philosophy
1982 establishments in the United States | International Journal of Applied Philosophy | [
"Environmental_science"
] | 195 | [
"Environmental philosophy",
"Environmental social science"
] |
26,404,383 | https://en.wikipedia.org/wiki/Nuclear%20orientation | Nuclear orientation, in nuclear physics, is the directional ordering of an assembly of nuclear spins with respect to some axis in space. It is one of the nuclear spectroscopy methods.
A nuclear level with spin $I$ in a magnetic field will split into $2I+1$ magnetic sub-levels with an energy spacing $\Delta E$.
The populations of these sub-levels at a steady temperature are determined by the Boltzmann distribution, $p_m \propto e^{-E_m/k_B T}$. At ordinary temperatures $\Delta E \ll k_B T$, so the exponentials are all close to 1 and the populations are essentially equal. To obtain unequal populations the exponential must differ appreciably from 1, and to achieve this, cooling to a temperature of around 10 millikelvin is needed.
Typically, this is achieved by implanting the nuclei of interest into ferromagnetic hosts.
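The need for millikelvin temperatures can be made concrete by evaluating the Boltzmann factor between adjacent sub-levels. In the Python sketch below, the spacing of one nuclear magneton in a 30 T hyperfine field is an illustrative assumption; actual spacings depend on the nucleus and the ferromagnetic host:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
MU_N = 5.0507837e-27  # nuclear magneton, J/T
B_HF = 30.0           # assumed hyperfine field, tesla (illustrative)

delta_e = MU_N * B_HF  # assumed sub-level energy spacing

for t in (300.0, 1.0, 0.010):
    ratio = math.exp(-delta_e / (K_B * t))
    print(f"T = {t:g} K: adjacent sub-level population ratio = {ratio:.6f}")
# ~1.000000 at 300 K (levels essentially equally populated),
# but ~0.33 at 10 mK, giving a strongly oriented ensemble.
```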
In the mid-1940s, Yevgeny Zavoisky developed electron paramagnetic resonance, eventually leading to the concept of nuclear orientation.
In the early 1950s, Neville Robinson, Jim Daniels, and Michael Grace produced an example of nuclear orientation for the first time at the Clarendon Laboratory, University of Oxford.
There is now a Nuclear Orientation Group at Oxford.
Bibliography
K. S. Krane, Nuclear orientation and nuclear structure. Hyperfine Interactions, Volume 43, Numbers 1–4, pages 3–14, December, 1988.
B. Bleaney, Cross-relaxation and nuclear orientation in ytterbium vanadate. Proceedings: Mathematical, Physical and Engineering Sciences, Volume 455, Number 1988, pages 2835–2839, 8 August 1999. Published by The Royal Society.
B. Bleaney, Dynamic nuclear polarization and nuclear orientation in terbium vanadate. Applied Magnetic Resonance, Volume 21, Number 1, pages 35–38, December, 1988.
See also
Nuclear magnetic resonance
Mössbauer spectroscopy
Perturbed angular correlation
References
Nuclear physics
Nuclear magnetic resonance | Nuclear orientation | [
"Physics",
"Chemistry"
] | 362 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
25,000,837 | https://en.wikipedia.org/wiki/C10H14N2O3 | {{DISPLAYTITLE:C10H14N2O3}}
The molecular formula C10H14N2O3 (molar mass: 210.23 g/mol, exact mass: 210.1004 u) may refer to:
Aprobarbital
Clocental
Crotylbarbital
Molecular formulas | C10H14N2O3 | [
"Physics",
"Chemistry"
] | 71 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
25,001,172 | https://en.wikipedia.org/wiki/List%20of%20map%20projections | This is a summary of map projections that have articles of their own on Wikipedia or that are otherwise notable. Because there is no limit to the number of possible map projections, there can be no comprehensive list.
Table of projections
*The first known popularizer/user and not necessarily the creator.
Key
Type of projection surface
Cylindrical In normal aspect, these map regularly-spaced meridians to equally spaced vertical lines, and parallels to horizontal lines.
Pseudocylindrical In normal aspect, these map the central meridian and parallels as straight lines. Other meridians are curves (or possibly straight from pole to equator), regularly spaced along parallels.
Conic In normal aspect, conic (or conical) projections map meridians as straight lines, and parallels as arcs of circles.
Pseudoconical In normal aspect, pseudoconical projections represent the central meridian as a straight line, other meridians as complex curves, and parallels as circular arcs.
Azimuthal In standard presentation, azimuthal projections map meridians as straight lines and parallels as complete, concentric circles. They are radially symmetrical. In any presentation (or aspect), they preserve directions from the center point. This means great circles through the central point are represented by straight lines on the map.
Pseudoazimuthal In normal aspect, pseudoazimuthal projections map the equator and central meridian to perpendicular, intersecting straight lines. They map parallels to complex curves bowing away from the equator, and meridians to complex curves bowing in toward the central meridian. Listed here after pseudocylindrical as generally similar to them in shape and purpose.
Other Typically calculated from formula, and not based on a particular projection
Polyhedral maps Polyhedral maps can be folded up into a polyhedral approximation to the sphere, using particular projection to map each face with low distortion.
Properties
Conformal Preserves angles locally, implying that local shapes are not distorted and that local scale is constant in all directions from any chosen point (the Mercator sketch after this list is an example).
Equal-area Area measure is conserved everywhere.
Compromise Neither conformal nor equal-area, but a balance intended to reduce overall distortion.
Equidistant All distances from one (or two) points are correct. Other equidistant properties are mentioned in the notes.
Gnomonic All great circles are straight lines.
Retroazimuthal Direction to a fixed location B (by the shortest route) corresponds to the direction on the map from A to B.
Perspective Can be constructed by light shining through a globe onto a developable surface.
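As a concrete instance of a conformal cylindrical projection in normal aspect, the following Python sketch implements the standard Mercator formulas on a unit sphere (an illustration; real cartographic code would also handle ellipsoids and clamp latitudes near the poles):

```python
import math

def mercator(lon_deg, lat_deg, lon0_deg=0.0):
    """Normal-aspect Mercator projection on a unit sphere.
    Conformal and cylindrical; the poles map to infinity."""
    lon = math.radians(lon_deg - lon0_deg)
    lat = math.radians(lat_deg)
    x = lon
    y = math.log(math.tan(math.pi / 4 + lat / 2))
    return x, y

for lat in (0, 30, 60, 80):
    x, y = mercator(0, lat)
    print(f"lat {lat:2d} deg -> y = {y:.4f}")  # spacing grows poleward
```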
See also
360 video projection
List of national coordinate reference systems
Snake Projection
Notes
Further reading | List of map projections | [
"Mathematics"
] | 524 | [
"Map projections",
"Coordinate systems"
] |
25,001,958 | https://en.wikipedia.org/wiki/4-%28%CE%B3-Glutamylamino%29butanoic%20acid | 4-(γ-Glutamylamino)butanoic acid is a molecule that consists of L-glutamate conjugated to γ-aminobutyric acid (GABA). It is the substrate of the enzyme γ-glutamyl-γ-aminobutyrate hydrolase, which is involved in the biosynthesis of polyamines.
References
Alpha-Amino acids
Amino acid derivatives
Biosynthesis
Dicarboxylic acids
Gamma-Amino acids | 4-(γ-Glutamylamino)butanoic acid | [
"Chemistry",
"Biology"
] | 104 | [
"Biotechnology stubs",
"Biochemistry stubs",
"Biosynthesis",
"Chemical synthesis",
"Biochemistry",
"Metabolism"
] |
25,002,848 | https://en.wikipedia.org/wiki/F%C3%B6ppl%E2%80%93von%20K%C3%A1rm%C3%A1n%20equations | The Föppl–von Kármán equations, named after August Föppl and Theodore von Kármán, are a set of nonlinear partial differential equations describing the large deflections of thin flat plates. With applications ranging from the design of submarine hulls to the mechanical properties of cell wall, the equations are notoriously difficult to solve, and take the following form:
$$\frac{E h^3}{12\left(1-\nu^2\right)}\,\nabla^4 w - h\,\frac{\partial}{\partial x_\beta}\left(\sigma_{\alpha\beta}\,\frac{\partial w}{\partial x_\alpha}\right) = P \qquad (1)$$
$$\frac{\partial \sigma_{\alpha\beta}}{\partial x_\beta} = 0 \qquad (2)$$
where $E$ is the Young's modulus of the plate material (assumed homogeneous and isotropic), $\nu$ is the Poisson's ratio, $h$ is the thickness of the plate, $w$ is the out-of-plane deflection of the plate, $P$ is the external normal force per unit area of the plate, $\sigma_{\alpha\beta}$ is the Cauchy stress tensor, and $\alpha, \beta$ are indices that take values of 1 and 2 (the two orthogonal in-plane directions). The 2-dimensional biharmonic operator is defined as
$$\nabla^4 w := \frac{\partial^4 w}{\partial x_1^4} + 2\,\frac{\partial^4 w}{\partial x_1^2\,\partial x_2^2} + \frac{\partial^4 w}{\partial x_2^4}.$$
Equation (1) above can be derived from kinematic assumptions and the constitutive relations for the plate. Equations (2) are the two equations for the conservation of linear momentum in two dimensions where it is assumed that the out-of-plane stresses ($\sigma_{33}, \sigma_{13}, \sigma_{23}$) are zero.
Validity of the Föppl–von Kármán equations
While the Föppl–von Kármán equations are of interest from a purely mathematical point of view, the physical validity of these equations is questionable. Ciarlet states: The two-dimensional von Karman equations for plates, originally proposed by von Karman [1910], play a mythical role in applied mathematics. While they have been abundantly, and satisfactorily, studied from the mathematical standpoint, as regards notably various questions of existence, regularity, and bifurcation, of their solutions, their physical soundness has been often seriously questioned. Reasons include the facts that
the theory depends on an approximate geometry which is not clearly defined
a given variation of stress over a cross-section is assumed arbitrarily
a linear constitutive relation is used that does not correspond to a known relation between well defined measures of stress and strain
some components of strain are arbitrarily ignored
there is a confusion between reference and deformed configurations which makes the theory inapplicable to the large deformations for which it was apparently devised.
Conditions under which these equations are actually applicable and will give reasonable results when solved are discussed in Ciarlet.
Equations in terms of Airy stress function
The three Föppl–von Kármán equations can be reduced to two by introducing the Airy stress function $\varphi$, where
$$\sigma_{11} = \frac{\partial^2 \varphi}{\partial x_2^2}, \qquad \sigma_{22} = \frac{\partial^2 \varphi}{\partial x_1^2}, \qquad \sigma_{12} = -\frac{\partial^2 \varphi}{\partial x_1 \partial x_2}.$$
Equation (1) becomes
$$\frac{E h^3}{12\left(1-\nu^2\right)}\,\nabla^4 w - h\left(\frac{\partial^2 \varphi}{\partial x_2^2}\frac{\partial^2 w}{\partial x_1^2} + \frac{\partial^2 \varphi}{\partial x_1^2}\frac{\partial^2 w}{\partial x_2^2} - 2\,\frac{\partial^2 \varphi}{\partial x_1 \partial x_2}\frac{\partial^2 w}{\partial x_1 \partial x_2}\right) = P$$
while the Airy function satisfies, by construction, the force balance equation (2). An equation for $\varphi$ is obtained by
enforcing the representation of the strain as a function of the stress. One gets
$$\nabla^4 \varphi + E\left(\frac{\partial^2 w}{\partial x_1^2}\frac{\partial^2 w}{\partial x_2^2} - \left(\frac{\partial^2 w}{\partial x_1 \partial x_2}\right)^2\right) = 0.$$
Pure bending
For the pure bending of thin plates the equation of equilibrium is $D\,\nabla^4 w = P$, where
$$D = \frac{E h^3}{12\left(1-\nu^2\right)}$$
is called the flexural or cylindrical rigidity of the plate.
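In this small-deflection limit the membrane coupling drops out and $D\,\nabla^4 w = P$ admits closed-form series solutions for simple geometries. The Python sketch below evaluates the classical Navier solution for a simply supported rectangular plate under uniform load; the material and load values are illustrative assumptions only:

```python
import math

def flexural_rigidity(E, h, nu):
    """D = E h^3 / (12 (1 - nu^2)), the cylindrical rigidity above."""
    return E * h**3 / (12 * (1 - nu**2))

def navier_deflection(x, y, a, b, q, D, terms=50):
    """Deflection of a simply supported rectangular plate (0<=x<=a,
    0<=y<=b) under uniform load q, from the Navier double sine series
    solving D * biharmonic(w) = q."""
    w = 0.0
    for m in range(1, 2 * terms, 2):        # odd terms only
        for n in range(1, 2 * terms, 2):
            denom = m * n * ((m / a)**2 + (n / b)**2)**2
            w += (math.sin(m * math.pi * x / a) *
                  math.sin(n * math.pi * y / b) / denom)
    return 16 * q / (math.pi**6 * D) * w

# Illustrative numbers: 1 m square steel plate, 10 mm thick, 1 kPa load.
D = flexural_rigidity(E=210e9, h=0.010, nu=0.3)
w_max = navier_deflection(0.5, 0.5, 1.0, 1.0, 1000.0, D)
print(f"D = {D:.1f} N*m, centre deflection = {w_max * 1e3:.3f} mm")
# Matches the classical coefficient w_max ~ 0.00406 * q * a^4 / D.
```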
Kinematic assumptions (Kirchhoff hypothesis)
In the derivation of the Föppl–von Kármán equations the main kinematic assumption (also known as the Kirchhoff hypothesis) is that surface normals to the plane of the plate remain perpendicular to the plate after deformation. It is also assumed that the in-plane (membrane) displacements are small and the change in thickness of the plate is negligible. These assumptions imply that the displacement field in the plate can be expressed as
$$u_\alpha(x_1, x_2, x_3) = v_\alpha(x_1, x_2) - x_3\,\frac{\partial w}{\partial x_\alpha}, \qquad u_3(x_1, x_2, x_3) = w(x_1, x_2),$$
in which $v_\alpha$ is the in-plane (membrane) displacement. This form of the displacement field implicitly assumes that the amount of rotation of the plate is small.
Strain-displacement relations (von Kármán strains)
The components of the three-dimensional Lagrangian Green strain tensor are defined as
$$E_{ij} := \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} + \frac{\partial u_k}{\partial x_i}\frac{\partial u_k}{\partial x_j}\right).$$
Substitution of the expressions for the displacement field into the above gives the strains in terms of $v_\alpha$ and $w$. For small strains but moderate rotations, the higher order terms that cannot be neglected are the quadratic terms in the gradients of the deflection, $\frac{\partial w}{\partial x_\alpha}\frac{\partial w}{\partial x_\beta}$. Neglecting all other higher order terms, and enforcing the requirement that the plate does not change its thickness, the strain tensor components reduce to the von Kármán strains
$$E_{\alpha\beta} = \frac{1}{2}\left(\frac{\partial v_\alpha}{\partial x_\beta} + \frac{\partial v_\beta}{\partial x_\alpha}\right) + \frac{1}{2}\,\frac{\partial w}{\partial x_\alpha}\frac{\partial w}{\partial x_\beta} - x_3\,\frac{\partial^2 w}{\partial x_\alpha \partial x_\beta}, \qquad E_{\alpha 3} = E_{33} = 0.$$
The first terms are the usual small-strains, for the mid-surface. The second terms, involving squares of displacement gradients, are non-linear, and need to be considered when the plate bending is fairly large (when the rotations are about 10 – 15 degrees). These first two terms together are called the membrane strains. The last terms, involving second derivatives, are the flexural (bending) strains. They involve the curvatures. These zero terms are due to the assumptions of the classical plate theory, which assume elements normal to the mid-plane remain inextensible and line elements perpendicular to the mid-plane remain normal to the mid-plane after deformation.
Stress–strain relations
If we assume that the Cauchy stress tensor components are linearly related to the von Kármán strains by Hooke's law, the plate is isotropic and homogeneous, and that the plate is under a plane stress condition, we have $\sigma_{33} = \sigma_{13} = \sigma_{23} = 0$ and
$$\begin{bmatrix}\sigma_{11}\\ \sigma_{22}\\ \sigma_{12}\end{bmatrix} = \frac{E}{1-\nu^2}\begin{bmatrix}1 & \nu & 0\\ \nu & 1 & 0\\ 0 & 0 & 1-\nu\end{bmatrix}\begin{bmatrix}\varepsilon_{11}\\ \varepsilon_{22}\\ \varepsilon_{12}\end{bmatrix}.$$
Expanding the terms, the three non-zero stresses are
$$\sigma_{11} = \frac{E}{1-\nu^2}\left(\varepsilon_{11} + \nu\,\varepsilon_{22}\right), \qquad \sigma_{22} = \frac{E}{1-\nu^2}\left(\varepsilon_{22} + \nu\,\varepsilon_{11}\right), \qquad \sigma_{12} = \frac{E}{1+\nu}\,\varepsilon_{12}.$$
Stress resultants
The stress resultants in the plate are defined as
$$N_{\alpha\beta} := \int_{-h/2}^{h/2} \sigma_{\alpha\beta}\,\mathrm{d}x_3, \qquad M_{\alpha\beta} := \int_{-h/2}^{h/2} x_3\,\sigma_{\alpha\beta}\,\mathrm{d}x_3.$$
Therefore,
$$N_{\alpha\beta} = \frac{E h}{1-\nu^2}\left[(1-\nu)\,\varepsilon^0_{\alpha\beta} + \nu\,\varepsilon^0_{\gamma\gamma}\,\delta_{\alpha\beta}\right], \qquad M_{\alpha\beta} = -D\left[(1-\nu)\,\frac{\partial^2 w}{\partial x_\alpha \partial x_\beta} + \nu\,\delta_{\alpha\beta}\,\nabla^2 w\right],$$
where $\varepsilon^0_{\alpha\beta}$ denotes the membrane (mid-surface) strain. The elimination of the in-plane displacements from the membrane strains leads to a compatibility condition relating the membrane stress resultants to the deflection $w$,
and
solutions are easier to find when the governing equations are expressed in terms of stress resultants rather than the in-plane stresses.
Equations of Equilibrium
The weak form of the Kirchhoff plate is obtained by multiplying the equilibrium equations by virtual displacements $(\delta v_\alpha, \delta w)$ and integrating:
$$\int_\Omega N_{\alpha\beta}\,\delta\varepsilon^0_{\alpha\beta}\,\mathrm{d}\Omega + \int_\Omega M_{\alpha\beta}\,\delta\kappa_{\alpha\beta}\,\mathrm{d}\Omega = \int_\Omega P\,\delta w\,\mathrm{d}\Omega$$
here Ω denotes the mid-plane and $\kappa_{\alpha\beta} := \partial^2 w / \partial x_\alpha \partial x_\beta$ is the curvature. Integration by parts of the weak form leads to
the resulting governing equations
$$\frac{\partial N_{\alpha\beta}}{\partial x_\beta} = 0, \qquad \frac{\partial^2 M_{\alpha\beta}}{\partial x_\alpha \partial x_\beta} + \frac{\partial}{\partial x_\beta}\left(N_{\alpha\beta}\,\frac{\partial w}{\partial x_\alpha}\right) + P = 0.$$
Föppl–von Kármán equations in terms of stress resultants
The Föppl–von Kármán equations are typically derived with an energy approach by considering variations of internal energy and the virtual work done by external forces. The resulting static governing equations (Equations of Equilibrium) are
$$\frac{\partial N_{\alpha\beta}}{\partial x_\beta} = 0, \qquad \frac{\partial^2 M_{\alpha\beta}}{\partial x_\alpha \partial x_\beta} + \frac{\partial}{\partial x_\beta}\left(N_{\alpha\beta}\,\frac{\partial w}{\partial x_\alpha}\right) + P = 0.$$
When the deflections are small compared to the overall dimensions of the plate, and the mid-surface strains are neglected,
$$N_{\alpha\beta} \approx 0.$$
The equations of equilibrium are reduced (pure bending of thin plates) to
$$\frac{\partial^2 M_{\alpha\beta}}{\partial x_\alpha \partial x_\beta} + P = 0, \qquad \text{i.e.} \qquad D\,\nabla^4 w = P.$$
References
See also
Plate theory
Partial differential equations
Continuum mechanics | Föppl–von Kármán equations | [
"Physics"
] | 1,204 | [
"Classical mechanics",
"Continuum mechanics"
] |
25,004,404 | https://en.wikipedia.org/wiki/Paul%20Tseng | Paul Tseng () was a Taiwanese-born American-Canadian applied mathematician and a professor at the Department of Mathematics at the University of Washington, in Seattle, Washington. Tseng was recognized by his peers to be one of the leading optimization researchers of his generation. On August 13, 2009, Paul Tseng went missing while kayaking in the Jinsha River in the Yunnan province of China and is presumed dead.
Biography
Tseng was born September 21, 1959, in Hsinchu, Taiwan. In December 1970, Tseng's family moved to Vancouver, British Columbia. Tseng received his B.Sc. from Queen's University in 1981 and his Ph.D. from Massachusetts Institute of Technology in 1986. In 1990 Tseng moved to the University of Washington's Department of Mathematics. Tseng has conducted research primarily in continuous optimization and secondarily in discrete optimization and distributed computation.
Research
Tseng made many contributions to mathematical optimization, publishing many articles and helping to develop quality software that has been widely used.
He published over 120 papers in optimization and had close collaborations with several colleagues, including Dimitri Bertsekas and Zhi-Quan Tom Luo.
Tseng's research subjects include:
Efficient algorithms for structured convex programs and network flow problems,
Complexity analysis of interior point methods for linear programming,
Parallel and distributed computing,
Error bounds and convergence analysis of iterative algorithms for optimization problems and variational inequalities,
Interior point methods and semidefinite relaxations for hard quadratic and matrix optimization problems, and
Applications of large scale optimization techniques in signal processing and machine learning.
In his research, Tseng gave a new proof for the sharpest complexity result for path-following interior-point methods for linear programming. Furthermore, together with Tom Luo, he resolved a long-standing open question on the convergence of matrix splitting algorithms for linear complementarity problems and affine variational inequalities. Tseng was the first to establish the convergence of the affine scaling algorithm for linear programming in the presence of degeneracy.
Tseng has coauthored (with his Ph.D. advisor, Dimitri Bertsekas) a publicly available network optimization program, called RELAX, which has been widely used in industry and academia for research purposes. This software has been used by statisticians like Paul R. Rosenbaum and Donald Rubin in their work on propensity score matching. Tseng's software for matching has similarly been used in nonparametric statistics to implement exact tests. Tseng has also developed a program called ERELAXG, for network optimization problems with gains. In 2010 conferences in his honor were held at the University of Washington and at Fudan University in Shanghai. Tseng's personal web page can be accessed in the exact state it was at the time of his disappearance, and contains many of his writings.
Travels and disappearance
Paul Tseng was an ardent bicyclist, kayaker and backpacker. He took many adventurous trips, including kayak tours along the Mekong, the Danube, the Nile and the Amazon. On August 13, 2009, Paul Tseng went missing while kayaking in the Jinsha River (the upper course of the Yangtze) near Lijiang, in the Yunnan province of China, and is now presumed dead.
See also
Computer networking
Dynamic programming
List of convexity topics
List of people who disappeared
Neural network
Reinforcement learning
Notes
External links
Math Programming Society
Publications from DBLP.
Publications from Google Scholar.
1959 births
2000s missing person cases
20th-century American mathematicians
21st-century American mathematicians
American computer scientists
American electrical engineers
American people of Chinese descent
American people of Taiwanese descent
Canadian computer scientists
Canadian electrical engineers
Canadian emigrants to the United States
Canadian mathematicians
Canadian people of Chinese descent
Control theorists
Hakka scientists
Massachusetts Institute of Technology alumni
Massachusetts Institute of Technology faculty
Missing person cases in China
Missing people
Naturalized citizens of Canada
People from Hsinchu
Queen's University at Kingston alumni
Scientists from Vancouver
Canadian systems scientists
Taiwanese emigrants to Canada
Academic staff of the University of British Columbia
University of Washington faculty
American systems scientists | Paul Tseng | [
"Engineering"
] | 811 | [
"Control engineering",
"Control theorists"
] |
36,247,185 | https://en.wikipedia.org/wiki/Orbital%20angular%20momentum%20multiplexing | Orbital angular momentum multiplexing is a physical layer method for multiplexing signals carried on electromagnetic waves using the orbital angular momentum (OAM) of the electromagnetic waves to distinguish between the different orthogonal signals.
OAM is one of two forms of angular momentum of light; it is distinct from, and should not be confused with, light spin angular momentum. The latter offers only two orthogonal quantum states, corresponding to the two states of circular polarization, and can be demonstrated to be equivalent to a combination of polarization multiplexing and phase shifting. OAM on the other hand relies on an extended beam of light, and the higher quantum degrees of freedom which come with the extension. OAM multiplexing can thus access a potentially unbounded set of states, and as such offer a much larger number of channels, subject only to the constraint of real-world optics. The constraint has been clarified in terms of independent scattering channels or the degrees of freedom of scattered fields through angular-spectral analysis, in conjunction with a rigorous Green function method.
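The orthogonality that makes these states usable as separate channels can be checked numerically. The NumPy sketch below (an illustration, with an idealized uniform-amplitude beam as the simplifying assumption) samples helical phase fronts $e^{i\ell\phi}$ over a circular aperture and evaluates their normalized overlap, which vanishes whenever the topological charges differ:

```python
import numpy as np

# Sample two OAM beams exp(i * l * phi) on a grid and check that their
# overlap integral vanishes for different topological charges l.
n = 512
x = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(x, x)
phi = np.arctan2(yy, xx)
r = np.hypot(xx, yy)
aperture = (r <= 1.0)  # finite circular aperture

def oam_mode(l):
    """Unit-amplitude helical phase front with topological charge l."""
    return np.where(aperture, np.exp(1j * l * phi), 0.0)

def overlap(l1, l2):
    a, b = oam_mode(l1), oam_mode(l2)
    return abs(np.vdot(a, b)) / np.sqrt(np.vdot(a, a).real *
                                        np.vdot(b, b).real)

print(overlap(1, 1))   # ~1: same channel
print(overlap(1, 2))   # ~0: orthogonal channels
print(overlap(-3, 4))  # ~0: orthogonality holds for any l1 != l2
```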
The degrees of freedom limit is universal for arbitrary spatial-mode multiplexing, which is launched by a planar electromagnetic device, such as antenna, metasurface, etc., with a predefined physical aperture.
Although OAM multiplexing promises very significant improvements in bandwidth when used in concert with other existing modulation and multiplexing schemes, it is still an experimental technique, and has so far only been demonstrated in the laboratory. Following the early claim that OAM exploits a new quantum mode of information propagation, the technique has become controversial, with numerous studies suggesting it can be modelled as a purely classical phenomenon by regarding it as a particular form of tightly modulated MIMO multiplexing strategy, obeying classical information theoretic bounds.
New evidence from radio telescope observations suggests that radio-frequency orbital angular momentum may have been observed in natural phenomena on astronomical scales, a phenomenon which is still under investigation.
History
OAM multiplexing was demonstrated using light beams in free space as early as 2004. Since then, research into OAM has proceeded in two areas: radio frequency and optical transmission.
Radio frequency
Terrestrial experiments
An experiment in 2011 demonstrated OAM multiplexing of two incoherent radio signals over a distance of 442 m. It has been claimed that OAM does not improve on what can be achieved with conventional linear-momentum based RF systems which already use MIMO, since theoretical work suggests that, at radio frequencies, conventional MIMO techniques can be shown to duplicate many of the linear-momentum properties of OAM-carrying radio beam, leaving little or no extra performance gain.
In November 2012, there were reports of disagreement about the basic theoretical concept of OAM multiplexing at radio frequencies between the research groups of Tamburini and Thide, and many different camps of communications engineers and physicists, with some declaring their belief that OAM multiplexing was just an implementation of MIMO, and others holding to their assertion that OAM multiplexing is a distinct, experimentally confirmed phenomenon.
In 2014, a group of researchers described an implementation of a communication link using eight millimetre-wave channels multiplexed by a combination of OAM and polarization-mode multiplexing, achieving an aggregate bandwidth of 32 Gbit/s over a distance of 2.5 metres. These results agree well with the predictions of severely limited distances made by Edfors et al.
Industrial interest in long-distance microwave OAM multiplexing appears to have diminished since 2015, when some of the original promoters of OAM-based communication at radio frequencies (including Siae Microelettronica) published a theoretical investigation showing that there is no real gain beyond traditional spatial multiplexing in terms of capacity and overall antenna occupation.
Radio astronomy
In 2019, a letter published in the Monthly Notices of the Royal Astronomical Society presented evidence that OAM radio signals had been received from the vicinity of the M87* black hole, over 50 million light-years distant, suggesting that orbital angular momentum information can propagate over astronomical distances.
Optical
OAM multiplexing has been trialled in the optical domain. In 2012, researchers demonstrated OAM-multiplexed optical transmission speeds of up to 2.5 Tbit/s using 8 distinct OAM channels in a single beam of light, but only over a very short free-space path of roughly one metre. Work is ongoing on applying OAM techniques to long-range practical free-space optical communication links.
OAM multiplexing cannot be implemented in existing long-haul optical fiber systems, since these are based on single-mode fibers, which inherently do not support OAM states of light. Instead, few-mode or multi-mode fibers need to be used. A further problem for OAM multiplexing is the mode coupling present in conventional fibers, which causes changes in the spin angular momentum of modes under normal conditions and changes in orbital angular momentum when fibers are bent or stressed. Because of this mode instability, direct-detection OAM multiplexing has not yet been realized in long-haul communications. In 2012, transmission of OAM states with 97% purity after 20 meters over special fibers was demonstrated by researchers at Boston University. Later experiments have shown stable propagation of these modes over distances of 50 meters, and further improvements of this distance are the subject of ongoing work. Other ongoing research on making OAM multiplexing work over future fibre-optic transmission systems includes the possibility of using techniques similar to those used to compensate for mode rotation in optical polarization multiplexing.
An alternative to direct-detection OAM multiplexing is computationally complex coherent detection with multiple-input multiple-output (MIMO) digital signal processing (DSP), which can be used to achieve long-haul communication; strong mode coupling has been suggested to be beneficial for such coherent-detection-based systems.
Early OAM multiplexing was achieved with bulk optics, employing several phase plates or spatial light modulators; on-chip OAM multiplexers subsequently became a focus of research. In 2012, a paper by Tiehui Su et al. demonstrated an integrated OAM multiplexer, and Xinlun Cai et al. demonstrated a different integrated solution in the same year. In 2019, Jan Markus Baumann et al. designed a chip for OAM multiplexing.
Practical demonstration in optical-fiber system
A paper by Bozinovic et al. published in Science in 2013 claims the successful demonstration of an OAM-multiplexed fiber-optic transmission system over a 1.1 km test path. The test system was capable of using up to 4 different OAM channels simultaneously, using a fiber with a "vortex" refractive-index profile. They also demonstrated combined OAM and WDM using the same apparatus, but using only two OAM modes.
A paper by Kasper Ingerslev et al. published in Optics Express in 2018 demonstrates MIMO-free transmission of 12 orbital angular momentum (OAM) modes over a 1.2 km air-core fiber. WDM compatibility of the system was shown by using 60 WDM channels, spaced 25 GHz apart, carrying 10 GBaud QPSK signals.
Practical demonstration in conventional optical-fiber systems
In 2014, articles by G. Milione et al. and H. Huang et al. claimed the first successful demonstration of an OAM-multiplexed fiber-optic transmission system over 5 km of conventional optical fiber, i.e., an optical fiber having a circular core and a graded index profile. In contrast to the work of Bozinovic et al., which used a custom optical fiber that had a "vortex" refractive-index profile, the work by G. Milione et al. and H. Huang et al. showed that OAM multiplexing could be used in commercially available optical fibers by using digital MIMO post-processing to correct for mode mixing within the fiber. This method is sensitive to changes in the system that alter the mixing of the modes during propagation, such as changes in the bending of the fiber, and requires substantial computational resources to scale up to larger numbers of independent modes, but shows great promise.
In 2018 Zengji Yue, Haoran Ren, Shibiao Wei, Jiao Lin & Min Gu at Royal Melbourne Institute of Technology miniaturised this technology, shrinking it from the size of a large dinner table to a small chip which could be integrated into communications networks. This chip could, they predict, increase the capacity of fibre-optic cables by at least 100-fold and likely higher as the technology is further developed.
See also
Angular momentum of light
Optical vortex
Polarization-division multiplexing
Vorticity
Wavelength-division multiplexing
References
External links
* Siae Microelettronica patent
Orbital angular momentum of waves
Multiplexing
Photonics
Optical communications
Radio communications | Orbital angular momentum multiplexing | [
"Physics",
"Engineering"
] | 1,791 | [
"Optical communications",
"Physical phenomena",
"Telecommunications engineering",
"Physical quantities",
"Radio communications",
"Angular momentum of light",
"Waves",
"Orbital angular momentum of waves",
"Angular momentum",
"Moment (physics)"
] |
36,247,465 | https://en.wikipedia.org/wiki/Polarization-division%20multiplexing | Polarization-division multiplexing (PDM) is a physical layer method for multiplexing signals carried on electromagnetic waves, allowing two channels of information to be transmitted on the same carrier frequency by using waves of two orthogonal polarization states. It is used in microwave links such as satellite television downlinks to double the bandwidth by using two orthogonally polarized feed antennas in satellite dishes. It is also used in fiber optic communication by transmitting separate left and right circularly polarized light beams through the same optical fiber.
Radio
Polarization techniques have long been used in radio transmission to reduce interference between channels, particularly at VHF frequencies and beyond.
Under some circumstances, the data rate of a radio link can be doubled by transmitting two separate channels of radio waves on the same frequency, using orthogonal polarization. For example, in point to point terrestrial microwave links, the transmitting antenna can have two feed antennas; a vertical feed antenna which transmits microwaves with their electric field vertical (vertical polarization), and a horizontal feed antenna which transmits microwaves on the same frequency with their electric field horizontal (horizontal polarization). These two separate channels can be received by vertical and horizontal feed antennas at the receiving station. For satellite communications, orthogonal circular polarization is often used instead, (i.e. right- and left-handed), as the sense of circular polarization is not changed by the relative orientation of the antenna in space.
A dual polarization system usually comprises two independent transmitters, each of which can be connected by means of waveguide or TEM lines (such as coaxial cables or stripline, or quasi-TEM such as microstrip) to a single-polarization antenna for its standard operation. Although two separate single-polarization antennas can be used for PDM (or two adjacent feeds in a reflector antenna), radiating two independent polarization states can often be achieved easily by means of a single dual-polarization antenna.
When the transmitter has a waveguide interface, typically rectangular in order to operate in the single-mode region at the operating frequency, a dual-polarized antenna with a circular (or square) waveguide port is the radiating element chosen for modern communication systems. The circular or square waveguide port is needed so that at least two degenerate modes are supported. An ad-hoc component must therefore be introduced in such situations to merge two separate single-polarized signals into one dual-polarized physical interface, namely an ortho-mode transducer (OMT).
If the transmitter instead has TEM or quasi-TEM output connections, a dual-polarization antenna often provides separate connections (e.g. a printed square patch antenna with two feed points) and embeds the function of an OMT by intrinsically mapping the two excitation signals onto orthogonal polarization states.
A dual-polarized signal thus carries two independent data streams to a receiving antenna, which can itself be a single-polarized one, for receiving only one of the two streams at a time, or a dual-polarized model, again relaying its received signal to two single-polarization output connectors (via an OMT if in waveguide).
The ideal dual-polarization system rests on the perfect orthogonality of the two polarization states: each single-polarized interface at the receiver would then contain only the signal transmitted on the desired polarization, introducing no interference and allowing the two data streams to be multiplexed and demultiplexed transparently, without any degradation due to the coexistence with the other.
Companies working on commercial PDM technology include Siae Microelettronica, Huawei and Alcatel-Lucent.
Some types of outdoor microwave radios have integrated orthomode transducers and operate in both polarities from a single radio unit, performing cross-polarization interference cancellation (XPIC) within the radio unit itself.
Alternatively, the orthomode transducer may be built into the antenna, and allow connection of separate radios, or separate ports of the same radio, to the antenna.
Cross-Polarization Interference Cancellation (XPIC)
Practical systems, however, suffer from non-ideal behaviors which mix the signals and the polarization states together:
the OMT at the transmitting side has a finite cross-polarization discrimination (XPD) and thus leaks part of the signals meant to be transmitted in one polarization to the other
the transmitting antenna has a finite XPD and thus leaks part of its input polarizations to the other radiated polarization state
propagation in presence of rain, snow, hail creates depolarization, as part of the two impinging polarizations is leaked to the other
the finite XPD of the receiving antenna acts similarly to the transmitting side and the relative alignment of the two antennas contributes to a loss of system XPD
the finite XPD of the receiving OMT likewise further mixes the signals from the dual-polarized port to the single-polarized ports
As a consequence, the signal at one of the received single-polarization terminals actually contains a dominant quantity of the desired signal (meant to be transmitted on one polarization) and a minor amount of undesired signal (meant to be transported by the other polarization), which represents interference on the former. Each received signal must therefore be cleared of the interference in order to reach the signal-to-noise-and-interference ratio (SNIR) needed by the receiving stages, which may be more than 30 dB for high-level M-QAM schemes. This operation is carried out by cross-polarization interference cancellation (XPIC), typically implemented as a baseband digital stage.
Compared to spatial multiplexing, received signals in a PDM system have a much more favourable carrier-to-interference ratio, as the amount of leakage is often much smaller than the useful signal, whereas spatial multiplexing operates with an amount of interference comparable to the useful signal. This observation, valid for a good PDM design, allows the adaptive XPIC to be designed more simply than a general MIMO cancelling scheme, since the starting point (without cancellation) is typically already sufficient for establishing a low-capacity link by means of a reduced modulation.
An XPIC typically acts on one of the received signals "C", containing the desired signal as its dominant term, and also uses the other received signal "X" (containing the interfering signal as its dominant term). The XPIC algorithm multiplies "X" by a complex coefficient and then adds it to the received "C". The complex recombination coefficient is adjusted adaptively to minimize the mean-square error measured on the recombined signal (the MMSE criterion). Once the error is reduced to the required level, the two terminals can switch to high-order modulations.
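As a concrete illustration of this recombination, the following sketch runs a decision-directed complex least-mean-squares (LMS) loop on synthetic QPSK streams; the leakage value, noise level, step size and slicer are illustrative assumptions, not parameters of any deployed radio.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
s_c = ((rng.integers(0, 2, n) * 2 - 1) + 1j * (rng.integers(0, 2, n) * 2 - 1)) / np.sqrt(2)
s_x = ((rng.integers(0, 2, n) * 2 - 1) + 1j * (rng.integers(0, 2, n) * 2 - 1)) / np.sqrt(2)

leak = 0.15 * np.exp(1j * 0.7)          # assumed cross-polar leakage (finite XPD)
noise = 0.02 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
C = s_c + leak * s_x + noise            # branch carrying the desired signal
X = s_x + leak * s_c                    # branch dominated by the interferer

def slice_qpsk(z):                      # nearest QPSK constellation point
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

w, mu = 0j, 0.01                        # adaptive coefficient and LMS step size
out = np.empty(n, complex)
for k in range(n):
    y = C[k] + w * X[k]                 # recombination: C plus weighted X
    e = slice_qpsk(y) - y               # decision-directed error
    w += mu * e * np.conj(X[k])         # steepest-descent step toward minimum MSE
    out[k] = y

print("residual error power:", np.mean(np.abs(out[-5000:] - s_c[-5000:]) ** 2))
```

Because the update is driven by its own symbol decisions, a loop of this kind can also track the slow leakage changes caused by the depolarization sources listed above.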
Differential Cross-Polarized Wireless Communications
This is a method for polarized antenna transmission utilizing a differential technique.
Photonics
Polarization-division multiplexing is typically used together with phase modulation or optical QAM, allowing transmission speeds of 100 Gbit/s or more over a single wavelength. Sets of PDM wavelength signals can then be carried over wavelength-division multiplexing infrastructure, potentially substantially expanding its capacity. Multiple polarization signals can be combined to form new states of polarization, which is known as parallel polarization state generation.
The major problem with the practical use of PDM over fiber-optic transmission systems is the drift in polarization state that occurs continuously over time due to physical changes in the fibre environment. Over a long-distance system, these drifts accumulate progressively without limit, resulting in rapid and erratic rotation of the polarized light's Jones vector over the entire Poincaré sphere. Polarization mode dispersion, polarization-dependent loss, and cross-polarization modulation are other phenomena that can cause problems in PDM systems.
For this reason, PDM is generally used in conjunction with advanced channel coding techniques, allowing the use of digital signal processing to decode the signal in a way that is resilient to polarization-related signal artifacts. Modulations used include PDM-QPSK and PDM-DQPSK.
Companies working on commercial PDM technology include Alcatel-Lucent, Ciena, Cisco Systems, Huawei and Infinera.
See also
Polarization scrambler
Wavelength-division multiplexing
Orbital angular momentum multiplexing
Orthogonal frequency-division multiplexing
References
Photonics
Radio communications
Multiplexing | Polarization-division multiplexing | [
"Engineering"
] | 1,750 | [
"Telecommunications engineering",
"Radio communications"
] |
43,368,471 | https://en.wikipedia.org/wiki/Eclipse%20NeoSCADA | Eclipse NeoSCADA (formerly Eclipse SCADA) is an Eclipse Incubator project created in July 2013, that aims at providing a full state-of-the-art, open-source SCADA system that can be used out of the box or as a platform for creating a custom solution. Eclipse SCADA emerged from the openSCADA project, which now provides additional functionality on top of Eclipse SCADA.
The initial release (0.1.0) is based on the source code of openSCADA 1.2.0 and has been focusing on the relocation of the project to the Eclipse Foundation, like changing package names and namespaces.
The Eclipse NeoSCADA project is part of the Eclipse IoT Industry Working Group initiative.
As of August 28, 2014, Eclipse SCADA is filed under the Eclipse IoT top-level project.
Supported protocols
The following protocols are directly supported by Eclipse NeoSCADA:
Command Line Applications
JDBC
Modbus TCP and RTU
Simatic S7 PLC
Other protocols can be implemented by writing driver modules using the Eclipse SCADA API. There are a few driver modules currently available outside of Eclipse SCADA:
OPC
SNMP
References
External links
Eclipse NeoSCADA project page
Project Proposal
Eclipse (software) | Eclipse NeoSCADA | [
"Engineering"
] | 253 | [
"Software engineering",
"Software engineering stubs"
] |
40,450,182 | https://en.wikipedia.org/wiki/Steriruncitruncated%20tesseractic%20honeycomb | In four-dimensional Euclidean geometry, the steriruncitruncated tesseractic honeycomb is a uniform space-filling honeycomb.
Alternate names
Celliprismatotruncated tesseractic tetracomb
Great tomocubic-diprismatotesseractic tetracomb
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Truncated 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs)
x4x3o3x4x - captatit - O102
5-polytopes
Honeycombs (geometry)
Truncated tilings | Steriruncitruncated tesseractic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 325 | [
"Honeycombs (geometry)",
"Truncated tilings",
"Tessellation",
"Crystallography",
"Symmetry"
] |
40,450,196 | https://en.wikipedia.org/wiki/Stericantellated%20tesseractic%20honeycomb | In four-dimensional Euclidean geometry, the stericantellated tesseractic honeycomb is a uniform space-filling honeycomb.
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Truncated 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs)
x4o3x3o4x - scartit - O98
5-polytopes
Honeycombs (geometry) | Stericantellated tesseractic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 291 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
40,450,230 | https://en.wikipedia.org/wiki/Omnitruncated%20tesseractic%20honeycomb | In four-dimensional Euclidean geometry, the omnitruncated tesseractic honeycomb is a uniform space-filling honeycomb. It has omnitruncated tesseract, truncated cuboctahedral prism, and 8-8 duoprism facets in an irregular 5-cell vertex figure.
Related honeycombs
See also
Truncated square tiling
Omnitruncated cubic honeycomb
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Truncated 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs)
x4x3x3x4x - otatit - O103
5-polytopes
Honeycombs (geometry)
Truncated tilings | Omnitruncated tesseractic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 342 | [
"Honeycombs (geometry)",
"Truncated tilings",
"Tessellation",
"Crystallography",
"Symmetry"
] |
40,450,477 | https://en.wikipedia.org/wiki/Truncated%20tesseractic%20honeycomb | In four-dimensional Euclidean geometry, the truncated tesseractic honeycomb is a uniform space-filling tessellation (or honeycomb) in Euclidean 4-space. It is constructed by a truncation of a tesseractic honeycomb creating truncated tesseracts, and adding new 16-cell facets at the original vertices.
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
Demitesseractic honeycomb
24-cell honeycomb
Truncated 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
Notes
References
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] See p318
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs)
o3o3o *b3x4x, x4x3o3o4o - tattit - O89
5-polytopes
Honeycombs (geometry)
Truncated tilings | Truncated tesseractic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 318 | [
"Honeycombs (geometry)",
"Truncated tilings",
"Tessellation",
"Crystallography",
"Symmetry"
] |
40,450,718 | https://en.wikipedia.org/wiki/Cantellated%20tesseractic%20honeycomb | In four-dimensional Euclidean geometry, the cantellated tesseractic honeycomb is a uniform space-filling tessellation (or honeycomb) in Euclidean 4-space. It is constructed by a cantellation of a tesseractic honeycomb creating cantellated tesseracts, and new 24-cell and octahedral prism facets at the original vertices.
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
Demitesseractic honeycomb
24-cell honeycomb
Truncated 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
Notes
References
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] See p318
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs)
o3x3o *b3o4x, x4o3x3o4o - srittit - O90
Honeycombs (geometry)
5-polytopes | Cantellated tesseractic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 323 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
40,450,947 | https://en.wikipedia.org/wiki/Cantitruncated%20tesseractic%20honeycomb | In four-dimensional Euclidean geometry, the cantitruncated tesseractic honeycomb is a uniform space-filling tessellation (or honeycomb) in Euclidean 4-space.
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
Demitesseractic honeycomb
24-cell honeycomb
Truncated 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
Notes
References
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] See p318
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs)
o3x3o *b3x4x, x4x3x3o4o - grittit - O94
Honeycombs (geometry)
5-polytopes
Truncated tilings | Cantitruncated tesseractic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 288 | [
"Honeycombs (geometry)",
"Truncated tilings",
"Tessellation",
"Crystallography",
"Symmetry"
] |
39,154,794 | https://en.wikipedia.org/wiki/Tunable%20resistive%20pulse%20sensing | Tunable resistive pulse sensing (TRPS) is a single-particle technique used to measure the size, concentration and zeta potential of particles as they pass through a size-tunable nanopore.
The technique adapts the principle of resistive pulse sensing, which monitors current flow through an aperture, combined with the use of tunable nanopore technology, allowing the passage of ionic current and particles to be regulated by adjusting the pore size. The addition of the tunable nanopore allows for the measurement of a wider range of particle sizes and improves accuracy.
Technique
Particles crossing a nanopore are detected one at a time as a transient change in the ionic current flow, which is denoted as a blockade event with its amplitude denoted as the blockade magnitude. As blockade magnitude is proportional to particle size, accurate particle sizing can be achieved after calibration with a known standard. This standard is composed of particles of a known size and concentration. For TRPS, carboxylated polystyrene particles are often used.
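As a rough illustration of this calibration step, the sketch below converts measured blockade magnitudes into particle diameters using a single standard; the cube-law scaling between diameter and blockade magnitude is a common first-order assumption in resistive pulse sensing, and every number here is invented.

```python
import numpy as np

# Calibration standard: particles of known diameter and their measured
# mean blockade magnitude (values invented for illustration).
cal_diameter_nm = 210.0
cal_blockades_na = np.array([0.52, 0.49, 0.51, 0.50])

# Unknown sample: one blockade magnitude per detected particle.
sample_blockades_na = np.array([0.12, 0.95, 0.33, 0.27, 1.40])

# First-order model: blockade magnitude proportional to particle volume (d^3).
k = cal_diameter_nm ** 3 / cal_blockades_na.mean()        # nm^3 per nA
sample_diameters_nm = (k * sample_blockades_na) ** (1.0 / 3.0)
print(np.round(sample_diameters_nm, 1))                   # particle-by-particle sizes
```

Counting events per unit time under known pressure then gives concentration, which is why the same calibration standard specifies both size and concentration.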
Nanopore-based detection allows particle-by-particle assessment of complex mixtures. By selecting an appropriately sized nanopore and adjusting its stretch, the nanopore size can be optimized for particle size and improve measurement accuracy.
Adjustments to nanopore stretch, in combination with fine control of pressure and voltage, allow TRPS to determine sample concentration and to accurately derive individual particle zeta potential in addition to particle size information.
Applications
TRPS was developed by Izon Science Limited, producer of commercially available nanopore-based particle characterization systems. Izon Science Limited currently sells one TRPS device, known as the "Exoid". Previous devices include the "qNano", the "qNano Gold" and the "qViron". These systems have been used to measure a wide range of biological and synthetic particle types, including viruses and nanoparticles. TRPS has been applied in both academic and industrial research fields, including:
Drug delivery research (e.g. lipid nanoparticles and liposomes)
Extracellular vesicles such as exosomes
Virology and vaccine production
Biomedical diagnostics
Microfluidics
References
Nanotechnology
Nanoparticles | Tunable resistive pulse sensing | [
"Materials_science",
"Engineering"
] | 448 | [
"Nanotechnology",
"Materials science"
] |
39,158,997 | https://en.wikipedia.org/wiki/Mutatochrome | Mutatochrome (5,8-epoxy-β-carotene) is a carotenoid. It is the predominant carotenoid in the cap of the bolete mushroom Boletus luridus.
References
Carotenoids
Cyclohexenes | Mutatochrome | [
"Chemistry",
"Biology"
] | 62 | [
"Biomarkers",
"Biotechnology stubs",
"Carotenoids",
"Biochemistry stubs",
"Biochemistry"
] |
39,159,537 | https://en.wikipedia.org/wiki/Variegatic%20acid | Variegatic acid (3,3',4,4'-tetrahydroxypulvinic acid) is an orange pigment found in some mushrooms. It is responsible for the bluing reaction seen in many bolete mushrooms when they are injured. When mushroom tissue containing variegatic acid is exposed to air, the chemical is enzymatically oxidized to blue quinone methide anions, specifically chinonmethid anions. It is derived from xerocomic acid, which is preceded by atromentic acid and atromentin, and its genetic basis is unknown. In its oxidized form (due to the production of a second lactone ring) is variegatorubin, similar to xerocomorubin.
It was first isolated from Suillus variegatus. It has strong antioxidant properties and a nonspecific inhibitory effect on cytochrome P450 enzymes. A total synthesis was reported in 2001 that uses a Suzuki cross-coupling reaction. It was found to be antibiotically inactive against an array of bacteria and fungi in the disk diffusion assay at 50 μg. However, at similar concentrations it was found to inhibit swarming and (probably consequently) biofilm formation of Bacillus subtilis. In vitro data support the view that this pigment acts as an Fe3+ reductant in Fenton chemistry during the initial attack on dead plant matter as part of the brown-rot saprobic lifestyle.
Derivatives
Variegatic acid methyl ester, 3-O-methylvariegatic acid methyl ester, and 3,3',4,4'-tetra-O-methylvariegatic acid methyl ester are red-orange pigments found in Boletales.
See also
Pulvinic acid
Pulvinone
Vulpinic acid
References
Fungal pigments
Polyphenols
Furanones
Catechols
Carboxylic acids | Variegatic acid | [
"Chemistry"
] | 401 | [
"Carboxylic acids",
"Functional groups"
] |
39,160,240 | https://en.wikipedia.org/wiki/%CE%92-Zeacarotene | β-Zeacarotene is a carotenoid. It is used as a coloring agent in the food and pharmaceutical industries. First reported in 1953, it was discovered to occur in small quantities when the fungus Phycomyces blakeseeanus was grown with diphenylamine, a compound that inhibits the synthesis of beta-carotene.
References
Carotenoids
Cyclohexenes | Β-Zeacarotene | [
"Biology"
] | 85 | [
"Biomarkers",
"Carotenoids"
] |
48,044,666 | https://en.wikipedia.org/wiki/Claude%20Fabre | Claude Fabre (born April 23, 1951) is a French physicist, professor emeritus at the Sorbonne University and member of the Kastler-Brossel Laboratory of Sorbonne University, École normale supérieure and Collège de France.
Career
From 1970 to 1974 Claude Fabre studied at the École Normale Supérieure (Paris) and University Paris 6. In 1974, he obtained the agrégation in physics and completed his MSc studies with Claude Cohen-Tannoudji as supervisor. In that same year, he obtained a position at the French National Center for Scientific Research (CNRS), working under the supervision of Serge Haroche at the "Laboratoire de spectroscopie hertzienne de l'ENS", later renamed as Kastler-Brossel Laboratory. He earned his Ph.D. in 1980.
In 1984-1985, he was a visiting scientist at the IBM Laboratory, San Jose California. He became CNRS Director of Research from 1986 to 1998, and part time Associate Professor at the École polytechnique. In 1998, he left CNRS to become Professor at the Université Pierre-et-Marie-Curie, now integrated in Sorbonne University.
Fabre was director of the Doctoral School of Physics in Paris area from 2001 to 2007, and editor in chief of the European Physical Journal D from 2008 to 2012. In 2009, he was a visiting researcher at the National Institute of Standards and Technology, Gaithersburg, then invited professor at the East China Normal University, Shanghai. He was president of the French Optical Society from 2009 to 2011, and senior member of the Institut universitaire de France from 2007 to 2017.
In addition to his activities as researcher and educator, Fabre is active in reinforcing links between the University and high school teaching. From 2007 to 2010 he was chair of the jury of the "agrégation de physique", which recruits high-level physics teachers; in 2010-2012 he was a member of the committee for the reform of physics curricula in high schools; and from 2013 to 2015 he was commissioned by the minister of education to coordinate the steering committee in charge of the reform of teacher training (École supérieure du professorat et de l'éducation).
Fabre has also been active in scientific outreach, in particular during the World Year of Physics (2005), the 50th anniversary of the laser (2010) and the International Year of Light (2015).
Research areas
His earlier research work concerned the interaction between microwaves and highly excited atoms (Rydberg states), both from the theoretical and experimental sides.
Since 1987, his main research subject has been the study of the non-classical properties of light, in particular the study of the quantum fluctuations of light field and of the strong quantum correlations existing between entangled light beams.
Fabre is particularly interested in the quantum aspects of complex or multimodal optical systems with many quantum degrees of freedom, like optical images and light pulses of arbitrary shapes.
He has used these correlations and tailored quantum fluctuations to push the quantum limits of high-sensitivity measurements: weak absorption, accurate space-time positioning, improvement of optical resolution, and clock synchronization.
He is also involved in the application of such "multimode quantum technologies" to quantum information processing.
References
Living people
Place of birth missing (living people)
Academic staff of École Polytechnique
1951 births
École Normale Supérieure alumni
20th-century French physicists
21st-century French physicists
Scientists from Paris
Research directors of the French National Centre for Scientific Research
Quantum physicists | Claude Fabre | [
"Physics"
] | 724 | [
"Quantum physicists",
"Quantum mechanics"
] |
48,046,365 | https://en.wikipedia.org/wiki/Glass%20working | Glass working refers collectively to a wide range of techniques and artistic styles that use glass as the primary medium. Some common forms of glass working are:
Glassblowing, the creation of hollow objects such as bottles and vases by blowing air through molten glass
Glass sculpture, works sculpted or molded from glass
Stained glass, glass colored by various means, usually in an artistic fashion
See also
Glass beadmaking
Glass polishing
Glass production
History of glass
Other media
metal working
wood working
Glass | Glass working | [
"Physics",
"Chemistry"
] | 96 | [
"Homogeneous chemical mixtures",
"Amorphous solids",
"Unsolved problems in physics",
"Glass"
] |
48,050,134 | https://en.wikipedia.org/wiki/Microbial%20rhodopsin | Microbial rhodopsins, also known as bacterial rhodopsins, are retinal-binding proteins that provide light-dependent ion transport and sensory functions in halophilic and other bacteria. They are integral membrane proteins with seven transmembrane helices, the last of which contains the attachment point (a conserved lysine) for retinal. Most microbial rhodopsins pump inwards, however "mirror rhodopsins" which function outwards. have been discovered.
This protein family includes light-driven proton pumps, ion pumps and ion channels, as well as light sensors. For example, the proteins from halobacteria include bacteriorhodopsin and archaerhodopsin, which are light-driven proton pumps; halorhodopsin, a light-driven chloride pump; and sensory rhodopsin, which mediates both photoattractant (in the red) and photophobic (in the ultra-violet) responses. Proteins from other bacteria include proteorhodopsin.
As their name indicates, microbial rhodopsins are found in Archaea and Bacteria, and also in Eukaryota (such as algae) and viruses; although they are rare in complex multicellular organisms.
Nomenclature
Rhodopsin was originally a synonym for "visual purple", a visual pigment (light-sensitive molecule) found in the retinas of frogs and other vertebrates, used for dim-light vision, and usually found in rod cells. In the narrow sense, rhodopsin still refers to this protein and to any protein evolutionarily homologous to it. In a broad non-genetic sense, rhodopsin refers to any molecule, whether related by genetic descent or not (mostly not), consisting of an opsin and a chromophore (generally a variant of retinal). All animal rhodopsins arose (by gene duplication and divergence) late in the history of the large G-protein coupled receptor (GPCR) gene family, which itself arose after the divergence of plants, fungi, choanoflagellates and sponges from the earliest animals. The retinal chromophore is found solely in the opsin branch of this large gene family, meaning its occurrence elsewhere represents convergent evolution, not homology. Microbial rhodopsins are, by sequence, very different from any of the GPCR families.
The term bacterial rhodopsin originally referred to the first microbial rhodopsin discovered, known today as bacteriorhodopsin. The first bacteriorhodopsin turned out to be of archaeal origin, from Halobacterium salinarum. Since then, other microbial rhodopsins have been discovered, rendering the term bacterial rhodopsin ambiguous.
Table
Below is a list of some of the more well-known microbial rhodopsins and some of their properties.
The ion-translocating microbial rhodopsin family
The ion-translocating microbial rhodopsin (MR) family is a member of the TOG Superfamily of secondary carriers. Members of the MR family catalyze light-driven ion translocation across microbial cytoplasmic membranes or serve as light receptors. Most proteins of the MR family are of about the same size (250-350 amino acyl residues) and possess seven transmembrane helical spanners with their N-termini on the outside and their C-termini on the inside. There are 9 subfamilies in the MR family:
Bacteriorhodopsins pump protons out of the cell;
Halorhodopsins pump chloride (and other anions such as bromide, iodide and nitrate) into the cell;
Sensory rhodopsins, which normally function as receptors for phototactic behavior, are capable of pumping protons out of the cell if dissociated from their transducer proteins;
the Fungal Chaperones are stress-induced proteins of ill-defined biochemical function, but this subfamily also includes a H+-pumping rhodopsin;
the bacterial rhodopsin, called Proteorhodopsin, is a light-driven proton pump that functions as does bacteriorhodopsins;
the Neurospora crassa retinal-containing receptor serves as a photoreceptor (Neurospora ospin I);
the green algal light-gated proton channel, Channelrhodopsin-1;
Sensory rhodopsins from cyanobacteria.
Light-activated rhodopsin/guanylyl cyclase
A phylogenetic analysis of microbial rhodopsins and a detailed analysis of potential examples of horizontal gene transfer have been published.
Structure
Among the high resolution structures for members of the MR Family are the archaeal proteins, bacteriorhodopsin, archaerhodopsin, sensory rhodopsin II, halorhodopsin, as well as an Anabaena cyanobacterial sensory rhodopsin (TC# 3.E.1.1.6) and others.
Function
The association of sensory rhodopsins with their transducer proteins appears to determine whether they function as transporters or receptors. Association of a sensory rhodopsin receptor with its transducer occurs via the transmembrane helical domains of the two interacting proteins. There are two sensory rhodopsins in any one halophilic archaeon, one (SRI) that responds positively to orange light but negatively to blue light, the other (SRII) that responds only negatively to blue light. Each transducer is specific for its cognate receptor. An x-ray structure of SRII complexed with its transducer (HtrII) at 1.94 Å resolution is available. Molecular and evolutionary aspects of the light-signal transduction by microbial sensory receptors have been reviewed.
Homologues
Homologues include putative fungal chaperone proteins, a retinal-containing rhodopsin from Neurospora crassa, a H+-pumping rhodopsin from Leptosphaeria maculans, retinal-containing proton pumps isolated from marine bacteria, a green light-activated photoreceptor in cyanobacteria that does not pump ions and interacts with a small (14 kDa) soluble transducer protein and light-gated H+ channels from the green alga, Chlamydomonas reinhardtii. The N. crassa NOP-1 protein exhibits a photocycle and conserved H+ translocation residues that suggest that this putative photoreceptor is a slow H+ pump.
Most of the MR family homologues in yeast and fungi are of about the same size and topology as the archaeal proteins (283-344 amino acyl residues; 7 putative transmembrane α-helical segments), but they are heat shock- and toxic solvent-induced proteins of unknown biochemical function. They have been suggested to function as pmf-driven chaperones that fold extracellular proteins, but only indirect evidence supports this postulate. The MR family is distantly related to the 7 TMS LCT family (TC# 2.A.43). Representative members of the MR family can be found in the Transporter Classification Database.
Bacteriorhodopsin
Bacteriorhodopsin pumps one H+ ion, from the cytosol to the extracellular medium, per photon absorbed. Specific transport mechanisms and pathways have been proposed. The mechanism involves:
photo-isomerization of the retinal and its initial configurational changes,
deprotonation of the retinal Schiff base and the coupled release of a proton to the extracellular membrane surface,
the switch event that allows reprotonation of the Schiff base from the cytoplasmic side.
Six structural models describe the transformations of the retinal and its interaction with water 402, Asp85, and Asp212 in atomic detail, as well as the displacements of functional residues farther from the Schiff base. The changes provide rationales for how relaxation of the distorted retinal causes movements of water and protein atoms that result in vectorial proton transfers to and from the Schiff base. Helix deformation is coupled to vectorial proton transport in the photocycle of bacteriorhodopsin.
Most residues participating in the trimerization are not conserved in bacteriorhodopsin, a homologous protein capable of forming a trimeric structure in the absence of bacterioruberin. Despite a large alteration in the amino acid sequence, the shape of the intratrimer hydrophobic space filled by lipids is highly conserved between archaerhodopsin-2 and bacteriorhodopsin. Since a transmembrane helix facing this space undergoes a large conformational change during the proton pumping cycle, it is feasible that trimerization is an important strategy to capture special lipid components that are relevant to the protein activity.
Archaerhodopsin
Archaerhodopsins are light-driven H+ ion transporters. They differ from bacteriorhodopsin in that the claret membrane, in which they are expressed, includes bacterioruberin, a second chromophore thought to protect against photobleaching. Bacteriorhodopsin also lacks the omega loop structure that has been observed at the N-terminus of the structures of several archaerhodopsins.
Archaerhodopsin-2 (AR2) is found in the claret membrane of Halorubrum sp. It is a light-driven proton pump. Trigonal and hexagonal crystals revealed that trimers are arranged on a honeycomb lattice. In these crystals, bacterioruberin binds to crevices between the subunits of the trimer. The polyene chain of the second chromophore is inclined from the membrane normal by an angle of about 20 degrees and, on the cytoplasmic side, it is surrounded by helices AB and DE of neighboring subunits. This peculiar binding mode suggests that bacterioruberin plays a structural role for the trimerization of AR2. When compared with the AR2 structure in another crystal form containing no bacterioruberin, the proton release channel takes a more closed conformation in the P321 or P6(3) crystal; i.e., the native conformation of the protein is stabilized in the trimeric protein-bacterioruberin complex.
Mutants of Archaerhodopsin-3 (AR3) are widely used as tools in optogenetics for neuroscience research.
Channelrhodopsins
Channelrhodopsin-1 (ChR1) or channelopsin-1 (Chop1; Cop3; CSOA) of C. reinhardtii is closely related to the archaeal sensory rhodopsins. It has 712 aas with a signal peptide, followed by a short amphipathic region, and then a hydrophobic N-terminal domain with seven probable TMSs (residues 76-309) followed by a long hydrophilic C-terminal domain of about 400 residues. Part of the C-terminal hydrophilic domain is homologous to intersectin (EH and SH3 domain protein 1A) of animals (AAD30271).
Chop1 serves as a light-gated proton channel and mediates phototaxis and photophobic responses in green algae. Based on this phenotype, Chop1 could be assigned to TC category #1.A, but because it belongs to a family in which well-characterized homologues catalyze active ion transport, it is assigned to the MR family. Expression of the chop1 gene, or a truncated form of that gene encoding only the hydrophobic core (residues 1-346 or 1–517) in frog oocytes in the presence of all-trans retinal produces a light-gated conductance that shows characteristics of a channel passively but selectively permeable to protons. This channel activity probably generates bioelectric currents.
A homologue of ChR1 in C. reinhardtii is channelrhodopsin-2 (ChR2; Chop2; Cop4; CSOB). This protein is 57% identical, 10% similar to ChR1. It forms a cation-selective ion channel activated by light absorption. It transports both monovalent and divalent cations. It desensitizes to a small conductance in continuous light. Recovery from desensitization is accelerated by extracellular H+ and a negative membrane potential. It may be a photoreceptor for dark adapted cells. A transient increase in hydration of transmembrane α-helices with a t(1/2) = 60 μs tallies with the onset of cation permeation. Aspartate 253 accepts the proton released by the Schiff base (t(1/2) = 10 μs), with the latter being reprotonated by aspartic acid 156 (t(1/2) = 2 ms). The internal proton acceptor and donor groups, corresponding to D212 and D115 in bacteriorhodopsin, are clearly different from other microbial rhodopsins, indicating that their spatial positions in the protein were relocated during evolution. E90 deprotonates exclusively in the nonconductive state. The observed proton transfer reactions and the protein conformational changes relate to the gating of the cation channel.
Halorhodopsins
Halorhodopsin pumps one Cl− ion, from the extracellular medium into the cytosol, per photon absorbed. Although the ions move in the opposite direction, the current generated (as defined by the movement of positive charge) is the same as for bacteriorhodopsin and the archaerhodopsins.
Marine Bacterial Rhodopsin
A marine bacterial rhodopsin has been reported to function as a proton pump. However, it also resembles sensory rhodopsin II of archaea, as well as an open reading frame (ORF) from the fungus Leptosphaeria maculans (AF290180). These proteins exhibit 20-30% identity with each other.
Transport Reaction
The generalized transport reaction for bacterio- and sensory rhodopsins is:
H+ (in) + hν → H+ (out).
That for halorhodopsin is:
Cl− (out) + hν → Cl− (in).
See also
Bacteriorhodopsin
Proteorhodopsin
Opsin
References
Sensory receptors
Biological pigments
Protein families
Membrane proteins
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins | Microbial rhodopsin | [
"Biology"
] | 3,171 | [
"Protein classification",
"Membrane proteins",
"Protein families",
"Biological pigments",
"Pigmentation"
] |
48,050,521 | https://en.wikipedia.org/wiki/Acanthostigma%20septoconstrictum | Acanthostigma septoconstrictum is a species of fungus in the Tubeufiaceae family of fungi. It was isolated from decomposing wood in the Great Smoky Mountains National Park. A. septoconstrictum differs from its cogenerate species by having longer setae and asci and broader, asymmetrical ascospores which are constricted at their septa.
References
Further reading
Sanchez, Romina Magalí, Andrew N. Miller, and Maria Virginia Bianchinotti. "A new species of Acanthostigma (Tubeufiaceae, Dothideomycetes) from the southern hemisphere." Mycologia 104.1 (2012): 223–231.
Boonmee, Saranyaphat, et al. "Revision of lignicolous Tubeufiaceae based on morphological reexamination and phylogenetic analysis." Fungal Diversity 51.1 (2011): 63–102.
External links
MycoBank
Tubeufiaceae
Fungus species | Acanthostigma septoconstrictum | [
"Biology"
] | 209 | [
"Fungi",
"Fungus species"
] |
48,050,589 | https://en.wikipedia.org/wiki/Cable%20robots | Cable-driven parallel robots (cable robots in short, also called as cable-suspended robots and wire-driven robots as well) are a type of parallel manipulators in which flexible cables are used as actuators. One end of each cable is reeled around a rotor twisted by a motor, and the other end is connected to the end-effector. One famous example of cable robots is Skycam which is used to move a suspended camera in stadiums. Cables are much lighter than rigid linkages of a serial or parallel robot, and very long cables can be used without making the mechanism massive. As a result, the end-effector of a cable robot can achieve high accelerations and velocities and work in a very large workspace (e.g. a stadium). Numerous engineering articles have studied the kinematics and dynamics of cable robots (e.g. see In The International Journal of Robotics Research 27.9 (2008): 1007–1026. for an enhanced of the concept). Dynamic analysis of cable robots is not the same as that of other parallel robots because cables can only pull an object but they cannot push. Therefore, the manipulator is able to perform a task only if the force in all cables are non-negative. Accordingly, the workspace of cable robots is defined as a region in space where the end-effector is able to exert the required wrench (force and moment vectors) to the surroundings while all cables are in tension (non-negative forces). Many research works have focused on workspace analysis and optimization of cable robots. Workspace and controllability of cable robots can be enhanced by adding cables to structure of the robot. Consequently, redundancy plays a key role in design of cable robots.
However, workspace analysis and obtaining positive tension in the cables of a redundant cable robot can be complicated. In general, for a redundant robot, infinitely many solutions may exist, but for a redundant cable robot a solution is acceptable only if all the elements of the tension vector are non-negative. Finding such a solution can be challenging, especially if the end-effector is working along a trajectory and a continuous, smooth distribution of tensions is desired in the cables. Several methods have been presented in the literature to solve such problems; for example, a computational method based on particle swarm optimization has been introduced to find continuous, smooth solutions along a trajectory for a general redundant cable robot.
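A minimal sketch of this feasibility problem is given below for a planar point mass held by three cables (one degree of actuation redundancy); the geometry, load and tension floor are invented, and the brute-force sweep of the nullspace coefficient stands in for the dedicated solvers (quadratic programming, particle swarm optimization) used in the literature.

```python
import numpy as np

anchors = np.array([[-2.0, 3.0], [2.0, 3.0], [0.0, 4.0]])   # cable exit points
p = np.array([0.0, 1.0])                                    # end-effector position
u = (anchors - p) / np.linalg.norm(anchors - p, axis=1, keepdims=True)
A = u.T                        # 2x3 structure matrix: A @ tensions = wrench
w = np.array([0.0, 9.81])      # required wrench: support a 1 kg mass

t_part = np.linalg.pinv(A) @ w           # minimum-norm particular solution
null_dir = np.linalg.svd(A)[2][-1]       # nullspace direction (rank 2, 3 cables)

# Sweep the nullspace coefficient; keep the feasible tension vector
# (all components >= t_min) with the smallest norm.
t_min, best = 1.0, None
for lam in np.linspace(-50.0, 50.0, 5001):
    t = t_part + lam * null_dir
    if np.all(t >= t_min) and (best is None or np.linalg.norm(t) < np.linalg.norm(best)):
        best = t

print("cable tensions (N):", np.round(best, 2))
print("wrench check A @ t:", np.round(A @ best, 3))
```

Along a trajectory the same search must return tensions that vary smoothly from pose to pose, which is what the optimization-based methods mentioned above are designed to guarantee.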
In addition to parallel cable robots, cables have been used as actuators in serial robots as well. By employing cables as actuators, a mechanism can be designed to be much smaller and lighter (e.g. a human-like finger mechanism actuated by cables has been presented in the literature).
References
Mechanisms (engineering)
Robots | Cable robots | [
"Physics",
"Technology",
"Engineering"
] | 559 | [
"Machines",
"Robots",
"Physical systems",
"Mechanical engineering",
"Mechanisms (engineering)"
] |
37,687,136 | https://en.wikipedia.org/wiki/C19H23ClN2 | The molecular formula C19H23ClN2 (molar mass: 314.85 g/mol, exact mass: 314.1550 u) may refer to:
Clomipramine
Homochlorcyclizine
Molecular formulas | C19H23ClN2 | [
"Physics",
"Chemistry"
] | 65 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
37,689,509 | https://en.wikipedia.org/wiki/Neuroscience%20of%20rhythm | The neuroscience of rhythm refers to the various forms of rhythm generated by the central nervous system (CNS). Nerve cells, also known as neurons in the human brain are capable of firing in specific patterns which cause oscillations. The brain possesses many different types of oscillators with different periods. Oscillators are simultaneously outputting frequencies from .02 Hz to 600 Hz. It is now well known that a computer is capable of running thousands of processes with just one high-frequency clock. Humans have many different clocks as a result of evolution. Prior organisms had no need for a fast-responding oscillator. This multi-clock system permits quick response to constantly changing sensory input while still maintaining the autonomic processes that sustain life. This method modulates and controls a great deal of bodily functions.
Autonomic rhythms
The autonomic nervous system is responsible for many of the regulatory processes that sustain human life. Autonomic regulation is involuntary, meaning we do not have to think about it for it to take place. Many of these processes depend on a particular rhythm, such as sleep, heart rate, and breathing.
Circadian rhythms
Circadian literally translates to "about a day" in Latin. This refers to the human 24-hour cycle of sleep and wakefulness. This cycle is driven by light. The human body must photoentrain, or synchronize itself with light, in order to make this happen. The rod cells are the photoreceptor cells in the retina capable of sensing light; however, they are not what sets the biological clock. The photosensitive retinal ganglion cells contain a pigment called melanopsin. Cells containing this photopigment are depolarized in the presence of light, unlike the rods, which are hyperpolarized. Melanopsin encodes the day-night cycle to the suprachiasmatic nucleus (SCN) via the retinohypothalamic tract. The SCN evokes a response from the spinal cord. Preganglionic neurons in the spinal cord modulate the superior cervical ganglia, which synapse on the pineal gland. The pineal gland synthesizes the neurohormone melatonin from tryptophan. Melatonin is secreted into the bloodstream, where it affects neural activity by interacting with melatonin receptors on the SCN. The SCN is then able to influence the sleep-wake cycle, acting as the "apex of a hierarchy" that governs physiological timing functions. "Rest and sleep are the best example of self-organized operations within neuronal circuits".
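A toy phase-oscillator model makes the entrainment idea concrete: an internal clock with a non-24-hour free-running period is nudged by light until it locks to the 24-hour day. Everything in the sketch below (intrinsic period, coupling strength, square-wave light schedule) is an illustrative assumption, not a model of actual SCN dynamics.

```python
import numpy as np

dt = 0.1                                   # time step (hours)
hours = np.arange(0.0, 24 * 30, dt)        # simulate 30 days
intrinsic_period = 24.7                    # free-running clock period (h)
coupling = 0.05                            # strength of light-driven correction

phase = 0.0
for t in hours:
    light = 1.0 if (t % 24) < 12 else 0.0  # 12 h light : 12 h dark cycle
    day_phase = 2 * np.pi * (t % 24) / 24
    # advance at the intrinsic rate, plus a light-gated pull toward day phase
    phase += dt * (2 * np.pi / intrinsic_period
                   + coupling * light * np.sin(day_phase - phase))

final_day_phase = 2 * np.pi * (hours[-1] % 24) / 24
error_h = np.angle(np.exp(1j * (phase - final_day_phase))) * 24 / (2 * np.pi)
free_drift_h = hours[-1] * (1 - 24 / intrinsic_period)
print("entrained phase error: %.1f h (stays bounded)" % error_h)
print("free-running clock would have drifted: %.1f h" % free_drift_h)
```

The point of the comparison is that without the light term the clock drifts without bound, while with it the phase error settles to a small constant lag, which is the signature of photoentrainment.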
Sleep and memory have been closely correlated for over a century. It seemed logical that the rehearsal of learned information during the day, such as in dreams, could be responsible for this consolidation. REM sleep was first studied in 1953. It was thought to be the sole contributor to memory due to its association with dreams. It has recently been suggested that if sleep and waking experience are found to be using the same neuronal content, it is reasonable to say that all sleep has a role in memory consolidation. This is supported by the rhythmic behavior of the brain. Harmonic oscillators have the capability to reproduce a perturbation that happened in previous cycles. It follows that when the brain is unperturbed, such as during sleep, it is in essence rehearsing the perturbations of the day. Recent studies have confirmed that off-wave states, such as slow-wave sleep, play a part in consolidation as well as REM sleep. There have even been studies implying that sleep can lead to insight or creativity. Jan Born, from the University of Lübeck, showed subjects a number series with a hidden rule. He allowed one group to sleep for three hours, while the other group stayed awake. The awake group showed no progress, while most of the group that was allowed to sleep was able to solve the rule. This is just one example of how rhythm could contribute to humans' unique cognitive abilities.
Central pattern generation
A central pattern generator (CPG) is defined as a neural network that does not require sensory input to generate a rhythm. This rhythm can be used to regulate essential physiological processes. These networks are often found in the spinal cord. It has been hypothesized that certain CPGs are hardwired from birth. For example, an infant does not have to learn how to breathe, and yet it is a complicated action that involves a coordinated rhythm from the medulla. The first CPG was discovered by removing neurons from a locust. It was observed that the group of neurons was still firing as if the locust was in flight. In 1994, evidence of CPGs in humans was found. A former quadriplegic began to have some very limited movement in his lower legs. Upon lying down, he noticed that if he moved his hips just right, his legs began making walking motions. The rhythmic motor patterns were enough to give the man painful muscle fatigue.
A key component of CPGs is the half-center oscillator. In its simplest form, this refers to two neurons capable of rhythmogenesis when firing together. The generation of a biological rhythm, or rhythmogenesis, is achieved through alternating inhibition and activation. For example, the first neuron inhibits the second while it fires, but at the same time induces a slow depolarization in it; once the inhibition wanes, the depolarized second neuron fires an action potential and acts on the first in the same fashion. This allows for self-sustaining patterns of oscillation. Furthermore, new motor patterns, such as athletic skills or the ability to play an instrument, also use half-center oscillators and are simply learned perturbations of CPGs already in place.
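The alternation described above can be reproduced in a few lines. The sketch below uses the Matsuoka form of a half-center oscillator, two mutually inhibiting rate units with slow adaptation; all parameter values are illustrative assumptions.

```python
import numpy as np

tau, tau_a = 0.5, 2.5            # fast membrane and slow adaptation constants (s)
w_inh, beta, drive = 2.5, 2.5, 1.0
dt, steps = 0.01, 4000           # 40 s of simulated time

x = np.array([0.1, 0.0])         # membrane-like states (asymmetric start)
a = np.zeros(2)                  # adaptation (fatigue) states
out = np.zeros((steps, 2))
for k in range(steps):
    y = np.maximum(x, 0.0)                                # firing rates
    dx = (-x + drive - w_inh * y[::-1] - beta * a) / tau  # drive, mutual inhibition, fatigue
    da = (-a + y) / tau_a                                 # adaptation tracks own firing
    x = x + dt * dx
    a = a + dt * da
    out[k] = y

# The two units burst in alternation (antiphase) with no rhythmic input at all.
print("unit 0 active fraction:", round(float(np.mean(out[:, 0] > 0.01)), 2))
print("unit 1 active fraction:", round(float(np.mean(out[:, 1] > 0.01)), 2))
```

The constant `drive` input plays the role of the tonic excitation a spinal CPG receives, and the adaptation variable plays the role of the slow depolarization that hands activity back and forth between the two half-centers.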
Respiration
Ventilation requires periodic movements of the respiratory muscles. These muscles are controlled by a rhythm-generating network in the brain stem, comprising the ventral respiratory group (VRG). Although this process is not fully understood, it is believed to be governed by a CPG, and several models have been proposed. The classic three-phase model of respiration was proposed by D. W. Richter. It contains two stages of breathing, inspiratory and expiratory, that are controlled by three neural phases: inspiration, post-inspiration, and expiration. Specific neural networks are dedicated to each phase. They are capable of maintaining a sustained level of oxygen in the blood by triggering the lungs to expand and contract at the correct time. This was observed by measuring action potentials: certain groups of neurons were seen to synchronize with certain phases of respiration. The overall behavior was oscillatory in nature. This is an example of how an autonomous biorhythm can control a crucial bodily function.
Cognition
This refers to the types of rhythm that humans are able to generate, whether through recognition of others or through sheer creativity.
Sports
Muscle coordination, muscle memory, and innate game awareness all rely on the nervous system to produce a specific firing pattern in response to either an efferent or an afferent signal. Sports are governed by the same production and perception of oscillations that govern much of human activity. For example, in basketball, in order to anticipate the game one must recognize the rhythmic patterns of other players and perform actions calibrated to these movements. "The rhythm of a game of basketball emerges from the rhythm of individuals, the rhythm among team members, and the rhythmic contrasts between opposing teams". Although the exact oscillatory pattern that modulates different sports has not been found, studies have shown a correlation between athletic performance and circadian timing. It has been shown that certain times of the day are better for training and game-time performance: training has the best results when done in the morning, while it is better to play a game at night.
Music
The ability to perceive and generate music is frequently studied as a way to further understand human rhythmic processing. Research projects, such as Brain Beats, are currently studying this by developing beat-tracking algorithms and designing experimental protocols to analyze human rhythmic processing. This is rhythm in its most obvious form. Human beings have an innate ability to listen to a rhythm and track the beat, as with a piece like "Dueling Banjos". This can be done by bobbing the head, tapping the feet, or even clapping. Jessica Grahn and Matthew Brett call this spontaneous movement "motor prediction". They hypothesized that it is caused by the basal ganglia and the supplementary motor area (SMA). This would mean that those areas of the brain are responsible for spontaneous rhythm generation, although further research is required to prove this. However, they did show that the basal ganglia and SMA are highly involved in rhythm perception. In a study where patients' brain activity was recorded using fMRI, increased activity was seen in these areas both in patients moving spontaneously (bobbing their head) and in those who were told to stay still.
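Since beat-tracking algorithms are mentioned, here is a minimal sketch of one classic idea: estimating tempo from the autocorrelation of an onset-strength envelope. The synthetic envelope and the BPM search range are illustrative assumptions, not details of any specific project named above.

```python
import numpy as np

def estimate_tempo(onset_env, frame_rate, bpm_range=(60, 180)):
    """Pick the autocorrelation lag with the strongest periodicity."""
    env = onset_env - onset_env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]  # lags 0..N-1
    lo = int(frame_rate * 60.0 / bpm_range[1])   # shortest allowed beat period
    hi = int(frame_rate * 60.0 / bpm_range[0])   # longest allowed beat period
    best_lag = lo + np.argmax(ac[lo:hi])
    return 60.0 * frame_rate / best_lag

frame_rate = 100.0                               # envelope frames per second
t = np.arange(0, 30, 1 / frame_rate)
env = np.maximum(0, np.sin(2 * np.pi * 2.0 * t)) ** 8   # pulse train at 2 Hz
print(round(estimate_tempo(env, frame_rate)))    # -> 120 (BPM)
```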
Computational models
Computational neuroscience is the theoretical study of the brain used to uncover the principles and mechanisms that guide the development, organization, information-processing and mental abilities of the nervous system. Many computational models have attempted to quantify the process of how various rhythms are created by humans.
Avian song learning
Juvenile avian song learning is one of the best animal models used to study the generation and recognition of rhythm. The ability of birds to process a tutor song and then generate a near-perfect replica of that song parallels the human ability to learn rhythm.
The computational neuroscientists Kenji Doya and Terrence J. Sejnowski created a model of this using the zebra finch as the target organism. The zebra finch is perhaps one of the most easily understood examples of this among birds. The young zebra finch is exposed to a "tutor song" from the adult during a critical period, defined as the time of life when learning can take place, in other words when the brain has the most plasticity. After this period, the bird is able to produce an adult song, which is said to be crystallized at this point. Doya and Sejnowski evaluated three possible ways that this learning could happen: an immediate, one-shot perfection of the tutor song; error learning; and reinforcement learning. They settled on the third scheme. Reinforcement learning consists of a "critic" in the brain capable of evaluating the difference between the tutor song and the bird's own template song. Assuming the two are closer than in the last trial, this "critic" then sends a signal activating NMDA receptors on the articulator of the song. In the case of the zebra finch, this articulator is the robust nucleus of the archistriatum (RA). The NMDA receptors make the RA more likely to produce this template of the tutor song, thus leading to learning of the correct song.
Dr. Sam Sober explains the process of tutor-song recognition and generation using error learning. This refers to a signal generated by the avian brain that corresponds to the error between the tutor song and the auditory feedback the bird receives. The signal is simply minimized over time, which results in the learning of the song. Dr. Sober believes that this is also the mechanism employed in human speech learning, although humans constantly adjust their speech, whereas adult birds were believed to have crystallized their song. He tested this idea by using headphones to alter a Bengalese finch's auditory feedback. The bird actually corrected for up to 40% of the perturbation, providing support for the idea that error learning also operates in humans.
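A toy model can make the error-learning account concrete. The sketch below is hypothetical throughout (pitch values, learning rate, and retention term are invented constants, not Sober's model): a scalar "motor program" is driven by the mismatch between shifted auditory feedback and the stored target, while a retention term pulls back toward the original program, so compensation is only partial, reminiscent of the headphone-shift experiment.

```python
target = 440.0        # remembered "correct" pitch in Hz (illustrative)
shift = 1.05          # headphones raise the heard pitch by 5%
baseline = 440.0      # original motor program
motor = baseline
lr, retention = 0.2, 0.9    # learning rate; pull toward the old program

for _ in range(500):
    heard = motor * shift
    error = heard - target                  # auditory error signal
    motor = retention * motor - lr * error + (1 - retention) * baseline

initial_error = baseline * shift - target
residual_error = motor * shift - target
# partial compensation (~68% with these illustrative constants)
print(f"compensation: {100 * (1 - residual_error / initial_error):.0f}%")
```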
Macaque motor cortex
This animal model has been said to be more similar to humans than birds. It has been shown that humans demonstrate 15–30 Hz (beta) oscillations in the cortex while performing muscle-coordination exercises. This was also seen in macaque monkey cortices. The cortical local field potentials (LFPs) of conscious monkeys were recorded while they performed a precision grip task. More specifically, the pyramidal tract neurons (PTNs) were targeted for measurement. The primary frequency recorded was between 15 and 30 Hz, the same oscillation found in humans. These findings indicate that the macaque monkey cortex could be a good model for rhythm perception and production. One example of how this model is used is the investigation of the role of motor cortex PTNs in "corticomuscular coherence" (muscle coordination). In a similar study where LFPs were recorded from macaque monkeys performing a precision grip task, disruption of the PTNs resulted in a greatly reduced oscillatory response, and stimulation of the PTNs impaired the monkeys' performance of the grip task. It was concluded that PTNs in the motor cortex directly influence the generation of beta rhythms.
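As an illustration of how a 15–30 Hz component can be isolated from a recorded signal, here is a minimal band-pass sketch using SciPy; the synthetic LFP, filter order, and band edges are assumptions chosen only for demonstration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)
lfp = (np.sin(2 * np.pi * 22 * t)          # a 22 Hz "beta" component
       + 0.5 * np.sin(2 * np.pi * 4 * t)   # slow drift
       + 0.5 * np.random.randn(t.size))    # broadband noise

# 4th-order Butterworth band-pass over the beta band (15-30 Hz)
b, a = butter(4, [15 / (fs / 2), 30 / (fs / 2)], btype="band")
beta = filtfilt(b, a, lfp)                 # zero-phase filtering
print("beta-band share of variance:", round(np.var(beta) / np.var(lfp), 2))
```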
Imaging
Current methods
At the moment, recording methods are not capable of measuring small and large areas simultaneously with the temporal resolution that the circuitry of the brain requires. These techniques include EEG, MEG, fMRI, optical recordings, and single-cell recordings.
Future
Techniques such as large-scale single-cell recordings are movements in the direction of analyzing overall brain rhythms. However, these require invasive procedures, such as tetrode implantation, which do not allow a healthy brain to be studied. Pharmacological manipulation, cell-culture imaging, and computational biology also attempt this, but in the end they are indirect.
Frequency bands
The classification of frequency borders allowed for a meaningful taxonomy capable of describing brain rhythms, known as neural oscillations.
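A rough sketch of how such a taxonomy is used in practice: map band names to frequency borders and integrate spectral power over each band. The exact borders vary across the literature; the values below are common round-number conventions, not a fixed standard.

```python
import numpy as np
from scipy.signal import welch

# Approximate, commonly used band borders in Hz (conventions differ slightly).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (15, 30), "gamma": (30, 80)}

def band_powers(signal, fs):
    f, pxx = welch(signal, fs=fs, nperseg=int(4 * fs))
    return {name: pxx[(f >= lo) & (f < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

fs = 250.0
t = np.arange(0, 20, 1 / fs)
toy = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # alpha-dominated
powers = band_powers(toy, fs)
print(max(powers, key=powers.get))   # -> "alpha"
```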
References
Basic neuroscience research
Rhythm and meter
Articles containing video clips | Neuroscience of rhythm | [
"Physics"
] | 2,835 | [
"Spacetime",
"Rhythm and meter",
"Physical quantities",
"Time"
] |
37,689,664 | https://en.wikipedia.org/wiki/Resting%20state%20fMRI | Resting state fMRI (rs-fMRI or R-fMRI), also referred to as task-independent fMRI or task-free fMRI, is a method of functional magnetic resonance imaging (fMRI) that is used in brain mapping to evaluate regional interactions that occur in a resting or task-negative state, when an explicit task is not being performed. A number of resting-state brain networks have been identified, one of which is the default mode network. These brain networks are observed through changes in blood flow in the brain which creates what is referred to as a blood-oxygen-level dependent (BOLD) signal that can be measured using fMRI.
Because brain activity is intrinsic, present even in the absence of an externally prompted task, any brain region will have spontaneous fluctuations in BOLD signal. The resting state approach is useful to explore the brain's functional organization and to examine if it is altered in neurological or mental disorders. Because of the resting state aspect of this imaging, data can be collected from a range of patient groups including people with intellectual disabilities, pediatric groups, and even those that are unconscious. Resting-state functional connectivity research has revealed a number of networks which are consistently found in healthy subjects, different stages of consciousness and across species, and represent specific patterns of synchronous activity.
Basics of resting state fMRI
Functional magnetic resonance imaging (functional MRI or fMRI) is a specific magnetic resonance imaging (MRI) procedure that measures brain activity by detecting associated changes in blood flow. More specifically, brain activity is measured through low frequency BOLD signal in the brain.
The procedure is similar to MRI but uses the change in magnetization between oxygen-rich and oxygen-poor blood as its basic measure. This measure is frequently corrupted by noise from various sources and hence statistical procedures are used to extract the underlying signal. The resulting brain activation can be presented graphically by color-coding the strength of activation across the brain or the specific region studied. The technique can localize activity to within millimeters but, using standard techniques, no better than within a window of a few seconds.
FMRI is used both in research, and to a lesser extent, in clinical settings. It can also be combined and complemented with other measures of brain physiology such as EEG, NIRS, and functional ultrasound. Arterial spin labeling fMRI can be used as a complementary approach for assessing resting brain functions.
Physiological basis
The physiological blood-flow response largely determines the temporal sensitivity of BOLD fMRI, that is, how well the activity of neurons can be resolved in time. The basic time-resolution parameter is the sampling rate, or TR, which dictates how often a particular brain slice is excited and allowed to lose its magnetization. TRs can vary from the very short (500 ms) to the very long (3 seconds). For fMRI specifically, the haemodynamic response is assumed to last over 10 seconds, rising multiplicatively (that is, as a proportion of current value), peaking at 4 to 6 seconds, and then falling multiplicatively. Changes in the blood-flow system, the vascular system, integrate responses to neuronal activity over time. Because this response is a smooth continuous function, sampling with faster TRs helps only to map faster fluctuations, like respiratory and heart-rate signals.
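A common way to encode the shape just described is a double-gamma haemodynamic response function. The sketch below uses one frequently quoted parameterization (main response peaking near 5 s with a later undershoot); the shape parameters are an illustrative choice rather than the only accepted values.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, peak_shape=6.0, under_shape=16.0, under_ratio=1/6.0):
    """Canonical-style HRF: a gamma response minus a smaller, delayed undershoot."""
    h = gamma.pdf(t, peak_shape) - under_ratio * gamma.pdf(t, under_shape)
    return h / np.abs(h).max()

tr = 0.5                           # a fast TR mostly resamples the same smooth curve
t = np.arange(0.0, 32.0, tr)
hrf = double_gamma_hrf(t)
print("peak at ~%.1f s" % t[np.argmax(hrf)])   # ~5 s for these parameters
```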
While fMRI strives to measure the neuronal activity in the brain, the BOLD signal can be influenced by many physiological factors other than neuronal activity. For example, respiratory fluctuations and cardiovascular cycles affect the BOLD signal measured in the brain, and attempts are therefore usually made to remove them during processing of the raw fMRI data. Because of these sources of noise, many experts approached the idea of resting state fMRI very skeptically during the early uses of fMRI. Only recently have researchers become confident that the signal being measured is not an artifact caused by other physiological functions.
Resting state functional connectivity between spatially distinct brain regions reflects the repeated history of co-activation patterns within these regions, thereby serving as a measure of plasticity.
History
Bharat Biswal
In 1992, Bharat Biswal started his work as a graduate student at The Medical College of Wisconsin under the direction of his advisor, James S. Hyde, and discovered that the brain, even during rest, contains information about its functional organization. He had used fMRI to study how different regions of the brain communicate while the brain is at rest and not performing any active task. Though at the time, Biswal's research was mostly disregarded and attributed to another signal source, his resting neuroimaging technique has now been widely replicated and considered a valid method of mapping functional brain networks. Mapping the brain's activity while it is at rest holds many potentials for brain research and even helps doctors diagnose various diseases of the brain.
Marcus Raichle
Experiments by neurologist Marcus Raichle's lab at Washington University School of Medicine and other groups showed that the brain's energy consumption is increased by less than 5% of its baseline energy consumption while performing a focused mental task. These experiments showed that the brain is constantly active with a high level of activity even when the person is not engaged in focused mental work (the resting state). His lab has been primarily focused on finding the basis of this resting activity and is credited with many groundbreaking discoveries. These include the relative independence of blood flow and oxygen consumption during changes in brain activity, which provided the physiological basis of fMRI, as well as the discovery of the well-known default mode network.
Connectivity
Functional
Functional connectivity is the connectivity between brain regions that share functional properties. More specifically, it can be defined as the temporal correlation between spatially remote neurophysiological events, expressed as deviation from statistical independence across these events in distributed neuronal groups and areas. This applies to both resting state and task-state studies. While functional connectivity can refer to correlations across subjects, runs, blocks, trials, or individual time points, resting state functional connectivity focuses on connectivity assessed across individual BOLD time points during resting conditions. Functional connectivity has also been evaluated using the perfusion time series sampled with arterial spin labeled perfusion fMRI. Functional connectivity MRI (fcMRI), which can include resting state fMRI and task-based MRI, might someday help provide more definitive diagnoses for mental health disorders such as bipolar disorder and may also aid in understanding the development and progression of post-traumatic stress disorder, as well as evaluate the effect of treatment. Functional connectivity has been suggested to be an expression of the network behavior underlying high-level cognitive function, partially because, unlike structural connectivity, functional connectivity often changes on the order of seconds, as in the case of dynamic functional connectivity.
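On real data this computation runs on preprocessed BOLD time series; the core operation, a temporal correlation matrix between region-of-interest time courses, is sketched below on synthetic data with one shared fluctuation (all dimensions and coefficients are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_roi = 240, 4                        # 240 time points, 4 regions (assumed)
shared = rng.standard_normal(n_t)          # one common slow fluctuation
ts = 0.7 * shared[:, None] + rng.standard_normal((n_t, n_roi))
ts[:, 3] = rng.standard_normal(n_t)        # leave one region independent

fc = np.corrcoef(ts.T)                     # functional connectivity matrix
print(np.round(fc, 2))                     # regions 0-2 correlate; region 3 does not
```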
Networks
Default mode network
The default mode network (DMN) is a network of brain regions that are active when an individual is awake and at rest. The default mode network is an interconnected and anatomically defined brain system that preferentially activates when individuals focus on internal tasks such as daydreaming, envisioning the future, retrieving memories, and gauging others' perspectives. It is negatively correlated with brain systems that focus on external visual signals. It is one of the most studied networks present during resting state and is one of the most easily visualized networks.
Other resting state networks
Depending on the method of resting state analysis, functional connectivity studies have reported a number of neural networks that are found to be strongly functionally connected during rest. The key networks, also referred to as components, which are most frequently reported include: the DMN, the sensory/motor networks, the central executive network (CEN), up to three different visual networks, a ventral and dorsal attention network, the auditory network and the limbic network. As already reported, these resting-state networks consist of anatomically separated, but functionally connected regions displaying a high level of correlated BOLD signal activity. These networks are found to be quite consistent across studies, despite differences in the data acquisition and analysis techniques. Importantly, most of these resting-state components represent known functional networks, that is, regions that are known to share and support cognitive functions.
Analyzing data
Processing data
Many programs exist for the processing and analyzing of resting state fMRI data. Some of the most commonly used programs include SPM, AFNI, FSL (esp. Melodic for ICA), CONN, C-PAC, and Connectome Computation System (CCS).
Methods of analysis
There are many methods of both acquiring and processing rsfMRI data. The most popular methods of analysis focus either on independent components or on regions of correlation.
Independent component analysis
Independent component analysis (ICA) is a useful statistical approach in the detection of resting state networks. ICA separates a signal into non-overlapping spatial and time components. It is highly data-driven and allows for better removal of noisy components of the signal (motion, scanner drift, etc.). It also has been shown to reliably extract default mode network as well as many other networks with very high consistency. ICA remains in the forefront of the research methods.
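A minimal sketch of the idea using scikit-learn's FastICA on synthetic "two-network" data; the dimensions, noise level, non-Gaussian (Laplacian) time courses, and component count are illustrative assumptions, and real pipelines operate on preprocessed 4D volumes rather than a flat matrix.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_t, n_vox = 200, 500
# two spatially distinct "networks", each with an independent time course
maps = np.zeros((2, n_vox))
maps[0, :100] = 1.0
maps[1, 250:350] = 1.0
time_courses_true = rng.laplace(size=(n_t, 2))        # non-Gaussian sources
data = time_courses_true @ maps + 0.1 * rng.standard_normal((n_t, n_vox))

ica = FastICA(n_components=2, random_state=0)
time_courses = ica.fit_transform(data)   # one recovered time course per component
spatial_maps = ica.mixing_.T             # one spatial map per component
for m in spatial_maps:
    print("component peaks near voxel", int(np.argmax(np.abs(m))))
```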
Regional analysis
Other methods of observing networks and connectivity in the brain include the seed-based d mapping and region of interest (ROI) methods of analysis. In these cases, the signal from only a certain voxel or cluster of voxels, known as the seed or ROI, is used to calculate correlations with other voxels of the brain. This provides a much more precise and detailed look at specific connectivity in brain areas of interest. This can also be performed across the entire brain by utilizing an atlas, making it easier to define ROIs and measure connectivity. In 2021, Yeung and colleagues conducted a regional analysis utilizing a modified version of the Human Connectome Project (HCP) atlas, and found changes in the functional connectome of stroke patients during rehabilitative treatment. Overall connectivity between an ROI (such as the prefrontal cortex) and all other voxels of the brain can also be averaged, providing a measure of global brain connectivity (GBC) specific to that ROI.
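The seed-map and GBC computations reduce to simple correlation arithmetic. Below is a sketch on synthetic data; the cluster layout and sizes are arbitrary assumptions, and correlations use population (ddof=0) normalization for brevity, so values are approximate Pearson coefficients.

```python
import numpy as np

def seed_map(data, seed_idx):
    """Correlate one seed voxel's time series with every column of data (T x V)."""
    z = (data - data.mean(0)) / data.std(0)
    return (z * z[:, [seed_idx]]).mean(0)

def global_brain_connectivity(data, seed_idx):
    """Average connectivity of the seed to all voxels (a simple GBC variant)."""
    return seed_map(data, seed_idx).mean()

rng = np.random.default_rng(2)
data = rng.standard_normal((150, 300))     # 150 time points, 300 "voxels"
data[:, 1:30] += data[:, [0]]              # voxels 1..29 share the seed's signal
print(np.round(seed_map(data, 0)[:5], 2))  # high near the seeded cluster
print(round(float(global_brain_connectivity(data, 0)), 3))
```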
Other methods for characterizing resting-state networks include partial correlation, coherence and partial coherence, phase relationships, dynamic time warping distance, clustering, and graph theory.
Reliability and reproducibility
Resting-state functional magnetic resonance imaging (rfMRI) can image low-frequency fluctuations in spontaneous brain activity, making it a popular tool for macro-scale functional connectomics used to characterize inter-individual differences in normal brain function, mind-brain associations, and various disorders. This popularity makes the reliability and reproducibility of commonly used rfMRI-derived measures of the human functional connectome important. These metrics hold great potential for accelerating biomarker identification for various brain diseases, which calls for addressing reliability and reproducibility in the first place.
Combining imaging techniques
fMRI with DWI
With fMRI providing functional and DWI providing structural information about the brain, these two imaging techniques are commonly used in conjunction to provide a holistic view of brain network interactions. When collected from defined ROIs, fMRI data informs researchers of how activity (blood flow) in the brain changes over time or during a task. This is then bolstered by structural DWI data, which shows how individual white matter tracts connect these ROIs. Investigations harnessing these techniques have advanced the field of network neuroscience by further defining groups of regions in the brain which connect both structurally (having white matter tracts pass between them) and functionally (showing similar or opposite patterns of activity over time) into brain networks like the DMN. Advances in topological data analysis have established a coherent statistical framework for integrating functional and structural information, represented as functional and structural brain networks with distinct topologies.
This combined data provides unique clinical and neuropsychiatric benefit by enabling the investigation of how brain networks are disturbed, or white matter pathways compromised, by the presence of mental illness or structural damage. Altered brain network connectivity has been shown across a swathe of disorders, such as schizophrenia, depression, stroke, and brain tumors, underpinning their unique symptoms.
fMRI with EEG
Many imaging experts feel that in order to obtain the best combination of spatial and temporal information from brain activity, both fMRI and electroencephalography (EEG) should be used simultaneously. This dual technique combines the EEG's well-documented ability to characterize certain brain states with high temporal resolution and to reveal pathological patterns, with fMRI's (more recently discovered and less well understood) ability to image blood dynamics through the entire brain with high spatial resolution. Up to now, EEG-fMRI has been mainly seen as an fMRI technique in which the synchronously acquired EEG is used to characterize brain activity ('brain state') across time, allowing researchers to map (through statistical parametric mapping, for example) the associated haemodynamic changes.
The clinical value of these findings is the subject of ongoing investigations, but recent research suggests an acceptable reliability for EEG-fMRI studies and better sensitivity with higher-field scanners. Outside the field of epilepsy, EEG-fMRI has been used to study event-related (triggered by external stimuli) brain responses and has provided important new insights into baseline brain activity during resting wakefulness and sleep.
fMRI with TMS
Transcranial magnetic stimulation (TMS) uses small and relatively precise magnetic fields to stimulate regions of the cortex without dangerous invasive procedures. When these magnetic fields stimulate an area of the cortex, focal blood flow increases at the site of stimulation as well as at distant sites anatomically connected to the stimulated location. Positron emission tomography (PET) can then be used to image the brain and changes in blood flow and results show very similar regions of connectivity confirming networks found in fMRI studies and TMS can also be used to support and provide more detailed information on the connected regions.
Potential pitfalls
Potential pitfalls when using rsfMRI to determine functional network integrity are contamination of the BOLD signal by sources of physiological noise such as heart rate, respiration, and head motion. These confounding factors can often bias results in studies where patients are compared to healthy controls in the direction of hypothesized effects, for example a lower coherence might be found in the default network in the patient group, while the patient groups also moved more during the scan. Also, it has been shown that the use of global signal regression can produce artificial correlations between a small number of signals (e.g., two or three). Fortunately, the brain has many signals.
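The global signal regression pitfall is easy to reproduce numerically: with only a handful of signals, removing their mean forces the residuals to anticorrelate. A minimal sketch on synthetic, mutually uncorrelated inputs (sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n_t = 300
data = rng.standard_normal((n_t, 3))        # three independent "signals"

g = data.mean(axis=1, keepdims=True)        # the "global signal"
beta = (g * data).sum(0) / (g * g).sum()    # least-squares fit, one beta per signal
resid = data - g * beta                     # global signal regression
# off-diagonal correlations are pushed toward -0.5 despite independent inputs
print(np.round(np.corrcoef(resid.T), 2))
```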
Current and future applications
Research using resting state fMRI has the potential to be applied in clinical context, including use in the assessment of many different diseases and mental disorders.
Disease condition and changes in resting state functional connectivity
Alzheimer's disease: decreased connectivity
Mild cognitive impairment: abnormal connectivity
Autism: altered connectivity
Depression and effects of antidepressant treatment: abnormal connectivity
Bipolar disorder and effects of mood stabilizers: abnormal connectivity and network properties
Schizophrenia: disrupted networks
Attention deficit hyperactivity disorder (ADHD): altered "small networks" and thalamus changes
Aging brain: disruption of brain systems and motor network
Epilepsy: disruption and decrease/increase in connectivity
Parkinson's disease: altered connectivity
Obsessive compulsive disorder: increase/decrease in connectivity
Pain disorder: altered connectivity
Anorexia nervosa: connectivity alterations within corticolimbic circuitry and of insular cortex
Other types of current and future clinical applications for resting state fMRI include identifying group differences in brain disease, obtaining diagnostic and prognostic information, longitudinal studies and treatment effects, clustering in heterogeneous disease states, and pre-operative mapping and targeting intervention. Due to its lack of reliance on task performance and cognitive demands, resting state fMRI can be a useful tool in assessing brain alterations in disorders of impaired consciousness and cognition, as well as paediatric populations.
See also
Connectomics
Neuroimaging
List of functional connectivity software
Medical image computing
References
Magnetic resonance imaging
Medical imaging
Neuroimaging | Resting state fMRI | [
"Chemistry"
] | 3,280 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
37,691,693 | https://en.wikipedia.org/wiki/Sepiapterin%20reductase%20deficiency | Sepiapterin reductase deficiency is an inherited pediatric disorder characterized by movement problems, and most commonly displayed as a pattern of involuntary sustained muscle contractions known as dystonia. Symptoms are usually present within the first year of age, but diagnosis is delayed due to physicians lack of awareness and the specialized diagnostic procedures. Individuals with this disorder also have delayed motor skills development including sitting, crawling, and need assistance when walking. Additional symptoms of this disorder include intellectual disability, excessive sleeping, mood swings, and an abnormally small head size. SR deficiency is a very rare condition. The first case was diagnosed in 2001, and since then there have been approximately 30 reported cases. At this time, the condition seems to be treatable, but the lack of overall awareness and the need for a series of atypical procedures used to diagnose this condition pose a dilemma.
Signs and symptoms
Cognitive problems
Intellectual disability: Delay in cognitive development
Extreme mood swings
Language delay
Motor problems
Dystonia: involuntary muscle contractions
Axial hypotonia: low muscle tone and strength
Dysarthria: impairment in muscles used for speech
Muscle stiffness and tremors
Seizures
Coordination and balance impairment
Oculogyric crises: abnormal rotation of the eyes
The oculogyric crises usually occur in the latter half of the day; during these episodes patients experience extreme agitation and irritability along with uncontrolled head and neck movements. Apart from the aforementioned symptoms, patients can also display parkinsonism, sleep disturbances, a small head size (microcephaly), behavioral abnormalities, weakness, drooling, and gastrointestinal symptoms.
Causes
This disorder occurs through a mutation in the SPR gene, which encodes the sepiapterin reductase enzyme. The enzyme is involved in the last step of producing tetrahydrobiopterin, better known as BH4. BH4 is involved in the processing of amino acids and the production of neurotransmitters, specifically dopamine and serotonin, which are primarily used in the transmission of signals between nerve cells in the brain. The mutation in the SPR gene interferes with production of the enzyme, yielding enzymes with little or no function at all. This interference results in a lack of BH4 specifically in the brain; other parts of the body adapt and utilize alternate pathways for the production of BH4. The nonfunctional sepiapterin reductase enzymes thus cause a lack of BH4 that ultimately disrupts the production of dopamine and serotonin in the brain, leading to the visible symptoms present in patients suffering from sepiapterin reductase deficiency. SR deficiency is an inherited autosomal recessive disorder: each parent carries one copy of the mutated gene but typically shows no signs or symptoms of the condition.
Diagnosis
CSF neurotransmitter screening
The diagnosis of SR deficiency is based on the analysis of the pterins and biogenic amines found in the cerebrospinal fluid (CSF) of the brain. Pterin compounds function as cofactors in enzyme catalysis, while biogenic amines, which include adrenaline, dopamine, and serotonin, have functions that vary from the control of homeostasis to the management of cognitive tasks. This analysis reveals decreased concentrations of homovanillic acid (HVA) and 5-hydroxyindolacetic acid (HIAA), and elevated levels of 7,8-dihydrobiopterin, a compound produced in the synthesis of neurotransmitters. Sepiapterin is not detected by the regularly used methods applied in the investigation of biogenic monoamine metabolites in the cerebrospinal fluid; it must be determined by specialized methods, which indicate a marked and abnormal increase of sepiapterin in the cerebrospinal fluid. Confirmation of the diagnosis occurs by demonstrating high levels of CSF sepiapterin and a marked decrease of SR activity in fibroblasts, along with molecular analysis of the SPR gene.
Treatment
Levodopa and Carbidopa
SR deficiency is currently treated using a combination therapy of levodopa and carbidopa, treatments also used for individuals suffering from Parkinson's disease. The treatment is noninvasive and only requires the patient to take oral tablets 3 or 4 times a day, with the dosage of levodopa and carbidopa determined by the severity of the symptoms. Levodopa is in a class of medications called central nervous system agents; it is converted into dopamine in the brain. Carbidopa is in a class of medications called decarboxylase inhibitors; it works by preventing levodopa from being broken down before it reaches the brain. This treatment is effective in mitigating motor symptoms, but it does not totally eradicate them, and it is less effective on cognitive problems. Patients who have been diagnosed with SR deficiency and have undergone this treatment have shown improvements in most motor impairments, including oculogyric crises, dystonia, balance, and coordination.
Case Studies
Autosomal Recessive DOPA-responsive Dystonia
The diagnosis of sepiapterin reductase deficiency in a patient at the age of 14 years was delayed by an earlier diagnosis of an initially unclassified form of methylmalonic aciduria at the age of 2. At that time the hypotonia and delayed development were not considered to be suggestive of a neurotransmitter defect. The clinically relevant diagnosis was only made following the onset of dystonia with diurnal variation, when the patient was a teenager. Variability in occurrence and severity of other symptoms of the condition, such as hypotonia, ataxia, tremors, spasticity, bulbar involvement, oculogyric crises, and cognitive impairment, is comparable with autosomal dominant GTPCH and tyrosine hydroxylase deficiency, which are both classified as forms of DOPA-responsive dystonia.
Homozygous Mutation causing Parkinsonism
Hypotonia and parkinsonism were present in two Turkish siblings, brother and sister. By using exome sequencing, which sequences the protein-coding regions of the genome, researchers found a homozygous five-nucleotide deletion in the SPR gene of both siblings. This mutation is predicted to lead to premature translational termination (translation being the biological process through which proteins are manufactured). The homozygous mutation of the SPR gene in these two siblings exhibiting early-onset parkinsonism shows that SPR gene mutations can present with varying combinations of clinical symptoms and movement disorders. These differences result in a wider spectrum of the disease phenotype and increase the genetic heterogeneity, causing difficulties in diagnosing the disease.
Quantification of Sepiapterin in CSF
This study examined the clinical history and the CSF and urine of two Greek siblings who were both diagnosed with SR deficiency. Both siblings displayed delayed psychomotor development and a movement disorder. The diagnosis was confirmed by measuring the SR enzyme activity and by mutation analysis, performed using genomic DNA isolated from blood samples. The results showed that both patients had low concentrations of HVA and HIAA and high concentrations of sepiapterin in the CSF, but neopterin and biopterin were abnormal in only one sibling. These results indicate that, when diagnosing SR deficiency, the quantification of sepiapterin in the CSF is more important and more indicative of the deficiency than using neopterin and biopterin alone. The results also show that the urine concentrations of neurotransmitter metabolites are abnormal in patients with this disorder. This finding may provide an initial and easier indication of the deficiency before CSF analysis is performed.
See also
Succinic semialdehyde dehydrogenase deficiency
Neurotransmitter
Parkinson's disease
Cerebral palsy
Enzyme
Biochemistry
References
External links
SR deficiency Genetics Home Reference
SR deficiency National Institute of Health
Congenital disorders
Neurochemistry | Sepiapterin reductase deficiency | [
"Chemistry",
"Biology"
] | 1,704 | [
"Biochemistry",
"Neurochemistry"
] |
37,696,533 | https://en.wikipedia.org/wiki/Schrieffer%E2%80%93Wolff%20transformation | In quantum mechanics, the Schrieffer–Wolff transformation is a unitary transformation used to determine an effective (often low-energy) Hamiltonian by decoupling weakly interacting subspaces. Using a perturbative approach, the transformation can be constructed such that the interaction between the two subspaces vanishes up to the desired order in the perturbation. The transformation also perturbatively diagonalizes the system Hamiltonian to first order in the interaction. In this, the Schrieffer–Wolff transformation is an operator version of second-order perturbation theory. The Schrieffer–Wolff transformation is often used to project out the high energy excitations of a given quantum many-body Hamiltonian in order to obtain an effective low energy model. The Schrieffer–Wolff transformation thus provides a controlled perturbative way to study the strong coupling regime of quantum-many body Hamiltonians.
Although commonly attributed to the paper in which the Kondo model was obtained from the Anderson impurity model by J. R. Schrieffer and P. A. Wolff, Joaquin Mazdak Luttinger and Walter Kohn used this method in an earlier work about non-periodic k·p perturbation theory. Using the Schrieffer–Wolff transformation, the high-energy charge excitations present in the Anderson impurity model are projected out, and a low-energy effective Hamiltonian is obtained which has only virtual charge fluctuations. For the Anderson impurity model, the Schrieffer–Wolff transformation showed that the Kondo model lies in the strong coupling regime of the Anderson impurity model.
Derivation
Consider a quantum system evolving under the time-independent Hamiltonian operator of the form $H = H_0 + V$, where $H_0$ is a Hamiltonian with known eigenstates $|m\rangle$ and corresponding eigenvalues $E_m$, and where $V$ is a small perturbation. Moreover, it is assumed without loss of generality that $V$ is purely off-diagonal in the eigenbasis of $H_0$, i.e., $\langle m | V | m \rangle = 0$ for all $m$. Indeed, this situation can always be arranged by absorbing the diagonal elements of $V$ into $H_0$, thus modifying its eigenvalues to $E_m' = E_m + \langle m | V | m \rangle$.
The Schrieffer–Wolff transformation is a unitary transformation which expresses the Hamiltonian in a basis (the "dressed" basis) where it is diagonal to first order in the perturbation $V$. This unitary transformation is conventionally written as $H' = e^{S} H e^{-S}$, with an anti-Hermitian generator $S$. When $V$ is small, the generator $S$ of the transformation will likewise be small. The transformation can then be expanded in $S$ using the Baker–Campbell–Hausdorff formula, $H' = H + [S, H] + \tfrac{1}{2} [S, [S, H]] + \cdots$. Here, $[S, H]$ is the commutator between operators $S$ and $H$. In terms of $H_0$ and $V$, the transformation becomes $H' = H_0 + V + [S, H_0] + [S, V] + \tfrac{1}{2} [S, [S, H_0]] + \tfrac{1}{2} [S, [S, V]] + \cdots$. The Hamiltonian can be made diagonal to first order in $V$ by choosing the generator $S$ such that $[H_0, S] = V$. This equation always has a definite solution under the assumption that $V$ is off-diagonal in the eigenbasis of $H_0$. Substituting this choice in the previous transformation yields $H' = H_0 + \tfrac{1}{2} [S, V] + O(V^3)$. This expression is the standard form of the Schrieffer–Wolff transformation. Note that all the operators on the right-hand side are now expressed in a new basis "dressed" by the interaction to first order.
In the general case, the difficult step of the transformation is to find an explicit expression for the generator $S$. Once this is done, it is straightforward to compute the Schrieffer-Wolff Hamiltonian by computing the commutator $[S, V]$. The Hamiltonian can then be projected on any subspace of interest to obtain an effective projected Hamiltonian for that subspace. In order for the transformation to be accurate, the eliminated subspaces must be energetically well separated from the subspace of interest, meaning that the strength of the interaction $V$ must be much smaller than the energy difference between the subspaces. This is the same regime of validity as in standard second-order perturbation theory.
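As a worked illustration that is not part of the original text, the smallest nontrivial case is a two-level system, for which the generator and the transformed Hamiltonian can be written in closed form:

```latex
% Two-level example: H_0 = diag(E_1, E_2), V purely off-diagonal.
H = \begin{pmatrix} E_1 & v \\ v^* & E_2 \end{pmatrix}, \qquad
S = \frac{1}{E_1 - E_2} \begin{pmatrix} 0 & v \\ -v^* & 0 \end{pmatrix}
\quad\Rightarrow\quad [H_0, S] = V,
\qquad
H' \approx H_0 + \tfrac{1}{2} [S, V]
  = \begin{pmatrix}
      E_1 + \frac{|v|^2}{E_1 - E_2} & 0 \\
      0 & E_2 - \frac{|v|^2}{E_1 - E_2}
    \end{pmatrix}.
```

Expanding the exact eigenvalues $\frac{E_1 + E_2}{2} \pm \sqrt{\left(\frac{E_1 - E_2}{2}\right)^2 + |v|^2}$ to second order in $|v| / |E_1 - E_2|$ reproduces exactly these diagonal entries.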
Particular case
This section will illustrate how to practically compute the Schrieffer-Wolff (SW) transformation in the particular case of an unperturbed Hamiltonian that is block-diagonal.
But first, to properly compute anything, it is important to understand what is actually happening during the whole procedure. The SW transformation being unitary, it does not change the amount of information or the complexity of the Hamiltonian. The resulting shuffle of the matrix elements creates, however, a hierarchy in the information (e.g. eigenvalues), that can be used afterward for a projection in the relevant sector. In addition, when the off-diagonal elements coupling the blocks are much smaller than the typical unperturbed energy scales, a perturbative expansion is allowed to simplify the problem.
Consider now, for concreteness, the full Hamiltonian $H = H_0 + V$ with an unperturbed part made of independent blocks, $H_0 = \mathrm{diag}(H_1, H_2, \ldots)$. In physics, and in the original motivation for the SW transformation, it is desired that each block corresponds to a distinct energy scale. In particular, all degenerate energy levels should belong to the same block. This well-split Hamiltonian is our starting point $H_0$. A perturbative coupling $V$ takes now on a specific meaning: the typical matrix element coupling different sectors must be much smaller than the eigenvalue differences between those sectors. The SW transformation will modify each block into an effective Hamiltonian incorporating ("integrating out") the effects of the other blocks via the perturbation $V$. In the end, it is sufficient to look at the sector of interest (called a projection) and to work with the chosen effective Hamiltonian to compute, for instance, eigenvalues and eigenvectors. In physics, this would generate effective low- (or high-)energy Hamiltonians.
As mentioned in the previous section, the difficult step is the computation of the generator $S$ of the SW transformation. To obtain results comparable to second-order perturbation theory, it is enough to solve the equation $[H_0, S] = V$ (see Derivation). A simple trick in two steps is available when $H_0$ is block-diagonal.
The first step consists of finding the unitary transformation $U$ diagonalizing $H_0$. Since each block $H_i$ can be diagonalized with a unitary transformation $U_i$ (this is the matrix of right-eigenvectors of $H_i$), it is enough to build the block-diagonal matrix $U = \mathrm{diag}(U_1, U_2, \ldots)$, composed of the smaller rotations on its diagonal, to transform $H_0$ into a purely diagonal matrix $\tilde{H}_0 = U^\dagger H_0 U$.
The application of $U$ to the whole matrix $H$ yields then $\tilde{H} = U^\dagger H U = \tilde{H}_0 + \tilde{V}$,
with a transformed perturbation $\tilde{V} = U^\dagger V U$, which remains off-diagonal. In this new form, the second step to compute $S$ becomes very simple, since we obtain an explicit expression, in components:
$S_{ij} = \dfrac{\tilde{V}_{ij}}{\tilde{E}_i - \tilde{E}_j}$ for $i \neq j$ (and $S_{ii} = 0$),
where $\tilde{E}_i$ denotes the $i$th element on the diagonal of $\tilde{H}_0$. The reason for this comes from the observation that, for any matrix $A$ and diagonal matrix $D$, we have the relation $([D, A])_{ij} = (D_{ii} - D_{jj}) A_{ij}$. Since the generator for $\tilde{H}$ is defined by $[\tilde{H}_0, S] = \tilde{V}$, the above formula follows immediately. As expected, the associated operator $e^{S}$ is unitary (it satisfies $S^\dagger = -S$) because the denominator of $S_{ij}$ changes sign when transposed, and $\tilde{V}$ is Hermitian.
Using the last formula in the derivation, the second-order Schrieffer-Wolff-transformed Hamiltonian $\tilde{H}' = \tilde{H}_0 + \tfrac{1}{2} [S, \tilde{V}]$ has now an explicit form as a function of its elementary terms $\tilde{E}_i$ and $\tilde{V}_{ij}$:
$(\tilde{H}')_{ij} = \tilde{E}_i \, \delta_{ij} + \dfrac{1}{2} \sum_k \tilde{V}_{ik} \tilde{V}_{kj} \left( \dfrac{1}{\tilde{E}_i - \tilde{E}_k} + \dfrac{1}{\tilde{E}_j - \tilde{E}_k} \right)$
The "dressed" states have an energy
$\tilde{E}_i' = \tilde{E}_i + \sum_{k \neq i} \dfrac{|\tilde{V}_{ik}|^2}{\tilde{E}_i - \tilde{E}_k},$
following the recipe for second-order (non-degenerate) perturbation theory. This is applicable since the SW transformation is based on the approximation $|\tilde{V}_{ik}| \ll |\tilde{E}_i - \tilde{E}_k|$. Note that the unitary rotation $U$ does not affect the eigenvalues, meaning that $\tilde{E}_i'$ is also a meaningful approximation for the spectrum of the original Hamiltonian $H$.
The "dressed" states themselves can be derived, in first-order perturbation theory too, as
$|i'\rangle = |\tilde{i}\rangle + \sum_{k \neq i} \dfrac{\tilde{V}_{ki}}{\tilde{E}_i - \tilde{E}_k} |\tilde{k}\rangle.$
Notice the tilde on the unperturbed eigenstates $|\tilde{k}\rangle$ of $\tilde{H}_0$, a reminder of the current rotated basis of $\tilde{H}$. To express the eigenstates in the natural basis of $H$ itself, it is necessary to perform the inverse unitary transformation $U$.
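A quick numerical sanity check of these component formulas (matrix size, level spacings, and coupling scale are arbitrary illustrative choices): the dressed energies from $H_0 + \tfrac{1}{2}[S, V]$ should match exact diagonalization up to third-order corrections.

```python
import numpy as np

rng = np.random.default_rng(4)
E = np.array([0.0, 1.0, 5.0, 6.0])           # well-separated unperturbed levels
H0 = np.diag(E)
V = 0.05 * rng.standard_normal((4, 4))
V = V + V.T                                  # Hermitian (real symmetric) coupling
np.fill_diagonal(V, 0.0)                     # keep V purely off-diagonal

denom = E[:, None] - E[None, :]
S = np.divide(V, denom, out=np.zeros_like(V), where=denom != 0)  # S_ij = V_ij/(E_i - E_j)
Heff = H0 + 0.5 * (S @ V - V @ S)            # H0 + (1/2)[S, V]

exact = np.linalg.eigvalsh(H0 + V)
approx = np.sort(np.diag(Heff))
print("max deviation:", np.max(np.abs(exact - approx)))   # small, O(V^3)
```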
References
Further reading
Quantum mechanics | Schrieffer–Wolff transformation | [
"Physics"
] | 1,556 | [
"Theoretical physics",
"Quantum mechanics",
"Quantum physics stubs"
] |
37,698,469 | https://en.wikipedia.org/wiki/In%20vitro%20recombination | Recombinant DNA (rDNA), or molecular cloning, is the process by which a single gene, or segment of DNA, is isolated and amplified. Recombinant DNA is also known as in vitro recombination. A cloning vector is a DNA molecule that carries foreign DNA into a host cell, where it replicates, producing many copies of itself along with the foreign DNA. There are many types of cloning vectors such as plasmids and phages. In order to carry out recombination between vector and the foreign DNA, it is necessary the vector and DNA to be cloned by digestion, ligase the foreign DNA into the vector with the enzyme DNA ligase. And DNA is inserted by introducing the DNA into bacteria cells by transformation.
Steps
Preparation of foreign DNA
The two major sources of foreign DNA for molecular cloning are genomic DNA (gDNA) and complementary (or copy) DNA (cDNA). cDNA molecules are DNA copies of mRNA molecules, produced in vitro by the action of the enzyme reverse transcriptase. In order to obtain the cDNA for a specific gene, it is first necessary to construct a cDNA library.
Fragmenting gDNA
The DNA of interest needs to be fragmented to provide a DNA segment of suitable size. Preparation of DNA fragments for cloning is usually achieved by means of PCR, but it may also be accomplished by restriction enzyme digestion and fractionation by gel electrophoresis.
Preparing cDNA libraries
To prepare a cDNA library, the first step is to isolate the total mRNA from the cell type of interest. The enzyme reverse transcriptase, an RNA-dependent DNA polymerase, is then used to generate cDNAs. It depends on the presence of a primer, usually a poly-dT oligonucleotide, to prime DNA synthesis. DNA polymerase can use these single-stranded primers to initiate second-strand DNA synthesis on the mRNA templates. After the single-stranded DNA molecules are converted into double-stranded DNA molecules by DNA polymerase, they are inserted into vectors and cloned. To do this, the cDNAs are frequently methylated with a specific methyltransferase and joined to synthetic oligonucleotide "linkers"; the overhangs for ligation are then generated by digestion with the appropriate restriction enzyme.
Plasmid Vector
Recombinant DNA vectors function as carriers of the foreign DNA. Plasmids are small, closed-circular DNA molecules that exist separately from the chromosomes of their host. Their replication may be under stringent control (low copy number) or relaxed control (high copy number). The restriction sites, grouped in what is called the multiple cloning site or polylinker, give a wide choice of restriction sites for use in the cloning step.
References
Biochemistry methods | In vitro recombination | [
"Chemistry",
"Biology"
] | 592 | [
"Biochemistry methods",
"Molecular biology stubs",
"Biochemistry",
"Molecular biology"
] |
37,698,845 | https://en.wikipedia.org/wiki/N-Hydroxypiperidine | N-Hydroxypiperidine (also known as 1-piperidinol and 1-hydroxypiperidine) is the chemical compound with formula C5H11NO. It is a hydroxylated derivative of the heterocyclic compound piperidine.
Preparation
N-Hydroxypiperidine can be prepared by applying meta-chloroperoxybenzoic acid in methanol to the tertiary amine product of acrylonitrile and piperidine, followed by heating of the resulting tertiary N-oxide with acetone.
Reactions
N-Hydroxypiperidine, the N-hydroxy derivative of the secondary amine piperidine, can undergo an oxidation reaction with hydrogen peroxide in methanol as the solvent. This produces a nitrone, the nitrogen-containing analogue of a ketone. Competing elimination reactions can occur as well.
References
1-Piperidinyl compounds
Hydroxylamines | N-Hydroxypiperidine | [
"Chemistry"
] | 200 | [
"Hydroxylamines",
"Reducing agents"
] |
37,699,547 | https://en.wikipedia.org/wiki/Aqion | Aqion is a hydrochemistry software tool. It bridges the gap between scientific software (such like PhreeqC)
and the calculation/handling of "simple" water-related tasks in daily routine practice. The software aqion is free for private users, education and companies.
Motivation & history
First. Most hydrochemical software is designed for experts and scientists. In order to flatten the steep learning curve, aqion provides an introduction to fundamental water-related topics in the form of a "chemical pocket calculator".
Second. The program mediates between two terminological concepts: The calculations are performed in the "scientific realm" of thermodynamics (activities, speciation, log K values, ionic strength, etc.). Then, the output is translated into the "language" of common use: molar and mass concentrations, alkalinity, buffer capacities, water hardness, conductivity and others.
History. Version 1.0 was released in January 2012 (after a half-year test run in 2011). The project is active with 1-2 updates per month.
Features
Validates aqueous solutions (charge balance error, parameter adjustment)
Calculates physico-chemical parameters: alkalinity, buffer capacities (ANC, BNC), water hardness, ionic strength
Calculates aqueous speciation and complexation
Calculates pH of solutions after addition of chemicals (acids, bases, salts)
Calculates the calcite-carbonate system (closed/open system, Langelier Saturation Index)
Calculates mineral dissolution, precipitation, and saturation indices
Calculates mixing of two waters
Calculates reduction-oxidation (redox) reactions
Plots titration curves
Fields of application
Water analysis and water quality
Geochemical modeling (in simplest form)
Education
Limits of application
only inorganic species (no organic chemistry)
only equilibrium thermodynamics (no chemical kinetics)
only aqueous solutions with ionic strength ≤ 0.7 mol/L (no brines)
Basic algorithm & numerical solver
There are two fundamental approaches in hydrochemistry: Law of mass action (LMA) and Gibbs energy minimization (GEM).
The program aqion belongs to the category LMA approach. In a nutshell: A system of NB independent basis components j (i.e. primary species), that combines to form NS secondary species i, is represented by a set of mass-action and mass-balance equations:
(1) mass action law: $\{i\} = K_i \prod_j \{j\}^{\nu_{i,j}}$ with i = 1 ... NS
(2) mass balance law: $T_j = [j] + \sum_i \nu_{i,j} \, [i]$ with j = 1 ... NB
where Ki is the equilibrium constant of formation of the secondary species i, Tj is the total (analytical) concentration of basis species j, and νi,j represents the stoichiometric coefficient of basis species j in secondary species i (the values of νi,j can be positive or negative). Here, activities ai are symbolized by curly brackets {i} while concentrations ci by rectangular brackets [i]. Both quantities are related by the
(3) activity correction: $\{i\} = \gamma_i \, [i]$
with γi as the activity coefficient calculated by the Debye–Hückel equation and/or Davies equation. Inserting Eq.(1) into Eq.(2) yields a nonlinear polynomial function fj for the j-th basis species:
(4) $f_j = [j] + \sum_i \nu_{i,j} \, \gamma_i^{-1} K_i \prod_k \left( \gamma_k [k] \right)^{\nu_{i,k}} - T_j = 0$
which is the objective function of the Newton–Raphson method.
To solve Eq.(4) aqion adopts the numerical solver from the open-source software PhreeqC.
The equilibrium constants Ki are taken from the thermodynamic database wateq4f.
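To illustrate the kind of root-finding problem Eq.(4) poses, the sketch below applies Newton–Raphson to the simplest possible speciation task: the pH of a dilute weak acid. The acid constant, total concentration, and unit activity coefficients are illustrative assumptions; a real LMA code iterates over all basis species simultaneously, with Debye–Hückel or Davies corrections.

```python
import math

Ka, Kw, C = 10**-4.76, 1e-14, 1e-3   # illustrative: acetic acid, 1 mmol/L, gamma = 1

def f(h):
    """Charge balance [H+] - [OH-] - [A-] = 0, with [A-] = Ka*C/(Ka + [H+])."""
    return h - Kw / h - Ka * C / (Ka + h)

def dfdh(h):
    return 1 + Kw / h**2 + Ka * C / (Ka + h)**2

h = 1e-7                              # initial guess: neutral water
for _ in range(50):
    step = f(h) / dfdh(h)             # Newton-Raphson update
    h -= step
    if abs(step) < 1e-12 * h:
        break
print("pH =", round(-math.log10(h), 2))   # ~3.9 for these constants
```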
Examples, test & verification
The software aqion is shipped with a set of example solutions (input waters) and a tutorial on how to tackle typical water-related problems (an online manual with about 40 examples). More examples and exercises for testing and re-running can be found in classical textbooks of hydrochemistry.
The program was verified by benchmark tests of specific industry standards.
References
External links
PHREEQC
Online calculator for pH, aqueous speciation, saturation indices, alkalinity, EC
Computational chemistry software
Science education software | Aqion | [
"Chemistry"
] | 857 | [
"Computational chemistry",
"Computational chemistry software",
"Chemistry software"
] |
35,243,576 | https://en.wikipedia.org/wiki/Froth%20pump | Mineral processing often relies on the formation of froth to separate rich minerals from gangue. In the process, chemicals and air are added. The rich minerals entrap the air and move as bubbles to the top of a flotation cell while the sand and clays of no commercial values sink to the bottom to form tailings. The whole process may be done in tanks in series called roughers, scavengers and cleaners.
In oilsand extraction plants, the froth is formed to remove the bitumen and is called oilsand froth. Oilsand froth is difficult to pump at very low speeds and at low temperatures. However, above a certain speed in a steel pipe, water may separate and form a lubrication layer. The process is called self-lubrication, and depending on the temperature (25–45 °C), the friction losses are between 10 and 20 times the equivalent friction losses of water. When entering a centrifugal pump, a low pressure forms at the eye of the impeller. Pre-rotation creates centrifugal forces that push the dense slurry towards the wall of the pipe while leaving a core of air at the center. According to Abulnaga (2004), the froth pump may use
- (a) a very large eye diameter
- (b) an inducer, or impeller vanes that extend into the suction pipe
- (c) a recirculating pipe from the discharge side of the pump carrying pressurized froth to break up the bubbles at the eye
- (d) tandem vanes at the impeller shroud
- (e) split or secondary vanes at the shroud
- (f) a vertical arrangement
The split vanes in a slurry pump must be thick enough to resist continuous wear.
References
Pumps | Froth pump | [
"Physics",
"Chemistry"
] | 378 | [
"Pumps",
"Hydraulics",
"Physical systems",
"Turbomachinery"
] |
35,243,831 | https://en.wikipedia.org/wiki/Dominant%20functor | In category theory, an abstract branch of mathematics, a dominant functor is a functor F : C → D in which every object of D is a retract of an object of the form F(x) for some object X of C.
References
Functors | Dominant functor | [
"Mathematics"
] | 56 | [
"Functions and mappings",
"Mathematical structures",
"Category theory stubs",
"Mathematical objects",
"Mathematical relations",
"Functors",
"Category theory"
] |
35,252,470 | https://en.wikipedia.org/wiki/Center%20of%20pressure%20%28terrestrial%20locomotion%29 | In biomechanics, center of pressure (CoP) is the term given to the point of application of the ground reaction force vector. The ground reaction force vector represents the sum of all forces acting between a physical object and its supporting surface. Analysis of the center of pressure is common in studies on human postural control and gait. It is thought that changes in motor control may be reflected in changes in the center of pressure. In biomechanical studies, the effect of some experimental condition on movement execution will regularly be quantified by alterations in the center of pressure.
The center of pressure is not a static outcome measure. For instance, during human walking, the center of pressure is near the heel at the time of heelstrike and moves anteriorly throughout the step, being located near the toes at toe-off. For this reason, analysis of the center of pressure will need to take into account the dynamic nature of the signal. In the scientific literature various methods for the analysis of center of pressure time series have been proposed.
Measuring CoP
CoP measurements are commonly gathered through the use of a force plate. A force plate gathers data in the anterior-posterior direction (forward and backward), the medial-lateral direction (side-to-side) and the vertical direction, as well as moments about all 3 axes. Together, these can be used to calculate the position of the center of pressure relative to the origin of the force plate.
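In the common case of a plate whose measurement origin lies a small distance below the top surface, the CoP follows from the measured force and moment vectors. Axis and sign conventions vary by manufacturer, so the sketch below uses one typical right-handed convention (z pointing up) rather than a universal rule.

```python
def center_of_pressure(F, M, z_off=0.0):
    """CoP on the plate surface from force F = (Fx, Fy, Fz) and moment
    M = (Mx, My, Mz) measured at the plate origin; z_off is the
    origin-to-surface distance (meters)."""
    Fx, Fy, Fz = F
    Mx, My, Mz = M
    cop_x = (-My - Fx * z_off) / Fz    # anterior-posterior coordinate
    cop_y = ( Mx - Fy * z_off) / Fz    # medial-lateral coordinate
    return cop_x, cop_y

# a 700 N vertical load applied 0.05 m from the plate origin along x
print(center_of_pressure((0.0, 0.0, 700.0), (0.0, -35.0, 0.0)))  # -> (0.05, 0.0)
```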
Relationship to balance
CoP and center of gravity (CoG) are both related to balance in that they are dependent on the position of the body with respect to the supporting surface. Center of gravity is subject to change based on posture. Center of pressure is the location on the supporting surface where the resultant vertical force vector would act if it could be considered to have a single point of application.
A shift of CoP is an indirect measure of postural sway and thus a measure of a person’s ability to maintain balance. People sway in the anterior-posterior direction (forward and backward) and the medial-lateral direction (side-to-side) when they are simply standing still. This comes as a result of small contractions of muscles in the body to maintain an upright position. An increase in sway is not necessarily an indicator of poorer balance so much as it is an indicator of decreased neuromuscular control, although it has been noted that postural sway is a precursor to a fall.
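Sway is usually summarized by simple statistics of the CoP trajectory. The sketch below computes two common ones (path length and per-axis RMS) on a synthetic random-walk trajectory; the column order (medial-lateral first, anterior-posterior second) and all numeric values are assumed for illustration.

```python
import numpy as np

def sway_metrics(cop_xy, fs):
    """Posturographic summaries of a CoP trajectory in meters; fs in Hz."""
    cop = cop_xy - cop_xy.mean(axis=0)        # sway about the mean CoP point
    path = np.sum(np.linalg.norm(np.diff(cop, axis=0), axis=1))
    rms_ml, rms_ap = cop.std(axis=0)          # per-axis RMS amplitude
    return {"path_length_m": path,
            "mean_velocity_m_s": path / (len(cop) / fs),
            "rms_ml_m": rms_ml, "rms_ap_m": rms_ap}

rng = np.random.default_rng(5)
traj = np.cumsum(0.0005 * rng.standard_normal((3000, 2)), axis=0)  # 30 s at 100 Hz
print(sway_metrics(traj, fs=100.0))
```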
Notes
References
Biomechanics
Walking
Pressure | Center of pressure (terrestrial locomotion) | [
"Physics",
"Mathematics"
] | 676 | [
"Biomechanics",
"Point (geometry)",
"Geometric centers",
"Mechanics",
"Symmetry"
] |
23,510,634 | https://en.wikipedia.org/wiki/Offshore%20Energies%20UK | Offshore Energies UK (OEUK), formerly known as Oil and Gas UK (OGUK), is a trade association for the United Kingdom offshore energies industry. The current Chief Executive is David Whitehouse.
History
OEUK is a not-for-profit organisation, established in April 2007 on the foundations of the UK Offshore Operators Association (UKOOA). The association is the leading representative body for the UK offshore energies industry, with a history stretching back over 30 years. Its membership is open to all companies active in the area; these include businesses such as supermajors, independent oil companies, wind and hydrogen companies, and SMEs working in the supply chain.
Function
OEUK is a trade association for the whole sector. It is a source of information for and about the UK upstream industry and a gateway to industry networks and expertise.
They do this by:
raising the profile of the UK offshore energies sector.
promoting open dialogue within and across all sectors of the industry on issues including technical, fiscal, safety, environmental and skills issues, and brokering solutions.
developing and delivering industry-wide initiatives, and engaging with governments and other external organisations with a stake in the industry's future.
Location
The association is situated near St Paul's Cathedral on the 1st Floor of Paternoster House in the City of London. It also has an office on the 4th Floor, Annan House, 33-35 Palmerston Road, Aberdeen, AB11 5QP, in the city where the UKOOA was based.
Subsidiaries
OEUK has one subsidiary:
Leading Oil and Gas Industry Competitiveness
See also
Petroleum industry in Aberdeen
Oil and gas industry in the United Kingdom
North Sea oil
List of oil and gas fields of the North Sea
References
Trade associations based in the United Kingdom
Non-profit organisations based in the United Kingdom
Organisations based in Aberdeen
Organisations based in the City of Westminster
Petroleum industry in the United Kingdom
Petroleum organizations
Organizations established in 2007
2007 establishments in the United Kingdom | Offshore Energies UK | [
"Chemistry",
"Engineering"
] | 389 | [
"Petroleum",
"Petroleum organizations",
"Energy organizations"
] |
23,511,307 | https://en.wikipedia.org/wiki/Collision%20avoidance%20in%20transportation | In transportation, collision avoidance is the maintenance of systems and practices designed to prevent vehicles (such as aircraft, motor vehicles, ships, cranes and trains) from colliding with each other. They perceive the environment with sensors and prevent collisions using the data collected from the sensors. Collision avoidance is used in autonomous vehicles, aviation, trains and water transport. Examples of collision avoidance include:
Airborne collision avoidance systems for aircraft
Automatic Identification System for collision avoidance in water transport
Collision avoidance (spacecraft)
Collision avoidance system in automobiles
Positive train control
Tower Crane Anti-Collision Systems
Collision avoidance requires knowing the positions of objects in the vehicle's surroundings and the location of the vehicle relative to them.
Technology
Many collision avoidance systems need two things:
The position of all other vehicles
The position of the vehicle relative to other vehicles
The first step in collision avoidance is perception, which can use sensors like LiDAR, visual cameras, thermal or IR cameras, or solid-state devices. Sensors are divided according to the part of the electromagnetic spectrum they use, and fall into two types: passive and active. Examples of active sensors are LiDAR, radar and sonar; examples of passive sensors are cameras and thermal sensors.
Passive sensors
Passive sensors detect energy emitted by objects around them. They are primarily visual or infrared cameras. Visual cameras operate in visible light, and thermal cameras operate in infrared light (wavelengths of 700 nanometres to 14 micrometres).
Visual Cameras
Cameras rely on capturing pictures of their surroundings to extract useful information. The advantages of cameras are their small size, low weight and flexibility. Their disadvantages are limited image quality and sensitivity to lighting and weather conditions. They rely on image-processing systems to detect objects.
Infrared cameras
Infrared sensors use infrared light to detect objects. They are primarily used in low-light conditions and can be combined with visual cameras to overcome the poor performance of visual cameras in low lighting. They have a lower resolution than traditional cameras.
Active sensors
Active sensors emit radiation and read the reflected radiation. An active sensor has a transmitter and a receiver. A transmitter emits a signal like a light wave, an electrical signal, or an acoustic signal; this signal then bounces off an object, and the receiver of the sensor reads the reflected signal. They are fast, require less processing power and are affected less by weather.
Radar
A radio detection and ranging (radar) sensor transmits a radio signal which bounces back to the radar when it encounters an object. The distance between the object and the radar is calculated from the time the signal takes to bounce back. Radar systems have good resistance to weather conditions.
Lidar
In a light detection and ranging (LiDAR) sensor, one part emits laser pulses onto the surface and the other reads the reflection to measure the time it took for each pulse to bounce back in order to calculate the distance. Since LiDAR uses a short wavelength, it can detect small objects. A LiDAR cannot detect transparent objects such as clear glass. Another sensor, such as an ultrasonic sensor, can be used to overcome this issue.
Sonar
Ultrasonic sensors measure the distance between an object and the sensor by sending out sound waves and then listening for the waves to be reflected back from the object. The frequency at which the sound waves are produced is higher than what is audible to humans. The distance can be derived using the formula

d = v · t / 2,

where d is the distance, v is the speed of the wave, and t is the round-trip time of flight.
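A minimal sketch of this calculation, assuming the speed of sound in air at roughly room temperature:

```python
def sonar_distance(time_of_flight_s, wave_speed_m_s=343.0):
    """Distance to an object from an ultrasonic round-trip time.

    The factor of 2 accounts for the wave travelling to the
    object and back; 343 m/s is the speed of sound in air at
    about 20 degrees Celsius.
    """
    return wave_speed_m_s * time_of_flight_s / 2.0

# Example: a 5.8 ms round trip corresponds to roughly 1 m.
print(sonar_distance(5.8e-3))  # ~0.99 m
```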
Autobrake
Warnings are given before the autobrake engages. If the driver ignores the warnings, the autonomous brake, or autobrake, applies partial or full braking. It can be active at any speed.
Blind spot monitoring
A blind-spot monitoring system scans the spaces near the car with radars or cameras to detect any vehicles that may be approaching or hiding in blind zones. When such a vehicle is detected, an illuminated symbol appears in the relevant side-view mirror.
Rear cross-traffic warning
Cross-traffic warning notifies the driver when traffic approaches from the sides while reversing. The alert generally consists of a sound (such as an auditory chirp) and a visual signal in either the outside mirror or the dash display for the back-up camera.
Pedestrian detection and braking
Pedestrian detection can identify a pedestrian who crosses into the path of the vehicle. Certain vehicles will automatically apply the brakes, either fully or partially. Certain systems can also detect cyclists.
Adaptive headlights
Adaptive headlights swivel as the driver turns the steering wheel, illuminating the road around bends.
Lane departure warning (LDW)
Lane departure warning uses cameras and several sensors to detect lane markers and monitor the distance between the vehicle and these lanes. If the vehicle leaves the lane without signaling, a beep may be heard. It may also use physical systems such as vibration of the steering wheel or seat. In advanced versions, it may also apply brakes or turn the steering wheel to keep the vehicle within the lane.
Classification
By methods used
There are four methods used for performing collision avoidance. They are:
Geometric methods
Force field methods
Optimisation-based methods
Sense and avoid methods
Geometric methods
Geometric approaches analyse geometric attributes to make sure that a defined minimum distance between vehicles is not breached.
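As an illustrative sketch (not a standardized algorithm), a geometric check can propagate two constant-velocity vehicles to their closest point of approach and compare that distance against the required minimum separation:

```python
import numpy as np

def cpa_violates_separation(p1, v1, p2, v2, d_min):
    """Closest point of approach between two constant-velocity
    vehicles; returns (violation, t_cpa).

    p1, p2 : current positions (2D or 3D)
    v1, v2 : velocity vectors
    d_min  : required minimum separation
    """
    dp = np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)
    dv = np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float)
    denom = float(dv @ dv)
    # Nearly parallel velocities: separation stays constant.
    t_cpa = 0.0 if denom < 1e-12 else max(0.0, -float(dp @ dv) / denom)
    closest = np.linalg.norm(dp + dv * t_cpa)
    return closest < d_min, t_cpa

# Two vehicles on converging headings, 100 m apart.
print(cpa_violates_separation([0, 0], [10, 0], [100, 10], [-10, 0], 15.0))
# -> (True, 5.0): they pass within 10 m after 5 s.
```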
Force field methods
Force-field methods, also called potential field methods, use the concept of a repulsive or attractive force field to repel an agent from an obstacle or attract it to a target.
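A minimal sketch of one potential-field steering step follows; the quadratic attractive term and the Khatib-style repulsive term are one common formulation, and the gains and influence radius are illustrative values, not prescribed constants:

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0,
                         k_rep=1.0, influence=5.0):
    """Resultant steering force of a basic potential-field method.

    A quadratic attractive well pulls the agent toward the goal;
    each obstacle within `influence` adds a repulsive force that
    vanishes at the influence radius and grows near the obstacle.
    """
    pos, goal = np.asarray(pos, dtype=float), np.asarray(goal, dtype=float)
    force = k_att * (goal - pos)  # attractive term
    for obs in obstacles:
        diff = pos - np.asarray(obs, dtype=float)
        d = np.linalg.norm(diff)
        if 0.0 < d < influence:
            # Repulsion along the direction away from the obstacle.
            force += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)
    return force

# Example: goal straight ahead, one obstacle just off the direct path.
print(potential_field_step([0, 0], [10, 0], [[2, 0.5]]))
```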
Optimisation-based methods
Optimisation-based methods calculate a trajectory that avoids collisions using geographical information.
Sense and avoid methods
Sense and avoid methods detect and avoid individual objects without requiring information about the attributes of other objects.
By activity
Depending on when they are deployed, collision avoidance systems can be classified into passive and active systems.
Passive types
Methods of collision avoidance like seatbelts and airbags are primarily designed to reduce injury to the driver. They are passive types of collision avoidance. This includes rescue systems that notify rescue centers of an accident.
Active types
With the addition of camera and radar sensing technologies, active types of collision avoidance can assist or warn the driver, or take control in dangerous situations.
Uses
In aviation
Unmanned aerial vehicles use collision avoidance systems to operate safely. The traffic collision avoidance system (TCAS) is widely used; it is a universally accepted last resort meant to reduce the chance of collisions.
In autonomous driving
Collision avoidance is also used in autonomous cars. The aim of a collision avoidance system in vehicles is to prevent collisions, primarily caused by negligence or blind spots, by developing safety measures.
In trains
Automatic Train Protection, an important function of a train control system, helps prevent collisions by managing the speed of the train. Kavach is a collision avoidance system used in the Indian Railways.
In ships and other water transport
Automatic identification systems are used for collision avoidance in water transport.
In spacecraft and space stations
Collision avoidance has been routinely used in spacecraft or space stations (when possible) to ensure their safety. The International Space Station (ISS) has performed 14 maneuvers between 2008 and 2014 due to the threat of a collision.
See also
Contention (telecommunications)
Autonomous vehicles
References
Collision | Collision avoidance in transportation | [
"Physics"
] | 1,427 | [
"Collision",
"Mechanics"
] |
23,512,322 | https://en.wikipedia.org/wiki/Polonium%20dichloride | Polonium dichloride is a chemical compound of the radioactive metalloid, polonium and chlorine. Its chemical formula is PoCl2. It is an ionic salt.
Structure
Polonium dichloride appears to crystallise with an orthorhombic unit cell in either the P222, Pmm2 or Pmmm space group, although this is likely a pseudo-cell. Alternatively, the true space group may be monoclinic or triclinic, with one or more cell angles close to 90°. Assuming the space group is P222, the structure exhibits distorted cubic coordination of Po (as PoCl8 units) and distorted square planar coordination of Cl (as ClPo4 units).
Preparation
PoCl2 can be obtained either by halogenation of polonium metal or by dehalogenation of polonium tetrachloride, PoCl4. Methods for dehalogenating PoCl4 include thermal decomposition at 300 °C; reduction of cold, slightly moist PoCl4 by sulfur dioxide; and heating it in a stream of carbon monoxide or hydrogen sulfide at 150 °C.
Reactions
PoCl2 dissolves in dilute hydrochloric acid to give a pink solution, which autoxidises to Po(IV). Po(II) is rapidly oxidised by hydrogen peroxide or chlorine water. Addition of potassium hydroxide to the pink solution results in a dark brown precipitate – possibly hydrated PoO or Po(OH)2 – which is rapidly oxidised to Po(IV). With dilute nitric acid, PoCl2 forms a dark red solution followed by a flaky white precipitate of unknown composition.
See also
Polonium tetrachloride
References
Polonium compounds
Chlorides
Metal halides
Chalcohalides | Polonium dichloride | [
"Chemistry"
] | 337 | [
"Chlorides",
"Inorganic compounds",
"Chalcohalides",
"Salts",
"Metal halides"
] |
23,515,462 | https://en.wikipedia.org/wiki/Nevada%E2%80%93Texas%E2%80%93Utah%20retort | The Nevada–Texas–Utah retort process (also known as NTU, Dundas–Howes or Rexco process) was an above-ground shale oil extraction technology to produce shale oil, a type of synthetic crude oil. It heated oil shale in a sealed vessel (retort), causing its decomposition into shale oil, oil shale gas and spent residue. The process was developed in the 1920s and used for shale oil production in the United States and in Australia. The process was simple to operate; however, it fell out of use because of its small capacity and labor intensiveness.
History
The NTU retort was a successor of the 19th-century coal gasification retorts and is considered a predecessor of the gas combustion retort and the Paraho processes. It was invented and patented by Roy C. Dundas and Raymond T. Howes in 1923. The process was improved by David Davis and George Wightman Wallace, a consulting engineer of the NTU Company. In 1925, the NTU Company built a test plant at Sherman Cut near Casmalia, California.
In 1925–1929, the process was tested by the United States Bureau of Mines at the Oil Shale Experiment Station at Anvil Point in Rifle, Colorado. Retorting was carried out from 17 January to 28 June 1927. The plant was dismantled when work was terminated in June 1929. One of the leading technologists involved in this stage was Lewis Cass Karrick, an inventor of the Karrick process. In 1946–1951, two pilot plants with nominal capacities of 40 tons of raw oil shale were operated at the same site. More than 12,000 barrels of shale oil were produced during this period. During World War II, three NTU retorts were operated at Marangaroo, New South Wales, Australia. Almost 500,000 barrels of shale oil were produced by these retorts by retorting local torbanite.
Retort
The NTU retort was a vertical downdraft retort, which used internal combustion to generate heat for oil shale pyrolysis (chemical decomposition). The retort was designed as a steel cylinder lined with fire bricks. It was equipped with an air supply pipe at the top and an exhaust pipe at the bottom. The batch of crushed oil shale was loaded from the top, after which the retort was sealed. To start the pyrolysis process, fuel gas was ignited at the top of the retort and air injection into the retort began. The supply of fuel gas was stopped after the upper quarter of the oil shale batch started to burn, while the air injection continued, bringing the temperature in the burning zone to about .
The heated gas caused pyrolysis in the lower part of the oil shale batch, and the produced shale oil and oil-shale gas escaped from the retort through the exhaust pipe at the bottom. The pyrolysis occurred at a temperature of about . Over time, the combustion zone moved downward, and the char (semi-coke) produced as a solid residue of pyrolysis ignited to serve as additional fuel for combustion. This caused the pyrolysis zone to move toward the lower parts of the retort. After the combustion zone reached the bottom of the retort, air injection was stopped to halt combustion. Once the char had burned, shale oil production ceased and only spent oil shale ash remained in the retort. The bottom of the retort could then be opened to remove the oil shale ash. Operating the NTU retort at its nominal capacity of 40 tons of raw oil shale, the full process cycle took about 40 hours. The shale oil yield varied from 80% to 85% of Fischer assay.
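From the figures above, a back-of-the-envelope throughput estimate is straightforward; note that the Fischer assay value in this sketch is a placeholder, since assay values vary widely between oil shale deposits:

```python
def ntu_batch_throughput(batch_tons=40.0, cycle_hours=40.0,
                         fischer_assay_l_per_ton=100.0,
                         yield_fraction=0.8):
    """Rough throughput of a batch NTU retort.

    batch_tons and cycle_hours follow the figures in the text;
    the Fischer assay value is an assumed placeholder.
    """
    tons_per_hour = batch_tons / cycle_hours  # ~1 t/h for a 40 t batch
    oil_l_per_hour = tons_per_hour * fischer_assay_l_per_ton * yield_fraction
    return tons_per_hour, oil_l_per_hour

print(ntu_batch_throughput())  # (1.0, 80.0) under these assumptions
```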
Advantages and disadvantages
The advantages of the NTU retort process were its simple design, simple operation, and limited need for external fuel. It was suitable for processing a wide variety of oil shales. Its disadvantage was the batch mode of operation, which did not allow continuous retorting and therefore limited capacity while making the process labor intensive. The process also had a relatively low oil yield and required cooling water.
See also
Alberta Taciuk process
Petrosix
Kiviter process
Galoter process
TOSCO II process
Fushun process
Superior multimineral process
Pumpherston retort
References
Oil shale technology
Fuels infrastructure in the United States
Petroleum in the United States | Nevada–Texas–Utah retort | [
"Chemistry"
] | 903 | [
"Petroleum technology",
"Oil shale technology",
"Synthetic fuel technologies"
] |
46,509,266 | https://en.wikipedia.org/wiki/Robert%20Swendsen | Robert Haakon Swendsen is a Professor of Physics at Carnegie Mellon University. He is known in the computational physics community for the Swendsen-Wang algorithm, the Monte Carlo Renormalization Group, and related methods that enable efficient computational studies of equilibrium phenomena near phase transitions. He is the 2014 Recipient of the Aneesur Rahman Prize for Computational Physics from the American Physical Society.
Swendsen completed his undergraduate studies at Yale University and his PhD at University of Pennsylvania.
Swendsen is also known for his pedagogy. He received Carnegie Mellon's Julius Ashkin Teaching Award in 2014. He is further known for his textbook, An Introduction to Statistical Mechanics and Thermodynamics (2nd ed., Oxford University Press, 2020).
References
Year of birth missing (living people)
Living people
21st-century American physicists
Carnegie Mellon University faculty
Yale University alumni
University of Pennsylvania alumni
Computational physicists | Robert Swendsen | [
"Physics"
] | 185 | [
"Computational physicists",
"Computational physics"
] |
46,509,336 | https://en.wikipedia.org/wiki/Non-ballistic%20atmospheric%20entry | Non-ballistic atmospheric entry is a class of atmospheric entry trajectories that follow a non-ballistic trajectory by employing aerodynamic lift in the high upper atmosphere. It includes trajectories such as skip and glide.
Skip is a flight trajectory where the spacecraft goes in and out of the atmosphere. Glide is a flight trajectory where the spacecraft stays in the atmosphere for a sustained period of flight. In most examples, a skip reentry roughly doubles the range of suborbital spaceplanes and reentry vehicles over the purely ballistic trajectory. In others, a series of skips allows the range to be further extended.
Non-ballistic atmospheric entry was first seriously studied as a way to extend the range of ballistic missiles, but was not used operationally in this form as conventional missiles with extended range were introduced. The underlying aerodynamic concepts have been used to produce maneuverable reentry vehicles (MARV), to increase the accuracy of some missiles like the Pershing II. More recently, the concepts have been used to produce hypersonic glide vehicles (HGV) to avoid interception as in the case of the Avangard. The range-extension is used as a way to allow flights at lower altitudes, helping avoid radar detection for a longer time compared to a higher ballistic path.
The concept has also been used to extend the reentry time for vehicles returning to Earth from the Moon, which would otherwise have to shed a large amount of velocity in a short time and thereby suffer very high heating rates. The Apollo Command Module also used what is essentially a skip re-entry, as did the Soviet Zond and Chinese Chang'e 5-T1.
History
Early concepts
The conceptual basis was first noticed by German artillery officers, who found that their Peenemünder Pfeilgeschosse arrow shells traveled much further when fired from higher altitudes. This was not entirely unexpected due to geometry and thinner air, but when these factors were accounted for, they still could not explain the much greater ranges being seen. Investigations at Peenemünde led them to discover that the longer trajectories in the thinner high-altitude air resulted in the shell having an angle of attack that produced aerodynamic lift at supersonic speeds. At the time this was considered highly undesirable because it made the trajectory very difficult to calculate, but its possible application for extending range was not lost on the observers.
In June 1939, Kurt Patt of Klaus Riedel's design office at Peenemünde proposed wings for converting rocket speed and altitude into aerodynamic lift and range. He calculated that this would roughly double the range of the A-4 rockets from to about . Early development was considered under the A-9 name, although little work other than wind tunnel studies at the Zeppelin-Staaken company would be carried out during the next few years. Low-level research continued until 1942, when it was cancelled.
The earliest known proposal for the boost-glide concept for truly long-range use dates to the 1941 Silbervogel, a proposal by Eugen Sänger for a rocket powered bomber able to attack New York City from bases in Germany then fly on for landing somewhere in the Pacific Ocean held by the Empire of Japan. The idea would be to use the vehicle's wings to generate lift and pull up into a new ballistic trajectory, exiting the atmosphere again and giving the vehicle time to cool off between the skips. It was later demonstrated that the heating load during the skips was much higher than initially calculated, and would have melted the spacecraft.
In 1943, the A-9 work was dusted off again, this time under the name A-4b. It has been suggested this was either because it was now based on an otherwise unmodified A-4, or because the A-4 program had "national priority" by this time, and placing the development under the A-4 name guaranteed funding. A-4b used swept wings in order to extend the range of the V2 enough to allow attacks on UK cities in the Midlands or to reach London from areas deeper within Germany. The A-9 was originally similar, but later featured long ogival delta shaped wings instead of the more conventional swept ones. This design was adapted as a crewed upper stage for the A-9/A-10 intercontinental missile, which would glide from a point over the Atlantic with just enough range to bomb New York before the pilot bailed out.
Post-war development
In the immediate post-war era, Soviet rocket engineer Aleksei Isaev found a copy of an updated August 1944 report on the Silbervogel concept. He had the paper translated to Russian, and it eventually came to the attention of Joseph Stalin who was intensely interested in the concept of an antipodal bomber. In 1946, he sent his son Vasily Stalin and scientist Grigori Tokaty, who had also worked on winged rockets before the war, to visit Sänger and Irene Bredt in Paris and attempt to convince them to join a new effort in the Soviet Union. Sänger and Bredt turned down the invitation.
In November 1946, the Soviets formed the NII-1 design bureau under Mstislav Keldysh to develop their own version without Sänger and Bredt. Their early work convinced them to convert from a rocket powered hypersonic skip-glide concept to a ramjet powered supersonic cruise missile, not unlike the Navaho being developed in the United States during the same period. Development continued for a time as the Keldysh bomber, but improvements in conventional ballistic missiles ultimately rendered the project unnecessary.
In the United States, the skip-glide concept was advocated by many of the German scientists who moved there, primarily Walter Dornberger and Krafft Ehricke at Bell Aircraft. In 1952, Bell proposed a bomber concept that was essentially a vertical launch version of Silbervogel known as Bomi. This led to a number of follow-on concepts during the 1950s, including Robo, Hywards, Brass Bell, and ultimately the Boeing X-20 Dyna-Soar. Earlier designs were generally bombers, while later models were aimed at reconnaissance or other roles. Dornberger and Ehricke also collaborated on a 1955 Popular Science article pitching the idea for airliner use.
The introduction of successful intercontinental ballistic missiles (ICBMs) in the offensive role ended any interest in the skip-glide bomber concepts, as did the reconnaissance satellite for the spyplane roles. The X-20 space fighter saw continued interest through the 1960s, but was ultimately the victim of budget cuts; after another review in March 1963, Robert McNamara canceled the program in December, noting that after $400 million had been spent they still had no mission for it to fulfill.
Missile use
Through the 1960s, the skip-glide concept saw interest not as a way to extend range, which was no longer a concern with modern missiles, but as the basis for maneuverable reentry vehicles for ICBMs. The primary goal was to have the RV change its path during reentry so that anti-ballistic missiles (ABMs) would not be able to track their movements rapidly enough for a successful interception. The first known example was the Alpha Draco tests of 1959, followed by the Boost Glide Reentry Vehicle (BGRV) test series, ASSET and PRIME.
This research was eventually put to use in the Pershing II's MARV reentry vehicle. In this case, there is no extended gliding phase; the warhead uses lift only for short periods to adjust its trajectory. This is used late in the reentry process, combining data from a Singer Kearfott inertial navigation system with a Goodyear Aerospace active radar. Similar concepts have been developed for most nuclear-armed nations' theatre ballistic missiles.
The Soviet Union had also invested some effort in the development of MARV to avoid US ABMs, but the closure of the US defenses in the 1970s meant there was no reason to continue this program. Things changed in the 2000s with the introduction of the US's Ground-Based Midcourse Defense, which led Russia to reanimate this work. The vehicle, referred to as Object 4202 in the Soviet era, was reported in October 2016 to have had a successful test. The system was revealed publicly on 1 March 2018 as the hypersonic glide vehicle (HGV) Avangard, which officially entered active service as an ICBM payload on 27 December 2019. Vladimir Putin announced that Avangard had entered serial production, claiming that its maneuverability makes it invulnerable to all current missile defences.
China has also developed a boost-glide warhead, the DF-ZF (known to US intelligence as "WU-14"). In contrast to the US and Russian MARV designs, the DF-ZF's primary goal is to use boost-glide to extend range while flying at lower altitudes than would be used to reach the same target using a purely ballistic path. This is intended to keep it out of the sight of the US Navy's Aegis Combat System radars as long as possible, and thereby decrease the time that system has to respond to an attack. DF-ZF was officially unveiled on 1 October 2019. Similar efforts by Russia led to the Kholod and GLL-8 Igla hypersonic test projects, and more recently the Yu-71 hypersonic glide vehicle which can be carried by RS-28 Sarmat.
Boost-glide became the topic of some interest as a possible solution to the US Prompt Global Strike (PGS) requirement, which seeks a weapon that can hit a target anywhere on the Earth within one hour of launch from the United States. PGS does not define the mode of operation, and current studies include Advanced Hypersonic Weapon boost-glide warhead, Falcon HTV-2 hypersonic aircraft, and submarine-launched missiles. Lockheed Martin is developing this concept as the hypersonic AGM-183A ARRW.
Reentry vehicle use
The technique was used by the Soviet Zond series of circumlunar spacecraft, which used one skip before landing. In this case a true skip was required in order to allow the spacecraft to reach the higher-latitude landing areas. Zond 6, Zond 7 and Zond 8 made successful skip entries, although Zond 5 did not. The Chang'e 5-T1, which flew mission profiles similar to Zond, also used this technique.
The Apollo Command Module used a skip-like concept to lower the heating loads on the vehicle by extending the re-entry time, but the spacecraft did not leave the atmosphere again and there has been considerable debate whether this makes it a true skip profile. NASA referred to it simply as "lifting entry". A true multi-skip profile was considered as part of the Apollo Skip Guidance concept, but this was not used on any crewed flights. The concept continues to appear on more modern vehicles like the Orion spacecraft, which made the first American skip entry in the Artemis 1 mission, using onboard computers.
Flight mechanics
Using simplified equations of motion and assuming that during the atmospheric flight both drag and lift forces will be much larger than the gravity force acting on the vehicle, the following analytical relation for a skip reentry flight can be derived:

γ_F = −γ_E,

where γ is the flightpath angle relative to the local horizontal, the subscript E indicates the conditions at the start of the entry and the subscript F indicates the conditions at the end of the entry flight.

The velocity before and after the entry can be derived to relate as follows:

V_F / V_E = exp(2 γ_E / (L/D)),

where L/D is the lift-to-drag ratio of the vehicle.
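A short numerical illustration of the velocity relation, under the same constant lift-to-drag skip approximation given above:

```python
import math

def skip_exit_velocity_ratio(gamma_entry_rad, lift_to_drag):
    """V_F / V_E for one skip, from V_F/V_E = exp(2*gamma_E / (L/D)).

    gamma_entry_rad is negative for a descending entry, so the
    ratio is below 1: some speed is lost to drag in each skip.
    """
    return math.exp(2.0 * gamma_entry_rad / lift_to_drag)

# A -5 degree entry with L/D = 2 loses roughly 8% of its speed.
print(skip_exit_velocity_ratio(math.radians(-5.0), 2.0))  # ~0.916
```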
See also
High Speed Strike Weapon (HSSW) (USA)
Prompt Global Strike (PGS) (USA)
Alpha Draco (USA)
ArcLight (missile) (USA)
DARPA Falcon Project (USA)
Boeing X-51 Waverider (USA)
North American X-15 (USA)
Tupolev Tu-130 (Russia)
BrahMos-II (India / Russia)
Hypersonic Technology Demonstrator Vehicle (India)
Multiple independently targetable reentry vehicle
Planar reentry equations
Notes
References
Citations
Bibliography
Cancelled military aircraft projects of the United States
Atmospheric entry
Apollo program
Spaceplanes
Space weapons
Hypersonic aircraft | Non-ballistic atmospheric entry | [
"Engineering"
] | 2,510 | [
"Atmospheric entry",
"Aerospace engineering"
] |
22,061,787 | https://en.wikipedia.org/wiki/Magnus%20expansion | In mathematics and physics, the Magnus expansion, named after Wilhelm Magnus (1907–1990), provides an exponential representation of the product integral solution of a first-order homogeneous linear differential equation for a linear operator. In particular, it furnishes the fundamental matrix of a system of linear ordinary differential equations of order n with varying coefficients. The exponent is aggregated as an infinite series, whose terms involve multiple integrals and nested commutators.
The deterministic case
Magnus approach and its interpretation
Given the n × n coefficient matrix A(t), one wishes to solve the initial-value problem associated with the linear ordinary differential equation

Y'(t) = A(t) Y(t),    Y(t_0) = Y_0,

for the unknown n-dimensional vector function Y(t).

When n = 1, the solution is given as a product integral

Y(t) = exp(∫_{t_0}^t A(s) ds) Y_0.

This is still valid for n > 1 if the matrix A(t) satisfies A(t_1) A(t_2) = A(t_2) A(t_1) for any pair of values of t, t_1 and t_2. In particular, this is the case if the matrix A is independent of t. In the general case, however, the expression above is no longer the solution of the problem.
The approach introduced by Magnus to solve the matrix initial-value problem is to express the solution by means of the exponential of a certain n × n matrix function Ω(t, t_0):

Y(t) = exp(Ω(t, t_0)) Y_0,

which is subsequently constructed as a series expansion

Ω(t) = Σ_{k=1}^∞ Ω_k(t),

where, for simplicity, it is customary to write Ω(t) for Ω(t, t_0) and to take t_0 = 0.
Magnus appreciated that, since (d/dt e^Ω) e^{−Ω} = A(t), using a Poincaré−Hausdorff matrix identity, he could relate the time derivative of Ω to the generating function of the Bernoulli numbers and the adjoint endomorphism of Ω,

Ω' = (ad_Ω / (exp(ad_Ω) − 1)) A,

to solve for Ω recursively in terms of A "in a continuous analog of the BCH expansion", as outlined in a subsequent section.
The equation above constitutes the Magnus expansion, or Magnus series, for the solution of the matrix linear initial-value problem. The first terms of this series read

Ω_1(t) = ∫_0^t A(t_1) dt_1,

Ω_2(t) = (1/2) ∫_0^t dt_1 ∫_0^{t_1} dt_2 [A(t_1), A(t_2)],

Ω_3(t) = (1/6) ∫_0^t dt_1 ∫_0^{t_1} dt_2 ∫_0^{t_2} dt_3 ([A(t_1), [A(t_2), A(t_3)]] + [A(t_3), [A(t_2), A(t_1)]]),

with higher-order terms built from increasingly nested commutators, where [A, B] ≡ AB − BA is the matrix commutator of A and B.
These equations may be interpreted as follows: Ω_1(t) coincides exactly with the exponent in the scalar (n = 1) case, but this equation cannot give the whole solution. If one insists on having an exponential representation (Lie group), the exponent needs to be corrected. The rest of the Magnus series provides that correction systematically: Ω or parts of it are in the Lie algebra of the Lie group of the solution.
In applications, one can rarely sum exactly the Magnus series, and one has to truncate it to get approximate solutions. The main advantage of the Magnus proposal is that the truncated series very often shares important qualitative properties with the exact solution, at variance with other conventional perturbation theories. For instance, in classical mechanics the symplectic character of the time evolution is preserved at every order of approximation. Similarly, the unitary character of the time evolution operator in quantum mechanics is also preserved (in contrast, e.g., to the Dyson series solving the same problem).
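As a minimal numerical sketch (not part of the original exposition), the truncated series Ω_1 + Ω_2 can be approximated by quadrature for a toy 2 × 2 system and compared against a direct integration of the ODE; the coefficient matrix below is an arbitrary example chosen so that A(t_1) and A(t_2) do not commute:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Toy coefficient matrix whose values at different times do not commute.
def A(t):
    return np.array([[0.0, 1.0], [-1.0 - 0.5 * t, 0.0]])

t_end, n = 0.5, 400
ts = np.linspace(0.0, t_end, n)
dt = ts[1] - ts[0]
As = np.array([A(t) for t in ts])

# Omega_1 = int_0^t A(t1) dt1 and
# Omega_2 = 1/2 int_0^t dt1 [A(t1), int_0^{t1} A(t2) dt2],
# both approximated by trapezoidal quadrature.
omega1 = np.zeros((2, 2))
omega2 = np.zeros((2, 2))
for i in range(1, n):
    comm = As[i] @ omega1 - omega1 @ As[i]    # [A(t1), inner integral]
    omega2 += 0.5 * comm * dt
    omega1 += 0.5 * (As[i - 1] + As[i]) * dt  # running inner integral

Y_magnus = expm(omega1 + omega2)  # approximates Y(t_end) with Y(0) = I

# Reference: integrate Y' = A(t) Y directly on the flattened matrix.
sol = solve_ivp(lambda t, y: (A(t) @ y.reshape(2, 2)).ravel(),
                (0.0, t_end), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Y_ref = sol.y[:, -1].reshape(2, 2)
print(np.max(np.abs(Y_magnus - Y_ref)))  # small truncation error
```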
Convergence of the expansion
From a mathematical point of view, the convergence problem is the following: given a certain matrix A(t), when can the exponent Ω(t) be obtained as the sum of the Magnus series?

A sufficient condition for this series to converge for t ∈ [0, T) is

∫_0^T ||A(s)||_2 ds < π,

where ||·||_2 denotes a matrix norm. This result is generic in the sense that one may construct specific matrices for which the series diverges as soon as this bound is exceeded.
Magnus generator
A recursive procedure to generate all the terms in the Magnus expansion utilizes the matrices defined recursively through
which then furnish
Here ad_Ω^k is a shorthand for an iterated commutator (see adjoint endomorphism):

ad_Ω^0 A = A,    ad_Ω^{k+1} A = [Ω, ad_Ω^k A],

while B_j are the Bernoulli numbers with B_1 = −1/2.
Finally, when this recursion is worked out explicitly, it is possible to express Ω_n(t) as a linear combination of n-fold integrals of n − 1 nested commutators involving n matrices A; the resulting expressions become increasingly intricate with n.
The stochastic case
Extension to stochastic ordinary differential equations
For the extension to the stochastic case let be a -dimensional Brownian motion, , on the probability space
with finite time horizon and natural filtration. Now, consider the linear matrix-valued stochastic Itô differential equation (with Einstein's summation convention over the index )
where are progressively measurable -valued bounded stochastic processes and is the identity matrix. Following the same approach as in the deterministic case, with alterations due to the stochastic setting, the corresponding matrix logarithm will turn out to be an Itô process, whose first two expansion orders are given by and , where
with Einstein's summation convention over and
Convergence of the expansion
In the stochastic setting the convergence will now be subject to a stopping time and a first convergence result is given by:
Under the previous assumption on the coefficients there exists a strong solution , as well as a strictly positive
stopping time such that:
has a real logarithm up to time , i.e.
the following representation holds -almost surely:
where is the -th term in the stochastic Magnus expansion as defined below in the subsection Magnus expansion formula;
there exists a positive constant , only dependent on , with , such that
Magnus expansion formula
The general expansion formula for the stochastic Magnus expansion is given by:
where the general term is an Itô-process of the form:
The terms are defined recursively as
with
and with the operators being defined as
Applications
Since the 1960s, the Magnus expansion has been successfully applied as a perturbative tool in numerous areas of physics and chemistry, from atomic and molecular physics to nuclear magnetic resonance and quantum electrodynamics. It has also been used since 1998 as a tool to construct practical algorithms for the numerical integration of matrix linear differential equations. As they inherit from the Magnus expansion the
preservation of qualitative traits of the problem, the corresponding schemes are prototypical examples of geometric numerical integrators.
See also
Baker–Campbell–Hausdorff formula
Derivative of the exponential map
Notes
References
External links
UCSD
Ordinary differential equations
Stochastic differential equations
Lie algebras
Mathematical physics | Magnus expansion | [
"Physics",
"Mathematics"
] | 1,204 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
22,063,516 | https://en.wikipedia.org/wiki/Supramolecular%20chirality | In chemistry, the term supramolecular chirality is used to describe supramolecular assemblies that are non-superposable on their mirror images.
Chirality in supramolecular chemistry implies the non-symmetric arrangement of molecular components in a non-covalent assembly. Chirality may arise in a supramolecular system if one of its components is chiral, or if achiral components arrange in a non-symmetrical way to produce a supermolecule that is chiral.
References
Chirality
Stereochemistry | Supramolecular chirality | [
"Physics",
"Chemistry",
"Biology"
] | 116 | [
"Pharmacology",
"Origin of life",
"Stereochemistry",
"Chirality",
"Space",
"Stereochemistry stubs",
"nan",
"Asymmetry",
"Biochemistry",
"Spacetime",
"Symmetry",
"Biological hypotheses"
] |
22,064,427 | https://en.wikipedia.org/wiki/RoGFP | The reduction-oxidation sensitive green fluorescent protein (roGFP) is a green fluorescent protein engineered to be sensitive to changes in the local redox environment. roGFPs are used as redox-sensitive biosensors.
In 2004, researchers in S. James Remington's lab at the University of Oregon constructed the first roGFPs by introducing two cysteines into the beta barrel structure of GFP. The resulting engineered protein could exist in two different oxidation states (reduced dithiol or oxidized disulfide), each with different fluorescent properties.
Originally, members of the Remington lab published six versions of roGFP, termed roGFP1-6 (see more structural details below). Different groups of researchers introduced cysteines at different locations in the GFP molecule, generally finding that cysteines introduced at the amino acid positions 147 and 204 produced the most robust results.
roGFPs are often genetically encoded into cells for in-vivo imaging of redox potential. In cells, roGFPs can generally be modified by redox enzymes such as glutaredoxin or thioredoxin. roGFP2 preferentially interacts with glutaredoxins and therefore reports the cellular glutathione redox potential.
Various attempts have been made to make roGFPs that are more amenable to live-cell imaging. Most notably, substituting three positively-charged amino acids adjacent to the disulfide in roGFP1 drastically improves the response rate of roGFPs to physiologically relevant changes in redox potential. The resulting roGFP variants, named roGFP1-R1 through roGFP1-R14, are much more suitable for live-cell imaging. The roGFP1-R12 variant has been used to monitor redox potential in bacteria and yeast, but also for studies of spatially-organized redox potential in live, multicellular organisms such as the model nematode C. elegans. In addition, roGFPs are used to investigate the topology of ER proteins, or to analyze the ROS production capacity of chemicals.
One notable improvement to roGFPs occurred in 2008, when the specificity of roGFP2 for glutathione was further increased by linking it to human glutaredoxin 1 (Grx1). By expressing Grx1-roGFP fusion sensors in the organism of interest and/or targeting the protein to a cellular compartment, it is possible to measure the glutathione redox potential in a specific cellular compartment in real-time, which provides major advantages over invasive static methods such as HPLC.
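A sketch of how such a ratiometric readout might be converted to a redox potential follows, assuming a commonly used calibration against fully reduced and fully oxidized controls; the midpoint potential and all of the calibration inputs here are assumptions for illustration, not fixed constants of the method:

```python
import math

def rogfp_redox_potential(R, R_red, R_ox, i_ratio,
                          E0_mV=-280.0, T=298.15):
    """Glutathione redox potential [mV] from a roGFP2 405/488 nm
    excitation ratio R, via the degree of oxidation (OxD) and the
    Nernst equation for a two-electron couple.

    R_red, R_ox : ratios of the fully reduced / fully oxidized sensor
    i_ratio     : 488 nm fluorescence of the oxidized sensor divided
                  by that of the reduced sensor (calibration input)
    E0_mV       : assumed midpoint potential of the sensor
    """
    oxd = (R - R_red) / (i_ratio * (R_ox - R) + (R - R_red))
    rt_2f_mV = 8.314 * T / (2 * 96485.0) * 1000.0  # RT/2F in mV (~12.8)
    return E0_mV - rt_2f_mV * math.log((1.0 - oxd) / oxd)

# Example with made-up calibration values:
print(rogfp_redox_potential(R=0.5, R_red=0.2, R_ox=1.0, i_ratio=0.5))
# ~ -278 mV with these illustrative inputs
```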
Given the variety of roGFPs, some effort has been made to benchmark their performance. For example, members of Javier Apfeld's group published a method in 2020 describing the 'suitable ranges' of different roGFPs, determined by how sensitive each sensor is to experimental noise in different redox conditions.
Species of roGFP
See Kostyulk 2020 for a more comprehensive review of different redox sensors.
See also
Synapto-pHluorin
References
Biochemistry detection methods
Biosensors
Cell imaging
Fluorescent proteins | RoGFP | [
"Chemistry",
"Biology"
] | 660 | [
"Biochemistry methods",
"Fluorescent proteins",
"Biochemistry detection methods",
"Chemical tests",
"Biosensors",
"Microscopy",
"Bioluminescence",
"Cell imaging"
] |