| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
58,734,840 | https://en.wikipedia.org/wiki/Trichoderma%20hamatum | Trichoderma hamatum is a species of fungus in the family Hypocreaceae. It has been used as a biological control agent against certain plant diseases, including Sclerotinia lettuce drop caused by Sclerotinia minor.
References
External links
Trichoderma
Biopesticides
Biotechnology
Biological pest control
Fungi described in 1906
Fungus species
Taxa named by Hermann Friedrich Bonorden | Trichoderma hamatum | [
"Biology"
] | 80 | [
"nan",
"Fungi",
"Fungus species",
"Biotechnology"
] |
58,734,908 | https://en.wikipedia.org/wiki/Trichoderma%20stromaticum | Trichoderma stromaticum is a species of fungus in the family Hypocreaceae. It is a parasite of the cacao witches' broom pathogen and has been used in its biological control.
References
External links
Trichoderma
Biopesticides
Biotechnology
Biological pest control
Fungi described in 2000
Fungus species | Trichoderma stromaticum | [
"Biology"
] | 64 | [
"Fungi",
"Fungus species",
"nan",
"Biotechnology"
] |
58,740,760 | https://en.wikipedia.org/wiki/Kurth%20Kiln | Kurth Kiln was established by the Forests Commission Victoria in 1941 on a site about 7 km north of Gembrook on the Tomahawk Creek.
Dr Ernest Edgar Kurth from the University of Tasmania was commissioned to design the kiln with the aim of mass-producing charcoal as an alternative fuel in response to war-time petrol rationing.
Gembrook was selected as the ideal site for the Kurth Kiln because it fully met three essential criteria required for successful operation:
Water - the kiln required 2000 gallons (9,100 litres) of water per day in order for its cooling systems to be effective.
Wood - the kiln burnt about 28 cords (~100 cubic metres) of wood per week.
Gradient - sloping land enabled easier top loading of wood into the kiln.
Dr Kurth was paid £5 for the use of his patented design (No 2563/41) and the total cost of establishing the kiln was 1,799 pounds 17 shillings and 2 pence.
The kiln commenced operation in March 1942 but transport difficulties combined with an oversupply of charcoal from private operators meant the kiln was used only intermittently during 1943 and was shut down soon after. Over the period of its operation, Kurth Kiln produced only 471 tons of charcoal which represented a tiny fraction of Victoria’s total production.
Petrol rationing
Australia’s declaration of war on 3 September 1939 immediately raised concerns about the security of petrol supplies. At the time Australia was totally reliant on imported fuel and had a limited storage capacity. At the start of the war, the country had sufficient petrol for only three months' supply, and by May 1940 the Commonwealth Oil Board estimated that only 67 per cent of the total capacity of about 140 million gallons was on hand.
The Federal Government considered increasing the price of fuel to dampen demand. Another plan was that petrol should be merchandised in two colours, blue for commercial vehicles, and red for private cars, with red petrol being substantially more expensive than blue. The motoring industry and newspapers resisted the changes.
When France collapsed, the Minister for Supply stressed to Cabinet that continuity of the already erratic deliveries was under threat, and on 6 June 1940 Cabinet finally made the decision that rationing should be introduced to reduce consumption by 50 per cent. Political expediency came into play and petrol rationing was introduced in September 1940 and restrictions were tightened as the War progressed.
On 17 June 1941 the Prime Minister, Robert Menzies, announced further restrictions limiting motorists to two gallons per month, enough for about 1,000 miles per year, so many simply put their cars up on blocks for the duration and switched to public transport or walked. The scheme finally devised for petrol rationing coupons was complicated, and the paperwork was profuse. Drivers had to apply for a petrol licence, from which they were allocated ration tickets based on an assessment of their needs.
Forests Commission firefighting vehicles were exempt from the petrol restrictions. District Foresters were authorised to issue petrol coupons for timber industry trucks, which, in the absence of private cars and utes, often served as family transport too. Country school buses were fitted with gas converters, and on declared days of Acute Fire Danger these gas-powered buses were banned from operating. Some of Melbourne's buses also ran on charcoal.
Petrol rationing was not strictly enforced until 1942 but remained in place until February 1950.
Charcoal as a replacement fuel
During the first half of the twentieth century charcoal production in Victoria was small and employed relatively few people. The charcoal was supplied to blacksmiths, coal depots, gas works and some powerhouses. The main areas of State forest producing charcoal were in a broad arc across Central Victoria at places like Beaufort, Trentham, Lyonville, Macedon, Broadford, the Dandenongs and Gembrook. Charcoal was also produced in East Gippsland, mainly at Nowa Nowa. The best trees were durable species like red ironbark (Eucalyptus sideroxylon), red box (E. polyanthemos), grey box (E. microcarpa), yellow box (E. melliodora), red gum (E. camaldulensis) and, for lighter grades of charcoal, yellow stringybark (E. muelleriana).
The charcoal was produced by slow and controlled burning of wood in earthen, brick or metal kilns. The simplest method was the "beehive kiln" where wood was stacked vertically into a conical heap to allow air and smoke movement and then covered with a layer of sticks and twigs. The stack was then covered further with a thick layer of earth and ash. Once the stack was alight the small chimney opening at the top was sealed. It then required constant tending to ensure no air entered the stack and took about three days to reduce the wood to charcoal. Large beehive kilns each contained about 50 cords of wood and were very labour intensive. They also produced charcoal of uneven consistency with a high ash content and some inevitable soil contamination. Other charcoal producers opted for lined pits in the ground, steel chambers or recycled boilers.
At the urging of the Federal Government, charcoal quickly emerged as a substitute fuel in response to national petrol rationing. A special vehicle-mounted unit which converted charcoal into flammable gases (principally carbon monoxide and hydrogen) could be used to power internal combustion engines.
Charcoal was relatively simple to produce but had a well-deserved reputation for being inconvenient for short trips, inefficient, underpowered and dirty, belching black smoke, catching fire and occasionally exploding. The cost of installing a heavy and cumbersome producer gas kit was about £100, the equivalent of 16 times the average weekly wage, and a bag of charcoal lasted between 30 and 60 miles at a cost of between 6 and 10 shillings. Even so, charcoal had a cost advantage over petrol: tests in 1938–39 indicated savings in the order of 80% for truck operations running on producer gas compared to the cost of petrol. At the time, a new car, if you could get one, cost around £250 for a small Austin and £525 for a large Buick.
Over the war-years, some 56,000 gas producer units of varying design were fitted to private and commercial vehicles in Australia. This low conversion rate to producer gas technology represented less than 6.5% of all vehicles on the road by 1944.
The overall consumption of charcoal in Victoria rose thirtyfold from about 110 tons per month before the War to over 3,300 tons by mid-1942. This massive increase required nearly 280,000 tonnes of dry wood annually to feed the hundreds of kilns set up across State forests, as well as on private land, at a time when labour was critically short.
Forests Commission Victoria
The task of ensuring adequate supplies of charcoal fell to the Forests Commission Victoria (FCV). It subsequently formed the State Charcoal Branch to organise the increased production of charcoal, to build up reserves to meet emergencies and to regulate the cost to consumers. The assistance of an expert Advisory Panel, representing charcoal producers, manufacturers and distributors of vehicle gas equipment, the Department of Supply and Development, and the Victorian Automobile Chamber of Commerce, was enlisted under the Chairman of the Forests Commission, Alfred Vernon Galbraith. Preliminary arrangements were made for bag supplies, for railway sidings in Melbourne, and for processing of charcoal bought by the Branch in excess of the requirements of private grading firms. In its first year, 17,421 tons of charcoal were produced compared with 1,650 tons before the War. Production peaked at 38,922 tons in 1942-43.
The Commission had the added responsibility of providing emergency firewood to Melbourne for heating and cooking purposes as a result of reductions in the supply of coal, electricity and gas. The Emergency Firewood Project continued long after the war ended and over the period from 1941 to 1954, nearly 2 million tons of firewood was produced.
An estimated 221 kilns and 12 pits were producing charcoal by the middle of 1942. Some of the labour was provided by Italian wartime internees. There were also over 600 commercial kilns operating mostly on private property. At least 50 to 60 private charcoal retorts were set up in the Barmah forest alone. The majority of the kilns were metal retorts because charcoal from earthen beehive kilns or unlined pits proved unsuitable for motor vehicles, causing gumming of engine valves and the controls in the gas lines due to the condensation of tar.
The Chairman of the Forests Commission, Alfred Vernon Galbraith, became aware of the work of Dr Ernest Kurth, Professor of Chemistry at the University of Tasmania, who had been experimenting with the pyrolysis of wood and kiln designs since 1940. His work led to a quarter-size prototype kiln at Dover, Tasmania in March 1941. A second operational kiln was built at a sawmill near Launceston in January 1942.
In July 1941, Professor Kurth provided Galbraith with details of his kiln design and nine pages of handwritten notes on its operation, which marked the beginning of the project at Gembrook.
The kiln was unusual in that it could operate continuously, with top loading of wood billets and bottom recovery of charcoal. A water-cooled grate at the bottom of the stack caused the brittle charred wood to crumble under its own weight into manageable pieces, while at the same time maintaining the charring temperature at the critical point to produce charcoal of consistent quality. Not only was this process said to be 50% faster than any other method then in use, but it was also 10–15% more efficient. Cooled charcoal was raked out of the bottom as more wood was added at the top.
Tests in Tasmania indicated that the prototype could produce about 1.4 tons of charcoal per day which compared favourably with a single ton every three days from a standard steel kiln.
Seven tons of wood produced one ton of charcoal with an output of 20 tons per week if the kiln was operated continuously with three shifts per day.
The cost of building the kiln was estimated to be about half that required for five or six portable steel kilns needed to produce the same quantity. Good quality charcoal sold for 4s 6d for a 50 lb bag.
The claimed advantages could not be ignored by an organisation under pressure to secure Victoria's war-time charcoal supplies, so on 16 July 1941 Galbraith enclosed £5 in a letter to Professor Kurth for the use of his patented design.
Gembrook site
The selection of a suitable site on State forest for Professor Kurth's kiln depended on adequate supplies of water and wood, and a suitable gradient.
Firstly, the kiln needed approximately 2,000 gallons (9,100 litres) of water per day for its cooling system; secondly, it needed about 28 cords (100 cubic metres) of wood per week as feedstock; and thirdly, a fall of approximately 18 feet (6 metres) across the site was required to facilitate top loading. The site chosen at Tomahawk Creek, about 7 km north of Gembrook, met all these requirements.
Water was supplied by an old mining race from the Tomahawk Creek which also powered a water wheel operating a vibrating screen to grade the charcoal before bagging.
Much of the Gembrook landscape had been cleared for agriculture, supported a thriving timber industry, and had been mined for gold and gemstones since 1859, so most of the older mature forest had been disturbed in some way. More importantly, a large area of older and damaged messmate trees (Eucalyptus obliqua) in the nearby State forest had been deliberately ringbarked as a silvicultural treatment to encourage new regeneration growth by Forests Commission unemployment relief workers some 10 years earlier, during the 1930s depression. Approximately 145 men had worked over nearly 4,000 acres during 1930–31. This left a large amount of standing dry wood suitable for the kiln's operation within a one-mile radius.
Access to roads and the railway station at Gembrook for transport was an important consideration. There was also a critical shortage of labour to operate the kiln during the war years, exacerbated by a major timber salvage program underway in the Central Highlands after the deadly 1939 Black Friday bushfires, although the site at Gembrook itself had not been severely burnt in either 1926 or 1939.
Earthworks commenced in late August 1941. A construction contract for the kiln was awarded to builders Stanley and Nance of Middle Park on 17 October, and the detailed design work was done by the Forests Commission’s own architect, Mr S. J. B. Hart. Building commenced in November and by 18 December 1941 was nearly complete. The architect reported to the Secretary of the Forests Commission on 11 February 1942 that the kiln was ready for operations after a small trial run. The project cost £1799 17s 2d, which included the establishment of the site, construction of the kiln, erection of buildings, purchase of equipment, connection of a telephone and supply of water. It was a sizable investment.
The kiln was a rectangular structure, 4 m x 3.5 m x 8 m high, on a concrete foundation, with red brick walls reinforced with iron strapping. The kiln held 25 tons of 3-foot-long billets per load, which were carried up a small inclined tramway and loaded in at the top.
The initial firing was on 18 March 1942 and, after some initial teething problems with the steel doors, the kiln was in full production by mid-1942. Combustion took two days to complete and the kiln produced 243 tons of charcoal in its first financial year, 1941–42. In the latter half of the year, however, this fell to only 29 tons.
Production by July 1942 had been so successful that problems of storage soon became apparent. Commissioner Finton George Gerraty reported that about 70 tons of graded charcoal was stockpiled and the storage sheds were taxed to the limit. Production was suspended to solve this short-term problem. At the same time, however, the kiln suffered further structural problems. An inspection in September 1942 indicated that major repairs were required to some loosening brickwork near the inspection doors, and this, together with the emerging oversupply of charcoal, raised serious questions about Kurth Kiln's future.
Continuing transport and distribution difficulties from Gembrook combined with an oversupply of charcoal from private operators meant the kiln was used only intermittently during 1943 and was shut down soon after.
Petrol rationing also eased at the end of the War, reducing the demand for charcoal, but did not end until February 1950.
Over the total period of its operation, Kurth Kiln produced 471 tons of charcoal which represented only a tiny fraction of Victoria’s total production. But Kurth Kiln was a victim of circumstance and not the "white elephant" that these low production figures might suggest.
Forest workers camp
After the cessation of war hostilities, the Kallista District of the Forests Commission advised in February 1946, that they could absorb 40 returned servicemen for silvicultural, afforestation, fire protection, and utilisation works. This was part of a five-year statewide plan drawn up and approved by the Allied Works Council with funding allocated for the first two-year period totalling £3,842,175. Similar camps were established on State forest to house and employ migrants and refugees from war-torn Europe.
To provide accommodation, eighteen masonite huts were purchased from the Army and erected at the site. By July 1946 the Commission decided to make Kurth Kiln its main base camp for the region, housing 80 to 100 men. The forest camp operated continuously until 8 January 1963, when three huts were burnt by bushfires. The remaining buildings began to slowly deteriorate, and the construction of an all-weather road network meant workmen could be housed at nearby townships instead. The site declined through to the 1970s. Three huts were demolished and the material used to modify a remaining one as a caretaker’s residence in about 1984 for Ron Thornton, who lived on-site for another 16 years. The sheds were then mainly used for storage of eucalyptus seed needed for regeneration works and for Forests Commission equipment such as pumps and hoses. The small dam next to the kiln was regularly used as a pump school to prepare staff for the summer fire season.
Kurth Kiln began its transformation into a picnic ground in about 1978, led by District Forester Frank May with works supervised by two local overseers, Tom Steege and Bob Ferris.
The site is now recognised as being of historical and scientific significance, with Kurth Kiln included as an indicative place on the Register of the National Estate (004495) as well as being listed in the Heritage Inventory of archaeological sites maintained by Heritage Victoria (H8022-0013). Remnants of a historic steel charcoal kiln can also still be found at nearby Tonimbuk, and another at Kinglake West.
Parks Victoria is now responsible for Kurth Kiln and has undertaken conservation works on several of the most urgent building repairs. A small and active volunteer friends group formed in June 1999 which helps to protect and interpret the site.
Gallery
References
External links
McHugh, Peter. (2020). Forests and Bushfire History of Victoria : A compilation of short stories, Victoria. https://nla.gov.au/nla.obj-2899074696/view
Forests Commission Retired Personnel Association (FCRPA) - Peter McHugh - https://www.victoriasforestryheritage.org.au/
https://www.friendsofkurthkiln.org.au/
Forestry in Australia
Government agencies of Victoria (state)
Charcoal
Kilns | Kurth Kiln | [
"Chemistry",
"Engineering"
] | 3,592 | [
"Chemical equipment",
"Kilns"
] |
58,742,475 | https://en.wikipedia.org/wiki/MAGESTIC | Multiplexed Accurate Genome Editing with Short, Trackable, Integrated Cellular barcodes (MAGESTIC) is a platform that builds on the CRISPR/Cas technique. It further improves CRISPR/Cas by making the gene-editing process more precise. It also increases cell survival during the editing process up to sevenfold.
This technology was invented at the Stanford Genome Technology Center in collaboration with the Joint Initiative for Metrology in Biology (JIMB) which is a coalition of Stanford University and the National Institute of Standards and Technology.
Overview
Gene editing is used for a variety of tasks including the modification of crops, the modification of bacteria, and the correction of disease-causing genetic mutations in patients. When only a single edited cell line is required, CRISPR/Cas combined with the endogenous DNA repair machinery is sufficient to obtain an edited cell line. However, when trying to introduce many edits in multiplex, a higher efficiency of homology directed repair is required. The MAGESTIC technology has multiple components. One component, the LexA-Fkh1 fusion protein, is involved in the process of donor recruitment, which increases the efficiency of homology directed repair. The second component is a library of CRISPR guide RNAs paired with donor DNA, which encodes the specified edits to be integrated through homology directed repair. This in turn is linked to a DNA barcode that allows specific variants to be tracked in pools, similar to how genome-wide CRISPR-Cas9 knockout screens work, except that MAGESTIC is more versatile: it allows not only loss-of-function edits, but also codon changes, single-nucleotide polymorphisms, indels, and other types of genetic changes to be introduced and tracked. By improving DNA repair efficiency, using array-synthesized guide–donor oligos for plasmid-based high-throughput editing, and integrating a genomic barcode to prevent plasmid barcode loss, MAGESTIC leads to more uniform pools with stable, genome-integrated, single-copy barcodes and enables robust phenotyping.
Donor Recruitment
Because editing multiple sites in pools can be impacted by a number of factors, including ineffective CRISPR guide RNAs, DNA synthesis errors, competition with non-homologous end joining, and other challenges that occur when building multiplex libraries, MAGESTIC screens required improved DNA repair. This is where the donor recruitment aspect of MAGESTIC comes in. MAGESTIC achieves greater editing efficiency by localizing donor DNA to the site of DNA breaks introduced by a CRISPR cut.
The CRISPR machinery cuts at desired locations in the genome, and MAGESTIC then directs the donor DNA to the site of the cut, prompting cells to introduce the designed edits at the DNA cut sites. This approach, called donor recruitment, relies on a fusion protein that contains one domain recruited to DNA breaks and another domain that binds to the donor DNA. This allows for the production of high-quality precision edit pools in yeast, where each cell contains a single edit and a DNA barcode. The donor recruitment aspect of the technology also holds the potential to improve editing efficiency in additional cell types, such as mammalian cells. This may one day prove beneficial to gene therapies or other therapeutic editing.
References
Gene therapy
Medical genetics
Gene delivery
Applied genetics
Biological engineering
Biotechnology
2018 in science | MAGESTIC | [
"Chemistry",
"Engineering",
"Biology"
] | 663 | [
"Genetics techniques",
"Biological engineering",
"Genetic engineering",
"Biotechnology",
"Gene therapy",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Gene delivery"
] |
38,179,442 | https://en.wikipedia.org/wiki/Schlessinger%27s%20theorem | In algebra, Schlessinger's theorem is a theorem in deformation theory introduced by Michael Schlessinger that gives conditions for a functor of artinian local rings to be pro-representable, refining an earlier theorem of Grothendieck.
Definitions
Λ is a complete Noetherian local ring with residue field k, and C is the category of local Artinian Λ-algebras (meaning in particular that as modules over Λ they are finitely generated and Artinian) with residue field k.
A small extension in C is a morphism Y→Z in C that is surjective with kernel a 1-dimensional vector space over k.
A functor is called representable if it is of the form $h_X$, where $h_X(Y) = \operatorname{Hom}(X,Y)$ for some $X$, and is called pro-representable if it is of the form $Y \mapsto \varinjlim_i \operatorname{Hom}(X_i, Y)$ for a filtered direct limit over $i$ in some filtered ordered set.
A morphism of functors $F \to G$ from C to sets is called smooth if whenever $Y \to Z$ is an epimorphism of C, the map from $F(Y)$ to $F(Z) \times_{G(Z)} G(Y)$ is surjective. This definition is closely related to the notion of a formally smooth morphism of schemes. If in addition the map between the tangent spaces of $F$ and $G$ is an isomorphism, then $F$ is called a hull of $G$.
Grothendieck's theorem
Grothendieck showed that a functor from the category C of Artinian algebras to sets is pro-representable if and only if it preserves all finite limits. This condition is equivalent to asking that the functor preserves pullbacks and the final object. In fact Grothendieck's theorem applies not only to the category C of Artinian algebras, but to any category with finite limits whose objects are Artinian.
By taking the projective limit of the pro-representable functor in the larger category of linearly topologized local rings, one obtains a complete linearly topologized local ring representing the functor.
Schlessinger's representation theorem
One difficulty in applying Grothendieck's theorem is that it can be hard to check that a functor preserves all pullbacks. Schlessinger showed that it is sufficient to check that the functor preserves pullbacks of a special form, which is often easier to check. Schlessinger's theorem also gives conditions under which the functor has a hull, even if it is not representable.
Schlessinger's theorem gives conditions for a set-valued functor $F$ on C to be representable by a complete local Λ-algebra $R$ with maximal ideal $m$ such that $R/m^n$ is in C for all $n$.
Schlessinger's theorem states that a functor from C to sets with F(k) a 1-element set is representable by a complete Noetherian local algebra if it has the following properties, and has a hull if it has the first three properties:
H1: The map $F(Y \times_X Z) \to F(Y) \times_{F(X)} F(Z)$ is surjective whenever $Z \to X$ is a small extension in C and $Y \to X$ is some morphism in C.
H2: The map in H1 is a bijection whenever $Z \to X$ is the small extension $k[x]/(x^2) \to k$.
H3: The tangent space of F is a finite-dimensional vector space over k.
H4: The map in H1 is a bijection whenever $Y = Z$ is a small extension of $X$ and the maps from $Y$ and $Z$ to $X$ are the same.
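For orientation, the tangent space in H3 can be written compactly. The following LaTeX restatement is a sketch assuming the standard convention (not spelled out above) that the tangent space of $F$ is its value on the dual numbers:

```latex
% Tangent space of a functor F on C (standard convention); condition H2 is
% what endows this set with the structure of a k-vector space:
\[
  t_F \;:=\; F\bigl(k[x]/(x^2)\bigr),
\]
% so that condition H3 simply reads:
\[
  \dim_k t_F \;<\; \infty .
\]
```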
See also
Formal moduli
Artin's criterion
References
Theorems in algebraic geometry | Schlessinger's theorem | [
"Mathematics"
] | 793 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
38,179,812 | https://en.wikipedia.org/wiki/Molypermalloy%20powder%20core | A molypermalloy powder (MPP) core is a toroidal magnetic core made from a powdered nickel–iron–molybdenum alloy. The air gaps distributed between the powder particles help condense the magnetic field and minimize core losses. Its composition is approximately 79% nickel, 17% iron, and 4% molybdenum.
It has the lowest core losses of all the powdered magnetic cores in use. Its relative permeability can range from 14 to 550. (See permeabilities of common materials.)
Toroidal powder cores are used in the construction of magnetic components such as inductors, transformers and electronic filters.
An MPP core possesses many desirable magnetic qualities which make it well suited to such devices. A few of its properties include: low eddy current and hysteresis losses, low permeability change at high temperatures, a high Curie temperature, high electrical resistivity at operating frequency, and excellent inductive stability under both AC and DC currents.
MPP cores are primarily used in inductors that require a core with a higher saturation point while maintaining other valuable magnetic properties. A standard MPP core saturates at around 0.75 tesla, whereas a ferrite core saturates at around 0.45 tesla. Molypermalloy powder cores are commonly used in the making of flyback transformers, resonant circuits, quartz filters, loading coils, choke coils, pulse transformers, and other industrial and military circuits.
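To see how the saturation figures above constrain a design, the following sketch estimates the peak flux density of an ideal toroid from the textbook formula B = μ₀μᵣNI/(2πr); the turn count, current, permeability and radius are hypothetical values, not taken from the article.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def toroid_flux_density(n_turns, current_a, mu_r, mean_radius_m):
    """Peak flux density of an ideal toroid: B = mu_0 * mu_r * N * I / (2*pi*r)."""
    return MU_0 * mu_r * n_turns * current_a / (2 * math.pi * mean_radius_m)

# Hypothetical winding: 100 turns, 2 A peak, relative permeability 125,
# 15 mm mean core radius.
b = toroid_flux_density(100, 2.0, 125, 0.015)
print(f"B = {b:.3f} T")  # ~0.333 T
print("within MPP saturation limit (~0.75 T)" if b < 0.75 else "saturating")
```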
Molybdenum permalloy powder is made by grinding hot-rolled and embrittled cast ingots; then, the alloy is insulated and screened to a fineness of 120 mesh for use in audio frequency applications, and 400 mesh for use at high frequencies.
MPP was developed into cores by the Western Electric Company and Bell Telephone Laboratories (then part of AT&T) in the early 1940s. It has made its largest impact in the power conversion field by permitting increased operating frequency, resulting in weight reduction and increased compactness in computer systems.
As frequency increases, the desired permeability decreases. Thus, at frequencies higher than 500 kHz, MPP cores are often replaced by ferrite cores.
Disadvantages of MPP cores: due to the complexity of their manufacture and tooling, shapes are limited to a toroidal configuration, and these cores generally have a maximum operating frequency of around 1 MHz.
References
Electromagnetic components
Iron compounds
Powders | Molypermalloy powder core | [
"Physics"
] | 516 | [
"Materials",
"Powders",
"Matter"
] |
38,180,478 | https://en.wikipedia.org/wiki/Solar%20particle%20event | In solar physics, a solar particle event (SPE), also known as a solar energetic particle event or solar radiation storm, is a solar phenomenon which occurs when particles emitted by the Sun, mostly protons, become accelerated either in the Sun's atmosphere during a solar flare or in interplanetary space by a coronal mass ejection shock. Other nuclei such as helium and HZE ions may also be accelerated during the event. These particles can penetrate the Earth's magnetic field and cause partial ionization of the ionosphere. Energetic protons are a significant radiation hazard to spacecraft and astronauts.
Description
SPEs occur when charged particles in the Sun's atmosphere are accelerated to extremely high velocities. These charged particles, referred to as solar energetic particles, can escape into interplanetary space where they follow the interplanetary magnetic field.
When solar energetic particles interact with the Earth's magnetosphere, they are guided by the Earth's magnetic field towards the North and South poles where they can penetrate into the upper atmosphere.
Cause
The physical mechanism behind the acceleration of solar energetic particles leading up to SPEs is currently debated. However, SPEs can generally be divided into two classes:
Gradual events
Gradual SPEs are thought to involve the acceleration of particles by shocks driven by coronal mass ejections in the upper corona. They are associated with type II radio bursts and are characterized by elemental abundances, charge states, and temperatures similar to that of the ambient corona. These events produce the highest particle intensities near Earth.
Impulsive events
Impulsive SPEs are thought to involve the acceleration of particles mostly by processes associated with magnetic reconnection and wave-particle interactions at the locations of solar flares. They are associated with short-duration flare emissions at low altitudes and type III radio bursts. They are less intense near Earth than gradual events.
An additional hybrid class has been identified which involves characteristics of both gradual and impulsive events.
Terrestrial effects
Protons accelerated during an SPE normally have insufficient energy to penetrate the Earth's magnetic field. However, during unusually strong flares, protons can be accelerated to sufficient energies to reach the Earth's magnetosphere and ionosphere around the North Pole and South Pole.
Polar cap absorption events
Energetic protons that are guided into the polar regions collide with atmospheric constituents and release their energy through the process of ionization. The majority of the energy is deposited in the extreme lower region (D-region) of the ionosphere (around 50–80 km in altitude). This area is particularly important to ionospheric radio communications because this is where most of the absorption of radio signal energy occurs. The enhanced ionization produced by incoming energetic protons increases the absorption levels in the lower ionosphere and can have the effect of completely blocking all ionospheric radio communications through the polar regions. Such events are known as polar cap absorption events. These events commence, and persist, as long as the flux of incoming protons with energies greater than approximately 10 MeV (million electron volts) exceeds roughly 10 pfu (particle flux units, or particles sr−1 cm−2 s−1) at geosynchronous satellite altitudes.
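A minimal sketch of the threshold test just described, using hypothetical flux readings (the threshold values come from the text above; everything else is illustrative):

```python
THRESHOLD_PFU = 10.0   # particle flux units: particles sr^-1 cm^-2 s^-1
THRESHOLD_MEV = 10.0   # applies to the >10 MeV integral proton flux

def pca_event_active(flux_gt_10mev_pfu):
    """A polar cap absorption event persists while the >10 MeV proton flux
    at geosynchronous altitude exceeds ~10 pfu."""
    return flux_gt_10mev_pfu > THRESHOLD_PFU

hourly_flux = [0.3, 2.1, 15.0, 120.0, 40.0, 8.0]  # hypothetical readings, pfu
for hour, flux in enumerate(hourly_flux):
    status = "PCA event in progress" if pca_event_active(flux) else "quiet"
    print(f"hour {hour}: {flux:6.1f} pfu -> {status}")
```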
Polar cap absorption events and the associated HF radio blackouts pose unique problems to commercial and military aviation. Routes that transit polar regions, especially above about 82 degrees north latitude, can rely only on HF radio communications. Hence, if polar cap absorption events are ongoing or forecast, commercial airlines are required to redirect their routes so that HF communications remain viable.
Ground level enhancements
Extremely intense SPEs capable of producing energetic protons with energies in excess of 200 MeV can increase neutron count rates at ground levels through secondary radiation effects. These rare events are known as ground level enhancements (or GLEs).
Presently, 73 GLE events are known.
The strongest known GLE event was detected on 23 February 1956.
Some events produce large amounts of HZE ions, although their contribution to the total radiation is small compared to the level of protons.
Miyake events
Solar particle events are thought to be responsible for Miyake events, observed sharp enhancements of the concentration of certain isotopes found in tree rings. These events, discovered by physicist Fusa Miyake, have enabled the dating of a number of past SPEs to specific years.
Hazards
Humans
High altitude commercial transpolar aircraft flights have measured increases in radiation during these events. In 2019, the International Civil Aviation Organization introduced the Space Weather Centres that publish space weather advisories pertinent to international air navigation, describing the effects of space weather on aviation and possible mitigation actions. Aircraft flights away from the polar regions are far less likely to see an impact from SPEs.
Significant proton radiation exposure can be experienced by astronauts who are outside of the protective shield of the Earth's magnetosphere, such as an astronaut in-transit to, or located on, the Moon. However, the effects can be minimized if the astronauts are in a low Earth orbit and remain confined to the most heavily shielded regions of their spacecraft. Proton radiation levels in low Earth orbit increase with orbital inclination. Therefore, the closer a spacecraft approaches the polar regions, the greater the exposure to energetic proton radiation will be.
Spacecraft
Energetic protons from SPEs can electrically charge spacecraft to levels that can damage electronic components. They can also cause electronic components to behave erratically. For example, solid state memory on spacecraft can be altered, which may cause data or software contamination and result in unexpected (phantom) spacecraft commands being executed. Energetic proton storms also destroy the efficiency of the solar panels that are designed to collect and convert sunlight to electricity. During years of exposure to energetic proton activity from the Sun, spacecraft can lose a substantial amount of electrical power that may require important instruments to be turned off.
When energetic protons strike the sensitive optical electronics in spacecraft (such as star trackers and other cameras) flashes occur in the images being captured. The effect can be so pronounced that during extreme events, it is not possible to obtain quality images of the Sun or stars. This can cause spacecraft to lose their orientation, which is critical if ground controllers are to maintain control.
Associated phenomena
Major SPEs can be associated with geomagnetic storms that can cause widespread disruption to electrical grids. However, proton events themselves are not responsible for producing anomalies in power grids, nor are they responsible for producing geomagnetic storms. Power grids are only sensitive to fluctuations in the Earth's magnetic field.
See also
Heliophysics
List of solar storms
Solar energetic particles
Space weather
Explanatory notes
References
External links
Solar Particle Events Affecting the Earth Environment 1976 - present
SWPC S-scale
SWPC alert descriptions
Carrington Super Flare, NASA Science News, May 6, 2008
Solar phenomena
Space hazards | Solar particle event | [
"Physics"
] | 1,383 | [
"Physical phenomena",
"Stellar phenomena",
"Solar phenomena"
] |
38,180,687 | https://en.wikipedia.org/wiki/Base%20calling | Base calling is the process of assigning nucleobases to chromatogram peaks, light intensity signals, or electrical current changes resulting from nucleotides passing through a nanopore. One computer program for accomplishing this job is Phred, which is widely used by both academic and commercial DNA sequencing laboratories because of its high base calling accuracy.
Base callers for Nanopore sequencing use neural networks trained on current signals obtained from accurate sequencing data.
Base calling accuracy
Base calling can be assessed by two metrics, read accuracy and consensus accuracy. Read accuracy refers to how accurately the called bases of an individual read match a known reference. Consensus accuracy refers to how accurately a consensus sequence, built from overlapping reads of the same genetic locus, matches the reference.
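Base callers such as Phred report per-base confidence as Phred quality scores, where Q = −10·log₁₀(P_error). The sketch below shows that conversion plus a toy read-accuracy count; the `read_accuracy` helper is hypothetical and ignores alignment, which real pipelines require.

```python
import math

def phred_to_error_prob(q):
    """Probability that a base call with Phred quality Q is wrong: P = 10**(-Q/10)."""
    return 10 ** (-q / 10)

def error_prob_to_phred(p):
    """Inverse conversion: Q = -10 * log10(P)."""
    return -10 * math.log10(p)

def read_accuracy(called, reference):
    """Fraction of positions matching a same-length reference (no alignment)."""
    return sum(c == r for c, r in zip(called, reference)) / len(reference)

print(phred_to_error_prob(30))                 # 0.001 -> 99.9% per-base accuracy
print(error_prob_to_phred(0.001))              # 30.0
print(read_accuracy("ACGTACGT", "ACGTACGA"))   # 0.875
```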
References
Molecular biology
Bioinformatics | Base calling | [
"Chemistry",
"Engineering",
"Biology"
] | 160 | [
"Bioinformatics",
"Biological engineering",
"Biochemistry",
"Molecular biology"
] |
43,807,943 | https://en.wikipedia.org/wiki/Hydrophosphination | Hydrophosphination is the insertion of a carbon-carbon multiple bond into a phosphorus-hydrogen bond forming a new phosphorus-carbon bond. Like other hydrofunctionalizations, the rate and regiochemistry of the insertion reaction is influenced by the catalyst. Catalysts take many forms, but most prevalent are bases and free-radical initiators. Most hydrophosphinations involve reactions of phosphine (PH3).
Acid-base routes
The usual application of hydrophosphination involves reactions of phosphine (PH3). Typically, base catalysis allows addition to Michael acceptors such as acrylonitrile to give tris(cyanoethyl)phosphine:
PH3 + 3 CH2=CHZ → P(CH2CH2Z)3 (Z = NO2, CN, C(O)NH2)
Acid catalysis is applicable to hydrophosphination with alkenes that form stable carbocations. These alkenes include isobutylene and many analogues:
PH3 + R2C=CH2 → R2(CH3)CPH2 (R = Me, alkyl, etc)
Bases catalyze the addition of secondary phosphines to vinyldiphenylphosphine:
HPR2 + CH2=CHPR'2 → R2PCH2CH2PR'2
Free-radical methods
Many hydrophosphination reactions are initiated by free radicals. AIBN and peroxides are typical initiators, as is ultraviolet irradiation. In this way, the commercially important tributylphosphine and trioctylphosphine are prepared in good yields from 1-butene and 1-octene, respectively.
The reactions proceed by abstraction of an H atom from the phosphine precursor, producing the phosphino radical, a seven-valence-electron species. This radical then adds to the alkene, and subsequent H-atom transfer completes the cycle. Some highly efficient hydrophosphinations appear not to proceed via radicals, but alternative explanations are lacking.
Metal-catalyzed reactions
Metal-catalyzed hydrophosphinations are not widely used, although they have been extensively researched. Studies mainly focus on secondary and primary organophosphines (R2PH and RPH2, respectively). These substrates bind to metals, and the resulting adducts insert alkenes and alkynes into the P-H bonds via diverse mechanisms.
Early transition metal and lanthanide catalysts
Metal complexes of d0 configurations are effective catalysts for hydrophosphinations of simple alkenes and alkynes. Intramolecular reactions are facile, e.g. starting with α,ω-pentenylphosphine. The primary phosphine undergoes a σ-bond metathesis with the bis(trimethylsilyl)methylene ligand forming the lanthanide-phosphido complex. Subsequently, the pendant terminal alkene or alkyne inserts into the Ln-P bond. Finally, protonolysis of the Ln-C bond with the starting primary phosphine releases the new phosphine and regenerates the catalyst. Given that the metal is electron-poor, the M-C bond is sufficiently polar to be protonolyzed by the substrate primary phosphine.
Most metal catalyzed hydrophosphinations proceed via metal phosphido intermediates. Some however proceed by metal-phosphinidene intermediates, i.e. species with M=PR double bonds. One such example is the Ti-catalyzed hydrophosphination of diphenylacetylene with phenylphosphine. This system involves a cationic catalyst precursor that is stabilized by the bulky 2,4,6-triisopropylphenyl substituent on the phosphinidene and the close ionic association of methyltris(pentafluorophenyl)borate. This precursor undergoes exchange with phenylphosphine to give the titanium-phenylphosphinidene complex, which is the catalyst. The Ti=PPh species undergoes a [2+2] cycloaddition with diphenylacetylene to make the corresponding metallacyclobutene. The substrate, phenylphosphine, protonolyzes the Ti-C bond and, after a proton shift, regenerates the catalyst and releases the new phosphine.
Titanium-catalyzed 1,4-hydrophosphination of 1,3-dienes with diphenylphosphine has been demonstrated. It is a rare example of a d2 catalyst. In the first step, the Ti(II) precursor inserted in the P-H bond of diphenylphosphine (Ph2PH).
Late transition metal catalysts
Late transition metal hydrophosphination catalysts, i.e. those reliant on the nickel-triad and neighboring elements, generally require alkenes and alkynes with electron withdrawing substituents. A strong base is required as a cocatalyst.
Some late metal hydrophosphination catalysts proceed via oxidative addition of a P-H bond. For example, a Pt(0) catalyst undergoes oxidative addition of a secondary phosphine to form the corresponding hydrido Pt(II) phosphido complex. These systems catalyze hydrophosphination of acrylonitrile, although this reaction can be achieved without metal catalysts. The key P-C bond-forming step occurs through an outer-sphere, Michael-type addition.
The usual mechanism for hydrophosphination for late metal catalysts involves insertion of the alkene into the metal-phosphorus bond. Insertion into the metal-hydrogen bond is also possible. The product phosphine is produced through reductive elimination of a P-C bond rather than a P-H bond in Glueck's system. The Ni(0) catalyst involves oxidation addition of a P-H bond to the metal, followed by insertion of the alkene into the M-H bond.
Hydrophosphorylation and related reactions
Utilizing phosphorus(V) precursors hydrophosphorylation entails the insertion of alkenes and alkynes into the P-H bonds of secondary phosphine oxides:
R2P(O)H + CH2=CHR → R2P(O)CH2CH2R
The reaction can be effected both using metal catalysts or free-radical initiators.
Further reading
References
Addition reactions
Green chemistry
Organophosphanes
Stoichiometry | Hydrophosphination | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,419 | [
"Green chemistry",
"Chemical reaction engineering",
"Stoichiometry",
"Chemical engineering",
"Environmental chemistry",
"nan"
] |
43,815,140 | https://en.wikipedia.org/wiki/Interceptor%20ditch | In geotechnical engineering, an interceptor ditch is a small ditch or channel constructed to intercept and drain water to an area where it can be safely discharged. Interceptor ditches are used for excavations of limited depth in coarse-grained soils and are constructed around the area to be dewatered. Sump pits are placed at suitable intervals for the installation of centrifugal pumps that efficiently remove the collected water. In fine sands and silts, there may be sloughing, erosion or quick conditions; for such soils the method is confined to depths of 1 to 2 m. Interceptor ditches are most economical for carrying away water which emerges on the slopes and near the bottom of the foundation pit. The size of a ditch depends on the original ground slope, runoff area, type of soil and vegetation, and other factors related to runoff volume.
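The article does not prescribe a sizing method, but one common way to estimate the peak runoff a ditch must carry is the rational method, Q = C·i·A; the sketch below is purely illustrative, with a hypothetical runoff coefficient, storm intensity and catchment area.

```python
def rational_method_peak_flow(c_runoff, intensity_mm_per_hr, area_ha):
    """Peak flow Q (m^3/s) from runoff coefficient C (dimensionless), rainfall
    intensity i (mm/h) and catchment area A (hectares): Q = C*i*A / 360."""
    return c_runoff * intensity_mm_per_hr * area_ha / 360.0

# Example: sparsely vegetated slope (C ~ 0.6), 50 mm/h design storm, 2 ha catchment.
q = rational_method_peak_flow(0.6, 50.0, 2.0)
print(f"design peak flow: {q:.3f} m^3/s")  # ~0.167 m^3/s
```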
Construction guidelines
The interceptor ditch commonly consists of a ditch and may have an associated dike.
Sediment control measures may be required to filter or trap sediments before the runoff leaves the construction area.
The construction of the interceptor ditch at the crown of a slope is normally accomplished prior to the excavation of the cut section.
Maintenance
Inspection and maintenance are necessary after the completion of any structure. Some steps in the maintenance of interceptor ditches are summarized below:
Periodic inspection and maintenance will be required based on post-construction site conditions.
Make any repairs necessary to ensure that it is operating properly.
Locate any damaged areas and repair as necessary.
Remove any channel obstructions (particularly waste materials) which would otherwise obstruct dewatering.
See also
Earthworks (engineering)
Digging
References
Soil mechanics
Geotechnical engineering
Civil engineering | Interceptor ditch | [
"Physics",
"Engineering"
] | 341 | [
"Applied and interdisciplinary physics",
"Soil mechanics",
"Geotechnical engineering",
"Construction",
"Civil engineering"
] |
43,815,192 | https://en.wikipedia.org/wiki/Liftware | Liftware is a brand name for a spoon designed to counteract the tremor associated with medical conditions such as Parkinson's disease or essential tremor. The company that designed the product, Lift Labs, was founded by Anupam Pathak, who holds a Ph.D. from the University of Michigan.
The device works by detecting tremors with an accelerometer and then counteracting them with an actuator. The product first became available in December 2013.
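A purely illustrative sketch of active tremor cancellation of the general kind described above; nothing here reflects Liftware's actual firmware, and the sample rate, tremor band and unit gain are all assumptions. Tremor is commonly cited in roughly the 4–12 Hz band, so the sketch band-passes the accelerometer signal and commands an opposing actuator motion.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 200.0  # assumed accelerometer sample rate, Hz
# Band-pass filter isolating an assumed 4-12 Hz tremor band.
b, a = butter(2, [4 / (FS / 2), 12 / (FS / 2)], btype="bandpass")

def counter_command(accel_samples):
    """Return actuator commands opposing the tremor-band component of the
    accelerometer signal (unit gain here; a real controller would be tuned)."""
    tremor = lfilter(b, a, accel_samples)
    return -tremor

t = np.arange(0, 1, 1 / FS)
accel = 0.5 * np.sin(2 * np.pi * 6 * t) + 0.1 * t  # 6 Hz tremor + slow drift
print(counter_command(accel)[:5])  # opposing commands track only the tremor
```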
Lift Labs (which made the Liftware spoon) was acquired by Google in September 2014 for integration into what was then the life sciences division of Google X. Anupam Pathak became the technical lead for the division.
Google launched its version of the spoon in November 2014, priced at $195.
References
External links
Videos demonstrating product
Shaky hand, stable spoon: High-tech help for essential tremor - medical review
Google
Medical equipment
Spoons
Motion control
Parkinson's disease
Verily
Manufacturing companies based in San Francisco
Manufacturing companies established in 2010
2010 establishments in California | Liftware | [
"Physics",
"Engineering",
"Biology"
] | 200 | [
"Physical phenomena",
"Life sciences industry",
"Automation",
"Medical equipment",
"Motion (physics)",
"Motion control",
"Verily",
"Medical technology"
] |
50,063,858 | https://en.wikipedia.org/wiki/X-ray%20motion%20analysis | X-ray motion analysis is a technique used to track the movement of objects using X-rays. This is done by placing the subject to be imaged in the center of the X-ray beam and recording the motion using an image intensifier and a high-speed camera, allowing for high quality videos sampled many times per second. Depending on the settings of the X-rays, this technique can visualize specific structures in an object, such as bones or cartilage. X-ray motion analysis can be used to perform gait analysis, analyze joint movement, or record the motion of bones obscured by soft tissue. The ability to measure skeletal motions is a key aspect to one's understanding of vertebrate biomechanics, energetics, and motor control.
Imaging Methods
Planar
Many X-ray studies are performed with a single X-ray emitter and camera. This type of imaging allows for tracking movements in the two-dimensional plane of the X-ray. Movements are performed parallel to the camera's imaging plane in order for the motion to be accurately tracked. In gait analysis, planar X-ray studies are done in the sagittal plane to allow for highly accurate tracking of large movements. Methods have been developed to allow for estimating all six degrees of freedom of movement from a planar X-ray and a model of the tracked object.
Biplanar
Few movements are truly planar; planar X-ray imaging can capture the majority of movement, but not all of it. Accurately capturing and quantifying all three dimensions of movement requires a biplanar imaging system. Biplanar imaging is difficult to perform because many facilities have access to only one X-ray emitter. With the addition of a second X-ray and camera system, the 2-D plane of imaging expands to a 3-D volume of imaging at the intersection of the X-ray beams. Because the volume of imaging is at the intersection of two X-ray beams, the overall size of it is limited by the area of the X-ray emitters.
Tracking Techniques
Markered
Motion capture techniques often use reflective markers for image capture. In X-ray imaging, markers that appear opaque in the X-ray images are utilized. This frequently involves using radio-opaque spheres attached to the subject. Markers can be implanted in the subject's bones, which then appear visible in the X-ray images. This method requires surgical procedures for implanting and a healing period before the subject can undergo motion analysis. For accurate 3-D tracking, at least three markers need to be implanted onto each bone to be tracked, as in the pose-estimation sketch below. Markers can also be placed on the subject's skin to track the motion of the underlying bones, though markers placed on the skin are sensitive to skin movement artifacts. These are errors in the measured location of a skin-placed marker compared to a bone-implanted marker, and they occur at locations where soft tissue moves more freely than the overlying skin. The markers are then tracked relative to the X-ray camera(s) and the motions are mapped to the local anatomical bodies.
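Why three markers? Three non-collinear points fully determine a rigid body's position and orientation. A standard way to recover that pose (a sketch, not a method prescribed by the article) is the Kabsch algorithm, which finds the least-squares rotation R and translation t mapping the markers' reference coordinates to their tracked coordinates:

```python
import numpy as np

def kabsch(ref, tracked):
    """ref, tracked: (N, 3) arrays of corresponding marker coordinates, N >= 3.
    Returns (R, t) such that tracked ~= ref @ R.T + t (least squares)."""
    ref_c, trk_c = ref.mean(axis=0), tracked.mean(axis=0)
    h = (ref - ref_c).T @ (tracked - trk_c)    # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, trk_c - r @ ref_c

# Synthetic check: three bone markers rotated 30 degrees and translated.
ref = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
th = np.deg2rad(30)
r_true = np.array([[np.cos(th), -np.sin(th), 0.],
                   [np.sin(th),  np.cos(th), 0.],
                   [0., 0., 1.]])
tracked = ref @ r_true.T + np.array([5.0, 2.0, 1.0])
r, t = kabsch(ref, tracked)
print(np.allclose(r, r_true), np.round(t, 3))  # True [5. 2. 1.]
```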
Markerless
Emerging techniques and software are allowing for motion to be tracked without the need for radio-opaque markers. By using a 3-D model of the object being tracked, the object can be overlaid on the images of the X-ray video at each frame. The translations and rotations of the model, as opposed to a set of markers, are then tracked relative to the X-ray camera(s). Using a local coordinate system, these translations and rotations can then be mapped to standard anatomical movements. The 3-D model of the object is generated from any 3-D imaging technique, such as an MRI or CT scan. Markerless tracking has the benefit of being a non-invasive tracking method, avoiding any complications due to surgeries. One difficulty comes from generating the 3-D model in animal studies, as the animals are required to be sedated or sacrificed for the scan.
Analysis
In planar X-ray imaging, the motions of the markers or bodies are tracked in a specialized software. An initial location guess is supplied by the user for the markers or bodies. The software, depending on its capabilities, requires the user to manually locate the markers or bodies for each frame of the video, or can automatically track the locations throughout the video. The automatic tracking has to be monitored for accuracy and may require manually relocating the markers or bodies. After the tracking data is generated for each marker or body of interest, the tracking is applied to the local anatomical bodies. For example, markers placed at the hip and knee would track the motion of the femur. Using knowledge of the local anatomy, these motions can then be translated into anatomical terms of motion in the plane of the X-ray.
In biplanar X-ray imaging, the motions are also tracked in a specialized software. Similar to planar analysis, the user provides an initial location guess and either tracks the markers or bodies manually or the software can automatically track them. However, biplanar analysis requires that all tracking be done on both video frames at the same time, positioning the object in free space. Both X-ray cameras have to be calibrated using an object of known volume. This allows the software to locate the cameras' positions relative to each other and then allows the user to position the 3-D model of the object in line with both video frames. The tracking data is generated for each marker or body and then applied to the local anatomical bodies. The tracking data is then further defined as anatomical terms of motion in free space.
Applications
X-ray motion analysis can be used in human gait analysis to measure the kinematics of the lower limbs. Treadmill gait or overground gait can be measured depending on the mobility of the X-ray system. Other types of movements, such as a jump-cut maneuver, have also been recorded. By combining X-ray motion analysis with force platforms, a joint torque analysis can be performed. Rehabilitation is an important application of X-ray motion analysis. X-ray imaging has been used for medical diagnostic purposes since shortly after its discovery in 1895. X-ray motion analysis can be utilized in joint imaging or analyzing joint-related diseases. It has been used to quantify osteoarthritis in the knee, estimate knee cartilage contact areas, and analyze the results of rotator cuff repair by imaging the shoulder joint, among other applications.
Animal locomotion can also be analyzed with X-ray imaging. As long as the animal can be placed between the X-ray emitter and the camera, the subject can be imaged. Examples of gaits that have been studied are rats, guineafowl, horses, bipedal birds, and frogs, among others. Aside from locomotion, X-ray motion analysis has been utilized in the study and research of other moving morphology analyses, such as pig mastication and movement of the temporomandibular joint in rabbits.
See also
Video motion analysis
Roentgen Stereophotogrammetric Analysis
Radiography
Fluoroscopy
References
X-rays
Fluoroscopy
Film and video technology
Radiography
Motion in computer vision | X-ray motion analysis | [
"Physics"
] | 1,484 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"X-rays",
"Electromagnetic spectrum",
"Motion (physics)",
"Motion in computer vision"
] |
50,069,550 | https://en.wikipedia.org/wiki/Physics%20of%20financial%20markets | Physics of financial markets is a non-orthodox economics discipline that studies financial markets as physical systems. It seeks to understand the nature of financial processes and phenomena by employing the scientific method and avoiding beliefs, unverifiable assumptions and immeasurable notions, not uncommon to economic disciplines.
Physics of financial markets addresses issues such as theory of price formation, price dynamics, market ergodicity, collective phenomena, market self-action, and market instabilities.
Physics of financial markets should not be confused with mathematical finance, which is concerned only with descriptive mathematical modeling of financial instruments without seeking to understand the nature of the underlying processes.
See also
Econophysics
Social physics
Quantum economics
Thermoeconomics
Quantum finance
Kinetic exchange models of markets
Brownian model of financial markets
Ergodicity economics
References
Applied and interdisciplinary physics
Financial markets
Technical analysis | Physics of financial markets | [
"Physics"
] | 167 | [
"Applied and interdisciplinary physics"
] |
50,073,184 | https://en.wikipedia.org/wiki/Generative%20adversarial%20network | A generative adversarial network (GAN) is a class of machine learning frameworks and a prominent framework for approaching generative artificial intelligence. The concept was initially developed by Ian Goodfellow and his colleagues in June 2014. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss.
Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proved useful for semi-supervised learning, fully supervised learning, and reinforcement learning.
The core idea of a GAN is based on the "indirect" training through the discriminator, another neural network that can tell how "realistic" the input seems, which itself is also being updated dynamically. This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner.
GANs are similar to mimicry in evolutionary biology, with an evolutionary arms race between both networks.
Definition
Mathematical
The original GAN is defined as the following game: each probability space $(\Omega, \mu_{\text{ref}})$ defines a GAN game.
There are 2 players: generator and discriminator.
The generator's strategy set is $\mathcal{P}(\Omega)$, the set of all probability measures $\mu_G$ on $\Omega$.
The discriminator's strategy set is the set of Markov kernels $\mu_D : \Omega \to \mathcal{P}[0, 1]$, where $\mathcal{P}[0, 1]$ is the set of probability measures on $[0, 1]$.
The GAN game is a zero-sum game, with objective function
$$L(\mu_G, \mu_D) := \mathbb{E}_{x \sim \mu_{\text{ref}},\, y \sim \mu_D(x)}[\ln y] + \mathbb{E}_{x \sim \mu_G,\, y \sim \mu_D(x)}[\ln(1 - y)].$$
The generator aims to minimize the objective, and the discriminator aims to maximize the objective. The generator's task is to approach $\mu_G \approx \mu_{\text{ref}}$, that is, to match its own output distribution as closely as possible to the reference distribution. The discriminator's task is to output a value close to 1 when the input appears to be from the reference distribution, and a value close to 0 when the input looks like it came from the generator distribution.
In practice
The generative network generates candidates while the discriminative network evaluates them. The contest operates in terms of data distributions. Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution. The generative network's training objective is to increase the error rate of the discriminative network (i.e., "fool" the discriminator network by producing novel candidates that the discriminator thinks are not synthesized (are part of the true data distribution)).
A known dataset serves as the initial training data for the discriminator. Training involves presenting it with samples from the training dataset until it achieves acceptable accuracy. The generator is trained based on whether it succeeds in fooling the discriminator. Typically, the generator is seeded with randomized input that is sampled from a predefined latent space (e.g. a multivariate normal distribution). Thereafter, candidates synthesized by the generator are evaluated by the discriminator. Independent backpropagation procedures are applied to both networks so that the generator produces better samples, while the discriminator becomes more skilled at flagging synthetic samples. When used for image generation, the generator is typically a deconvolutional neural network, and the discriminator is a convolutional neural network.
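A minimal sketch of the alternating training procedure just described, written in PyTorch; the architectures, dimensions, and hyperparameters are illustrative assumptions, not the original paper's settings:

```python
# Minimal GAN training step (sketch). Architectures, dimensions, and
# hyperparameters here are illustrative, not the original paper's settings.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images (assumption)

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    z = torch.randn(b, latent_dim)          # sample from the latent prior
    fake = G(z).detach()                    # freeze G during the D update
    loss_D = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator update: "fool" D, i.e. push D(G(z)) toward 1.
    z = torch.randn(b, latent_dim)
    loss_G = bce(D(G(z)), ones)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```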
Relation to other statistical machine learning methods
GANs are implicit generative models, which means that they do not explicitly model the likelihood function nor provide a means for finding the latent variable corresponding to a given sample, unlike alternatives such as flow-based generative models.
Compared to fully visible belief networks such as WaveNet and PixelRNN and autoregressive models in general, GANs can generate one complete sample in one pass, rather than multiple passes through the network.
Compared to Boltzmann machines and linear ICA, there is no restriction on the type of function used by the network.
Since neural networks are universal approximators, GANs are asymptotically consistent. Variational autoencoders might be universal approximators, but this had not been proven as of 2017.
Mathematical properties
Measure-theoretic considerations
This section provides some of the mathematical theory behind these methods.
In modern probability theory based on measure theory, a probability space also needs to be equipped with a σ-algebra. As a result, a more rigorous definition of the GAN game would make the following changes:

Each probability space $(\Omega, \mathcal{B}, \mu_{\text{ref}})$ defines a GAN game.
The generator's strategy set is $\mathcal{P}(\Omega, \mathcal{B})$, the set of all probability measures $\mu_G$ on the measure-space $(\Omega, \mathcal{B})$.
The discriminator's strategy set is the set of Markov kernels $\mu_D : (\Omega, \mathcal{B}) \to (\mathcal{P}[0, 1], \mathcal{B}(\mathcal{P}[0, 1]))$, where $\mathcal{B}(\mathcal{P}[0, 1])$ is the Borel σ-algebra on $\mathcal{P}[0, 1]$.

Since issues of measurability never arise in practice, these will not concern us further.
Choice of the strategy set
In the most generic version of the GAN game described above, the strategy set for the discriminator contains all Markov kernels $\mu_D : \Omega \to \mathcal{P}[0, 1]$, and the strategy set for the generator contains arbitrary probability distributions $\mu_G$ on $\Omega$.
However, as shown below, the optimal discriminator strategy against any $\mu_G$ is deterministic, so there is no loss of generality in restricting the discriminator's strategies to deterministic functions $D : \Omega \to [0, 1]$. In most applications, $D$ is a deep neural network function.
As for the generator, while $\mu_G$ could theoretically be any computable probability distribution, in practice, it is usually implemented as a pushforward: $\mu_G = \mu_Z \circ G^{-1}$. That is, start with a random variable $z \sim \mu_Z$, where $\mu_Z$ is a probability distribution that is easy to compute (such as the uniform distribution, or the Gaussian distribution), then define a function $G : \Omega_Z \to \Omega$. Then the distribution $\mu_G$ is the distribution of $G(z)$.
Consequently, the generator's strategy is usually defined as just $G$, leaving $z \sim \mu_Z$ implicit. In this formalism, the GAN game objective is
$$L(G, D) := \mathbb{E}_{x \sim \mu_{\text{ref}}}[\ln D(x)] + \mathbb{E}_{z \sim \mu_Z}[\ln(1 - D(G(z)))].$$
Generative reparametrization
The GAN architecture has two main components. One is casting optimization into a game, of form $\min_G \max_D L(G, D)$, which is different from the usual kind of optimization, of form $\min_\theta L(\theta)$. The other is the decomposition of $\mu_G$ into $\mu_Z \circ G^{-1}$, which can be understood as a reparametrization trick.
To see its significance, one must compare GAN with previous methods for learning generative models, which were plagued with "intractable probabilistic computations that arise in maximum likelihood estimation and related strategies".
At the same time, Kingma and Welling and Rezende et al. developed the same idea of reparametrization into a general stochastic backpropagation method. Among its first applications was the variational autoencoder.
Move order and strategic equilibria
In the original paper, as well as most subsequent papers, it is usually assumed that the generator moves first, and the discriminator moves second, thus giving the following minimax game:
$$\min_{\mu_G} \max_{\mu_D} L(\mu_G, \mu_D).$$
If both the generator's and the discriminator's strategy sets are spanned by a finite number of strategies, then by the minimax theorem,
$$\min_{\mu_G} \max_{\mu_D} L(\mu_G, \mu_D) = \max_{\mu_D} \min_{\mu_G} L(\mu_G, \mu_D),$$
that is, the move order does not matter.
However, since the strategy sets are both not finitely spanned, the minimax theorem does not apply, and the idea of an "equilibrium" becomes delicate. To wit, there are the following different concepts of equilibrium:
Equilibrium when generator moves first, and discriminator moves second:
$$\mu_G^* \in \arg\min_{\mu_G} \max_{\mu_D} L(\mu_G, \mu_D), \qquad \mu_D^* \in \arg\max_{\mu_D} L(\mu_G^*, \mu_D)$$
Equilibrium when discriminator moves first, and generator moves second:
$$\mu_D^* \in \arg\max_{\mu_D} \min_{\mu_G} L(\mu_G, \mu_D), \qquad \mu_G^* \in \arg\min_{\mu_G} L(\mu_G, \mu_D^*)$$
Nash equilibrium $(\mu_G^*, \mu_D^*)$, which is stable under simultaneous move order:
$$\mu_G^* \in \arg\min_{\mu_G} L(\mu_G, \mu_D^*), \qquad \mu_D^* \in \arg\max_{\mu_D} L(\mu_G^*, \mu_D)$$
For general games, these equilibria do not have to agree, or even to exist. For the original GAN game, these equilibria all exist, and are all equal. However, for more general GAN games, these do not necessarily exist, or agree.
Main theorems for GAN game
The original GAN paper proved the following two theorems:
Interpretation: For any fixed generator strategy $\mu_G$, the optimal discriminator keeps track of the likelihood ratio between the reference distribution and the generator distribution:
$$D^*(x) = \frac{\mu_{\text{ref}}(x)}{\mu_{\text{ref}}(x) + \mu_G(x)} = \sigma\!\left(\ln \mu_{\text{ref}}(x) - \ln \mu_G(x)\right),$$
where $\sigma$ is the logistic function.
In particular, if the prior probability for an image $x$ to come from the reference distribution is equal to $\frac{1}{2}$, then $D^*(x)$ is just the posterior probability that $x$ came from the reference distribution:
$$D^*(x) = \Pr(x \text{ came from } \mu_{\text{ref}} \mid x).$$
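The optimal-discriminator formula can be checked by pointwise maximization; a short sketch in the notation above, writing $a = \mu_{\text{ref}}(x)$ and $b = \mu_G(x)$ for the densities at a fixed $x$:

```latex
% For fixed x, the discriminator chooses y = D(x) to maximize
%   a ln y + b ln(1 - y),  with  a = mu_ref(x),  b = mu_G(x).
\frac{d}{dy}\bigl[a \ln y + b \ln(1 - y)\bigr]
  = \frac{a}{y} - \frac{b}{1 - y} = 0
\;\Longrightarrow\;
y^{*} = \frac{a}{a + b}
      = \frac{\mu_{\mathrm{ref}}(x)}{\mu_{\mathrm{ref}}(x) + \mu_{G}(x)}
      = \sigma\bigl(\ln \mu_{\mathrm{ref}}(x) - \ln \mu_{G}(x)\bigr).
```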
Training and evaluating GAN
Training
Unstable convergence
While the GAN game has a unique global equilibrium point when both the generator and discriminator have access to their entire strategy sets, the equilibrium is no longer guaranteed when they have a restricted strategy set.
In practice, the generator has access only to measures of form $\mu_Z \circ G_\theta^{-1}$, where $G_\theta$ is a function computed by a neural network with parameters $\theta$, and $\mu_Z$ is an easily sampled distribution, such as the uniform or normal distribution. Similarly, the discriminator has access only to functions of form $D_\zeta$, a function computed by a neural network with parameters $\zeta$. These restricted strategy sets take up a vanishingly small proportion of their entire strategy sets.
Further, even if an equilibrium still exists, it can only be found by searching in the high-dimensional space of all possible neural network functions. The standard strategy of using gradient descent to find the equilibrium often does not work for GAN, and often the game "collapses" into one of several failure modes. To improve the convergence stability, some training strategies start with an easier task, such as generating low-resolution images or simple images (one object with uniform background), and gradually increase the difficulty of the task during training. This essentially translates to applying a curriculum learning scheme.
Mode collapse
GANs often suffer from mode collapse where they fail to generalize properly, missing entire modes from the input data. For example, a GAN trained on the MNIST dataset containing many samples of each digit might only generate pictures of digit 0. This was termed "the Helvetica scenario".
One way this can happen is if the generator learns too fast compared to the discriminator. If the discriminator $D$ is held constant, then the optimal generator would only output elements of $\arg\max_x D(x)$. So for example, if during GAN training on the MNIST dataset, for a few epochs, the discriminator somehow prefers the digit 0 slightly more than other digits, the generator may seize the opportunity to generate only digit 0, then be unable to escape the local minimum after the discriminator improves.
Some researchers perceive the root problem to be a weak discriminative network that fails to notice the pattern of omission, while others assign blame to a bad choice of objective function. Many solutions have been proposed, but it is still an open problem.
Even the state-of-the-art architecture, BigGAN (2019), could not avoid mode collapse. The authors resorted to "allowing collapse to occur at the later stages of training, by which time a model is sufficiently trained to achieve good results".
Two time-scale update rule
The two time-scale update rule (TTUR) is proposed to make GAN convergence more stable by making the learning rate of the generator lower than that of the discriminator. The authors argued that the generator should move slower than the discriminator, so that it does not "drive the discriminator steadily into new regions without capturing its gathered information".
They proved that a general class of games that included the GAN game, when trained under TTUR, "converges under mild assumptions to a stationary local Nash equilibrium".
They also proposed using the Adam stochastic optimization to avoid mode collapse, as well as the Fréchet inception distance for evaluating GAN performances.
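A minimal sketch of TTUR in PyTorch, reusing the `G` and `D` networks from the earlier training-loop sketch; the exact learning rates are illustrative (the 1:4 ratio is a commonly used choice), not prescribed by the source:

```python
import torch

# TTUR: the only change to standard GAN training is the optimizer settings --
# the discriminator gets a larger learning rate than the generator.
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_D = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.9))
# The training loop itself is unchanged: alternate D and G updates.
```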
Vanishing gradient
Conversely, if the discriminator learns too fast compared to the generator, then the discriminator could almost perfectly distinguish $\mu_G$ from $\mu_{\text{ref}}$. In such a case, the generator $G_\theta$ could be stuck with a very high loss no matter which direction it changes its $\theta$, meaning that the gradient would be close to zero. In such a case, the generator cannot learn, a case of the vanishing gradient problem.
Intuitively speaking, the discriminator is too good, and since the generator cannot take any small step (only small steps are considered in gradient descent) to improve its payoff, it does not even try.
One important method for solving this problem is the Wasserstein GAN.
Evaluation
GANs are usually evaluated by Inception score (IS), which measures how varied the generator's outputs are (as classified by an image classifier, usually Inception-v3), or Fréchet inception distance (FID), which measures how similar the generator's outputs are to a reference set (as classified by a learned image featurizer, such as Inception-v3 without its final layer). Many papers that propose new GAN architectures for image generation report how their architectures break the state of the art on FID or IS.
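A minimal NumPy/SciPy sketch of the FID computation: two Gaussians are fitted to the real and generated feature sets, and the Fréchet distance between them is returned. `feats_real` and `feats_fake` are assumed to be matrices of Inception-v3 features (one row per image) extracted beforehand; the helper name is illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    # Fit a Gaussian (mean, covariance) to each feature set.
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):   # drop tiny imaginary numerical noise
        covmean = covmean.real
    diff = mu1 - mu2
    # Fréchet distance: ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```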
Another evaluation method is the Learned Perceptual Image Patch Similarity (LPIPS), which starts with a learned image featurizer $f$, and finetunes it by supervised learning on a set of triples $(x, x', c)$, where $x$ is an image, $x'$ is a perturbed version of it, and $c$ is how much they differ, as reported by human subjects. The model is finetuned so that the distance between $f(x)$ and $f(x')$ approximates $c$. This finetuned featurizer is then used to define the LPIPS distance between images.
Other evaluation methods are reviewed in the literature.
Variants
There is a veritable zoo of GAN variants. Some of the most prominent are as follows:
Conditional GAN
Conditional GANs are similar to standard GANs except they allow the model to conditionally generate samples based on additional information. For example, if we want to generate a cat face given a dog picture, we could use a conditional GAN.
The generator in a GAN game generates $\mu_G$, a probability distribution on the probability space $\Omega$. This leads to the idea of a conditional GAN, where instead of generating one probability distribution on $\Omega$, the generator generates a different probability distribution $\mu_G(c)$ on $\Omega$, for each given class label $c$.
For example, for generating images that look like ImageNet, the generator should be able to generate a picture of cat when given the class label "cat".
In the original paper, the authors noted that GAN can be trivially extended to conditional GAN by providing the labels to both the generator and the discriminator.
Concretely, the conditional GAN game is just the GAN game with class labels provided:
$$L(\mu_G, D) := \mathbb{E}_{c \sim \mu_C,\, x \sim \mu_{\text{ref}}(c)}[\ln D(x, c)] + \mathbb{E}_{c \sim \mu_C,\, x \sim \mu_G(c)}[\ln(1 - D(x, c))],$$
where $\mu_C$ is a probability distribution over classes, $\mu_{\text{ref}}(c)$ is the probability distribution of real images of class $c$, and $\mu_G(c)$ the probability distribution of images generated by the generator when given class label $c$.
In 2017, a conditional GAN learned to generate 1000 image classes of ImageNet.
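A minimal PyTorch sketch of this conditioning: the class label is one-hot encoded and concatenated to the generator's noise input and to the discriminator's image input, as in the original conditional GAN formulation. The helper names and the 10-class setting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

n_classes = 10  # illustrative

def g_input(z, labels):
    # Generator sees noise plus the one-hot class label.
    return torch.cat([z, F.one_hot(labels, n_classes).float()], dim=1)

def d_input(x, labels):
    # Discriminator sees the (flattened) image plus the same label.
    return torch.cat([x, F.one_hot(labels, n_classes).float()], dim=1)
```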
GANs with alternative architectures
The GAN game is a general framework and can be run with any reasonable parametrization of the generator $G$ and discriminator $D$. In the original paper, the authors demonstrated it using multilayer perceptron networks and convolutional neural networks. Many alternative architectures have been tried.
Deep convolutional GAN (DCGAN): For both generator and discriminator, uses only deep networks consisting entirely of convolution-deconvolution layers, that is, fully convolutional networks.
Self-attention GAN (SAGAN): Starts with the DCGAN, then adds residually-connected standard self-attention modules to the generator and discriminator.
Variational autoencoder GAN (VAEGAN): Uses a variational autoencoder (VAE) for the generator.
Transformer GAN (TransGAN): Uses the pure transformer architecture for both the generator and discriminator, entirely devoid of convolution-deconvolution layers.
Flow-GAN: Uses flow-based generative model for the generator, allowing efficient computation of the likelihood function.
GANs with alternative objectives
Many GAN variants are merely obtained by changing the loss functions for the generator and discriminator.
Original GAN:

We recast the original GAN objective into a form more convenient for comparison:
$$L_D = -\mathbb{E}_{x \sim \mu_{\text{ref}}}[\ln D(x)] - \mathbb{E}_{z \sim \mu_Z}[\ln(1 - D(G(z)))], \qquad L_G = -L_D.$$

Original GAN, non-saturating loss:
$$L_G = -\mathbb{E}_{z \sim \mu_Z}[\ln D(G(z))]$$

This objective for the generator was recommended in the original paper for faster convergence; a code sketch contrasting the two generator losses follows this list. The effect of using this objective is analyzed in Section 2.2.2 of Arjovsky et al.
Original GAN, maximum likelihood:
$$L_G = -\mathbb{E}_{z \sim \mu_Z}\!\left[\exp\!\big(\sigma^{-1}(D(G(z)))\big)\right],$$
where $\sigma$ is the logistic function. When the discriminator is optimal, the generator gradient is the same as in maximum likelihood estimation, even though GAN cannot perform maximum likelihood estimation itself.
Hinge loss GAN:
$$L_D = -\mathbb{E}_{x \sim \mu_{\text{ref}}}[\min(0, -1 + D(x))] - \mathbb{E}_{z \sim \mu_Z}[\min(0, -1 - D(G(z)))], \qquad L_G = -\mathbb{E}_{z \sim \mu_Z}[D(G(z))]$$

Least squares GAN:
$$L_D = \mathbb{E}_{x \sim \mu_{\text{ref}}}[(D(x) - b)^2] + \mathbb{E}_{z \sim \mu_Z}[(D(G(z)) - a)^2], \qquad L_G = \mathbb{E}_{z \sim \mu_Z}[(D(G(z)) - c)^2],$$
where $a, b, c$ are parameters to be chosen. The authors recommended $a = -1$, $b = 1$, $c = 0$.
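As referenced above, a minimal PyTorch sketch contrasting the saturating and non-saturating generator objectives; `d_fake` is assumed to be the discriminator's output on a batch of generated images.

```python
import torch

def g_loss_saturating(d_fake):
    # Minimize E[log(1 - D(G(z)))]; gradient vanishes when D is confident.
    return torch.log(1.0 - d_fake).mean()

def g_loss_nonsaturating(d_fake):
    # Minimize -E[log D(G(z))]; strong gradient when D rejects fakes.
    return -torch.log(d_fake).mean()
```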
Wasserstein GAN (WGAN)
The Wasserstein GAN modifies the GAN game at two points:
The discriminator's strategy set is the set of measurable functions of type $D : \Omega \to \mathbb{R}$ with bounded Lipschitz norm: $\|D\|_L \le K$, where $K$ is a fixed positive constant.
The objective is
$$L_{\text{WGAN}}(\mu_G, D) := \mathbb{E}_{x \sim \mu_{\text{ref}}}[D(x)] - \mathbb{E}_{x \sim \mu_G}[D(x)].$$
One of its purposes is to solve the problem of mode collapse (see above). The authors claim "In no experiment did we see evidence of mode collapse for the WGAN algorithm".
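A minimal PyTorch sketch of the WGAN losses, with the Lipschitz bound enforced by weight clipping as in the original WGAN paper (the clip value 0.01 is the paper's default); the function names are illustrative, and the critic outputs unbounded scores rather than probabilities.

```python
import torch

def critic_loss(d_real, d_fake):
    # The critic maximizes E[D(real)] - E[D(fake)]; minimize the negative.
    return -(d_real.mean() - d_fake.mean())

def generator_loss(d_fake):
    return -d_fake.mean()

def clip_weights(critic, c=0.01):
    # Crude Lipschitz enforcement: clamp every parameter to [-c, c].
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)
```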
GANs with more than two players
Adversarial autoencoder
An adversarial autoencoder (AAE) is more autoencoder than GAN. The idea is to start with a plain autoencoder, but train a discriminator to discriminate the latent vectors from a reference distribution (often the normal distribution).
InfoGAN
In conditional GAN, the generator receives both a noise vector $z$ and a label $c$, and produces an image $G(z, c)$. The discriminator receives image-label pairs $(x, c)$, and computes $D(x, c)$.
When the training dataset is unlabeled, conditional GAN does not work directly.
The idea of InfoGAN is to decree that every latent vector in the latent space can be decomposed as $(z, c)$: an incompressible noise part $z$, and an informative label part $c$, and encourage the generator to comply with the decree, by encouraging it to maximize $I(c; G(z, c))$, the mutual information between $c$ and $G(z, c)$, while making no demands on the mutual information between $z$ and $G(z, c)$.
Unfortunately, $I(c; G(z, c))$ is intractable in general. The key idea of InfoGAN is Variational Mutual Information Maximization: indirectly maximize it by maximizing a lower bound
$$\hat{I}(G, Q) := \mathbb{E}_{z \sim \mu_Z,\, c \sim \mu_C}[\ln Q(c \mid G(z, c))] + H(c), \qquad I(c; G(z, c)) \ge \hat{I}(G, Q),$$
where $Q$ ranges over all Markov kernels of type $Q : \Omega_X \to \mathcal{P}(\Omega_C)$.
The InfoGAN game is defined as follows:

Three probability spaces define an InfoGAN game:
$(\Omega_X, \mu_{\text{ref}})$, the space of reference images.
$(\Omega_Z, \mu_Z)$, the fixed random noise generator.
$(\Omega_C, \mu_C)$, the fixed random information generator.
There are 3 players in 2 teams: generator, Q, and discriminator. The generator and Q are on one team, and the discriminator on the other team.
The objective function is
$$L(G, Q, D) := L_{\text{GAN}}(G, D) - \lambda \hat{I}(G, Q),$$
where $L_{\text{GAN}}(G, D)$ is the original GAN game objective, and $\lambda$ is a positive constant.
Generator-Q team aims to minimize the objective, and discriminator aims to maximize it:
$$\min_{G, Q} \max_{D} L(G, Q, D).$$
Bidirectional GAN (BiGAN)
The standard GAN generator is a function of type $G : \Omega_Z \to \Omega_X$, that is, it is a mapping from a latent space $\Omega_Z$ to the image space $\Omega_X$. This can be understood as a "decoding" process, whereby every latent vector $z \in \Omega_Z$ is a code for an image $x \in \Omega_X$, and the generator performs the decoding. This naturally leads to the idea of training another network that performs "encoding", creating an autoencoder out of the encoder-generator pair.
Already in the original paper, the authors noted that "Learned approximate inference can be performed by training an auxiliary network to predict $z$ given $x$". The bidirectional GAN architecture performs exactly this.
The BiGAN is defined as follows:

Two probability spaces define a BiGAN game:
$(\Omega_X, \mu_X)$, the space of reference images.
$(\Omega_Z, \mu_Z)$, the latent space.
There are 3 players in 2 teams: generator, encoder, and discriminator. The generator and encoder are on one team, and the discriminator on the other team.
The generator's strategies are functions $G : \Omega_Z \to \Omega_X$, and the encoder's strategies are functions $E : \Omega_X \to \Omega_Z$. The discriminator's strategies are functions $D : \Omega_X \times \Omega_Z \to [0, 1]$.
The objective function is
$$L(G, E, D) := \mathbb{E}_{x \sim \mu_X}[\ln D(x, E(x))] + \mathbb{E}_{z \sim \mu_Z}[\ln(1 - D(G(z), z))].$$
Generator-encoder team aims to minimize the objective, and discriminator aims to maximize it:
$$\min_{G, E} \max_{D} L(G, E, D).$$
In the paper, they gave a more abstract definition of the objective as:
$$L(G, E, D) := \mathbb{E}_{(x, z) \sim \mu_E}[\ln D(x, z)] + \mathbb{E}_{(x, z) \sim \mu_G}[\ln(1 - D(x, z))],$$
where $\mu_E$ is the probability distribution on $\Omega_X \times \Omega_Z$ obtained by pushing forward $\mu_X$ via $x \mapsto (x, E(x))$, and $\mu_G$ is the probability distribution on $\Omega_X \times \Omega_Z$ obtained by pushing forward $\mu_Z$ via $z \mapsto (G(z), z)$.
Applications of bidirectional models include semi-supervised learning, interpretable machine learning, and neural machine translation.
CycleGAN
CycleGAN is an architecture for performing translations between two domains, such as between photos of horses and photos of zebras, or photos of night cities and photos of day cities.
The CycleGAN game is defined as follows:

There are two probability spaces $(\Omega_X, \mu_X)$ and $(\Omega_Y, \mu_Y)$, corresponding to the two domains needed for translations fore-and-back.
There are 4 players in 2 teams: generators $G_X : \Omega_X \to \Omega_Y$ and $G_Y : \Omega_Y \to \Omega_X$, and discriminators $D_X, D_Y$.
The objective function is
$$L(G_X, G_Y, D_X, D_Y) = L_{\text{GAN}}(G_X, D_Y) + L_{\text{GAN}}(G_Y, D_X) + \lambda L_{\text{cycle}}(G_X, G_Y),$$
where $\lambda$ is a positive adjustable parameter, $L_{\text{GAN}}$ is the GAN game objective, and $L_{\text{cycle}}$ is the cycle consistency loss:
$$L_{\text{cycle}}(G_X, G_Y) = \mathbb{E}_{x \sim \mu_X}\|G_Y(G_X(x)) - x\|_1 + \mathbb{E}_{y \sim \mu_Y}\|G_X(G_Y(y)) - y\|_1.$$
The generators aim to minimize the objective, and the discriminators aim to maximize it:
$$\min_{G_X, G_Y} \max_{D_X, D_Y} L(G_X, G_Y, D_X, D_Y).$$
Unlike previous work like pix2pix, which requires paired training data, CycleGAN requires no paired data. For example, to train a pix2pix model to turn a summer scenery photo to winter scenery photo and back, the dataset must contain pairs of the same place in summer and winter, shot at the same angle; CycleGAN would only need a set of summer scenery photos, and an unrelated set of winter scenery photos.
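A minimal PyTorch sketch of the cycle-consistency term defined above; `G_XY` and `G_YX` are hypothetical names for the two translators, and the L1 penalty mirrors the formula:

```python
import torch

def cycle_loss(G_XY, G_YX, x, y):
    # X -> Y -> X and Y -> X -> Y should reproduce the inputs (L1 norm).
    return (torch.abs(G_YX(G_XY(x)) - x).mean()
            + torch.abs(G_XY(G_YX(y)) - y).mean())
```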
GANs with particularly large or small scales
BigGAN
The BigGAN is essentially a self-attention GAN trained on a large scale (up to 80 million parameters) to generate large images of ImageNet (up to 512 x 512 resolution), with numerous engineering tricks to make it converge.
Invertible data augmentation
When there is insufficient training data, the reference distribution $\mu_{\text{ref}}$ cannot be well-approximated by the empirical distribution given by the training dataset. In such cases, data augmentation can be applied, to allow training GANs on smaller datasets. Naïve data augmentation, however, brings its own problems.
Consider the original GAN game, slightly reformulated as follows:
$$\min_G \max_D \; \mathbb{E}_{x \sim \mu_{\text{ref}}}[\ln D(x)] + \mathbb{E}_{x \sim \mu_G}[\ln(1 - D(x))].$$
Now we use data augmentation by randomly sampling semantic-preserving transforms $T$ and applying them to the dataset, to obtain the reformulated GAN game:
$$\min_G \max_D \; \mathbb{E}_{x \sim \mu_{\text{ref}},\, T \sim \mu_{\text{trans}}}[\ln D(T(x))] + \mathbb{E}_{x \sim \mu_G}[\ln(1 - D(x))].$$
This is equivalent to a GAN game with a different distribution $\mu_{\text{ref}}'$, sampled by $T(x)$, with $x \sim \mu_{\text{ref}}$ and $T \sim \mu_{\text{trans}}$. For example, if $\mu_{\text{ref}}$ is the distribution of images in ImageNet, and $\mu_{\text{trans}}$ samples identity-transform with probability 0.5, and horizontal-reflection with probability 0.5, then $\mu_{\text{ref}}'$ is the distribution of images in ImageNet and horizontally-reflected ImageNet, combined.
The result of such training would be a generator that mimics . For example, it would generate images that look like they are randomly cropped, if the data augmentation uses random cropping.
The solution is to apply data augmentation to both generated and real images:
$$\min_G \max_D \; \mathbb{E}_{x \sim \mu_{\text{ref}},\, T \sim \mu_{\text{trans}}}[\ln D(T(x))] + \mathbb{E}_{x \sim \mu_G,\, T \sim \mu_{\text{trans}}}[\ln(1 - D(T(x)))].$$
The authors demonstrated high-quality generation using datasets of just 100 pictures.
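A minimal PyTorch sketch of this balanced augmentation: the same transform distribution is applied to real and generated batches before the discriminator sees them, so the discriminator cannot use the augmentation itself as a cue. The horizontal flip with probability 0.3 is an illustrative transform (and, per the invertibility criterion below, an invertible one, since 0.3 differs from 0.5).

```python
import torch

P_FLIP = 0.3  # p != 0.5 keeps this augmentation kernel invertible

def augment(batch):
    # Flip each image horizontally with probability P_FLIP.
    mask = torch.rand(batch.size(0)) < P_FLIP
    batch = batch.clone()
    batch[mask] = torch.flip(batch[mask], dims=[-1])
    return batch

def d_loss_augmented(D, real, fake, bce=torch.nn.BCELoss()):
    # Identical augmentation distribution on both streams.
    r, f = D(augment(real)), D(augment(fake.detach()))
    return bce(r, torch.ones_like(r)) + bce(f, torch.zeros_like(f))
```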
The StyleGAN-2-ADA paper points out a further requirement on data augmentation: it must be invertible. Continue with the example of generating ImageNet pictures. If the data augmentation is "randomly rotate the picture by 0, 90, 180, 270 degrees with equal probability", then there is no way for the generator to know which is the true orientation: consider two generators $G, G'$, such that for any latent $z$, the generated image $G(z)$ is a 90-degree rotation of $G'(z)$. They would have exactly the same expected loss, and so neither is preferred over the other.
The solution is to only use invertible data augmentation: instead of "randomly rotate the picture by 0, 90, 180, 270 degrees with equal probability", use "randomly rotate the picture by 90, 180, 270 degrees with probability 0.1 each, and keep the picture as it is with probability 0.7". This way, the generator is still rewarded for keeping images oriented the same way as un-augmented ImageNet pictures.
Abstractly, the effect of randomly sampling transformations $T$ from the distribution $\mu_{\text{trans}}$ is to define a Markov kernel $K_{\text{trans}} : \Omega \to \mathcal{P}(\Omega)$. Then, the data-augmented GAN game pushes the generator to find some $\hat{\mu}_G$, such that
$$K_{\text{trans}} * \hat{\mu}_G = K_{\text{trans}} * \mu_{\text{ref}},$$
where $*$ is the Markov kernel convolution.
A data-augmentation method is defined to be invertible if its Markov kernel $K_{\text{trans}}$ satisfies
$$K_{\text{trans}} * \mu = K_{\text{trans}} * \mu' \implies \mu = \mu'.$$
Immediately by definition, we see that composing multiple invertible data-augmentation methods results in yet another invertible method. Also by definition, if the data-augmentation method is invertible, then using it in a GAN game does not change the optimal strategy for the generator, which is still $\mu_G = \mu_{\text{ref}}$.
There are two prototypical examples of invertible Markov kernels:
Discrete case: Invertible stochastic matrices, when $\Omega$ is finite.
For example, if $\Omega$ is the set of four images of an arrow, pointing in 4 directions, and the data augmentation is "randomly rotate the picture by 90, 180, 270 degrees with probability $p$ each, and keep the picture as it is with probability $1 - 3p$", then the Markov kernel $K_{\text{trans}}$ can be represented as a stochastic matrix:
$$K = \begin{bmatrix} 1 - 3p & p & p & p \\ p & 1 - 3p & p & p \\ p & p & 1 - 3p & p \\ p & p & p & 1 - 3p \end{bmatrix},$$
and $K_{\text{trans}}$ is an invertible kernel iff $K$ is an invertible matrix, that is, $p \ne 1/4$ (a numerical check of this criterion appears after these examples).
Continuous case: The gaussian kernel, when $\Omega = \mathbb{R}^n$ for some $n$.
For example, if $\Omega$ is the space of 256x256 images, and the data-augmentation method is "generate a gaussian noise $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$, then add $\epsilon$ to the image", then $K_{\text{trans}}$ is just convolution by the density function of $\mathcal{N}(0, \sigma^2 I)$. This is invertible, because convolution by a gaussian is just convolution by the heat kernel, so given any $\mu \in \mathcal{P}(\mathbb{R}^n)$, the convolved distribution $K_{\text{trans}} * \mu$ can be obtained by heating up $\mathbb{R}^n$ precisely according to $\mu$, then waiting for time $\sigma^2/2$. With that, we can recover $\mu$ by running the heat equation backwards in time for $\sigma^2/2$.
More examples of invertible data augmentations are found in the paper.
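As referenced above, a small NumPy check of the discrete-case criterion: the determinant of the rotation kernel works out to $(1 - 4p)^3$, which vanishes exactly at $p = 1/4$.

```python
import numpy as np

def rotation_kernel(p):
    # Stochastic matrix for: rotate by 90/180/270 degrees w.p. p each,
    # keep the image unchanged w.p. 1 - 3p.
    K = np.full((4, 4), p)
    np.fill_diagonal(K, 1.0 - 3.0 * p)
    return K

print(np.linalg.det(rotation_kernel(0.10)))  # 0.216 -> invertible
print(np.linalg.det(rotation_kernel(0.25)))  # ~0.0  -> not invertible
```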
SinGAN
SinGAN pushes data augmentation to the limit, by using only a single image as training data and performing data augmentation on it. The GAN architecture is adapted to this training method by using a multi-scale pipeline.
The generator is decomposed into a pyramid of generators $G_1, \ldots, G_N$, with the lowest one, $G_1$, generating the image at the lowest resolution; the generated image is then scaled up and fed to the next level, $G_2$, to generate an image at a higher resolution, and so on. The discriminator is decomposed into a pyramid as well.
StyleGAN series
The StyleGAN family is a series of architectures published by Nvidia's research division.
Progressive GAN
Progressive GAN is a method for training GAN for large-scale image generation stably, by growing a GAN generator from small to large scale in a pyramidal fashion. Like SinGAN, it decomposes the generator as $G = G_N \circ G_{N-1} \circ \cdots \circ G_1$, and the discriminator as $D = D_1 \circ D_2 \circ \cdots \circ D_N$.
During training, at first only $G_1, D_1$ are used in a GAN game to generate 4x4 images. Then $G_2, D_2$ are added to reach the second stage of the GAN game, to generate 8x8 images, and so on, until we reach a GAN game to generate 1024x1024 images.
To avoid shock between stages of the GAN game, each new layer is "blended in" (Figure 2 of the paper). For example, this is how the second stage GAN game starts:
Just before, the GAN game consists of the pair $(G_1, D_1)$, generating and discriminating 4x4 images.
Just after, the GAN game consists of the pair $\big((1 - \alpha)\,(u \circ G_1) + \alpha\,(G_2 \circ G_1),\;\; (1 - \alpha)\,(D_1 \circ d) + \alpha\,(D_1 \circ D_2)\big)$, generating and discriminating 8x8 images. Here, the functions $u, d$ are image up- and down-sampling functions, and $\alpha$ is a blend-in factor (much like an alpha in image composing) that smoothly glides from 0 to 1.
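A minimal PyTorch sketch of the generator-side blend-in; the discriminator side mirrors it with downsampling. Shapes and the `new_stage` module (assumed to map the 4x4 output to an 8x8 one) are illustrative.

```python
import torch.nn.functional as F

def blended_output(x4, new_stage, alpha):
    # Convex combination of the upsampled old output and the new
    # stage's output; alpha glides from 0 to 1 during training.
    up = F.interpolate(x4, scale_factor=2, mode="nearest")  # 4x4 -> 8x8
    return (1.0 - alpha) * up + alpha * new_stage(x4)
```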
StyleGAN-1
StyleGAN-1 is designed as a combination of Progressive GAN with neural style transfer.
The key architectural choice of StyleGAN-1 is a progressive growth mechanism, similar to Progressive GAN. Each generated image starts as a constant array, and is repeatedly passed through style blocks. Each style block applies a "style latent vector" via an affine transform ("adaptive instance normalization"), similar to how neural style transfer uses the Gramian matrix. It then adds noise and normalizes (subtracts the mean, then divides by the variance).
At training time, usually only one style latent vector is used per image generated, but sometimes two ("mixing regularization") in order to encourage each style block to independently perform its stylization without expecting help from other style blocks (since they might receive an entirely different style latent vector).
After training, multiple style latent vectors can be fed into each style block. Those fed to the lower layers control the large-scale styles, and those fed to the higher layers control the fine-detail styles.
Style-mixing between two images $x_1, x_2$ can be performed as well. First, run a gradient descent to find latent vectors $z_1, z_2$ such that $G(z_1) \approx x_1$ and $G(z_2) \approx x_2$. This is called "projecting an image back to style latent space". Then, $z_1$ can be fed to the lower style blocks, and $z_2$ to the higher style blocks, to generate a composite image that has the large-scale style of $x_1$, and the fine-detail style of $x_2$. Multiple images can also be composed this way.
StyleGAN-2
StyleGAN-2 improves upon StyleGAN-1, by using the style latent vector to transform the convolution layer's weights instead, thus solving the "blob" problem.
This was updated by the StyleGAN-2-ADA ("ADA" stands for "adaptive"), which uses invertible data augmentation as described above. It also tunes the amount of data augmentation applied by starting at zero, and gradually increasing it until an "overfitting heuristic" reaches a target level, thus the name "adaptive".
StyleGAN-3
StyleGAN-3 improves upon StyleGAN-2 by solving the "texture sticking" problem, which can be seen in the official videos. They analyzed the problem by the Nyquist–Shannon sampling theorem, and argued that the layers in the generator learned to exploit the high-frequency signal in the pixels they operate upon.
To solve this, they proposed imposing strict lowpass filters between each generator's layers, so that the generator is forced to operate on the pixels in a way faithful to the continuous signals they represent, rather than operate on them as merely discrete signals. They further imposed rotational and translational invariance by using more signal filters. The resulting StyleGAN-3 is able to solve the texture sticking problem, as well as generating images that rotate and translate smoothly.
Other uses
Other than for generative and discriminative modelling of data, GANs have been used for other things.
GANs have been used for transfer learning to enforce the alignment of the latent feature space, such as in deep reinforcement learning. This works by feeding the embeddings of the source and target task to the discriminator which tries to guess the context. The resulting loss is then (inversely) backpropagated through the encoder.
Applications
Science
Iteratively reconstruct astronomical images
Simulate gravitational lensing for dark matter research.
Model the distribution of dark matter in a particular direction in space and to predict the gravitational lensing that will occur.
Model high energy jet formation and showers through calorimeters of high-energy physics experiments.
Approximate bottlenecks in computationally expensive simulations of particle physics experiments. Applications in the context of present and proposed CERN experiments have demonstrated the potential of these methods for accelerating simulation and/or improving simulation fidelity.
Reconstruct velocity and scalar fields in turbulent flows.
GAN-generated molecules were validated experimentally in mice.
Medical
One of the major concerns in medical imaging is preserving patient privacy. For this reason, researchers often face difficulties in obtaining medical images for their research purposes. GANs have been used to generate synthetic medical images, such as MRI and PET images, to address this challenge.
GANs can be used to detect glaucomatous images, helping early diagnosis, which is essential to avoid partial or total loss of vision.
GANs have been used to create forensic facial reconstructions of deceased historical figures.
Malicious
Concerns have been raised about the potential use of GAN-based human image synthesis for sinister purposes, e.g., to produce fake, possibly incriminating, photographs and videos.
GANs can be used to generate unique, realistic profile photos of people who do not exist, in order to automate creation of fake social media profiles.
In 2019 the state of California considered and passed on October 3, 2019, the bill AB-602, which bans the use of human image synthesis technologies to make fake pornography without the consent of the people depicted, and bill AB-730, which prohibits distribution of manipulated videos of a political candidate within 60 days of an election. Both bills were authored by Assembly member Marc Berman and signed by Governor Gavin Newsom. The laws went into effect in 2020.
DARPA's Media Forensics program studies ways to counteract fake media, including fake media produced using GANs.
Fashion, art and advertising
GANs can be used to generate art; The Verge wrote in March 2019 that "The images created by GANs have become the defining look of contemporary AI art." GANs can also be used to
inpaint photographs
generate fashion models, shadows, photorealistic renders of interior design, industrial design, shoes, etc. Such networks were reported to be used by Facebook.
Some have worked with using GAN for artistic creativity, as "creative adversarial network". A GAN, trained on a set of 15,000 portraits from WikiArt from the 14th to the 19th century, created the 2018 painting Edmond de Belamy, which sold for US$432,500.
GANs were used by the video game modding community to up-scale low-resolution 2D textures in old video games by recreating them in 4k or higher resolutions via image training, and then down-sampling them to fit the game's native resolution (resembling supersampling anti-aliasing).
In 2020, Artbreeder was used to create the main antagonist in the sequel to the psychological web horror series Ben Drowned. The author would later go on to praise GAN applications for their ability to help generate assets for independent artists who are short on budget and manpower.
In May 2020, Nvidia researchers taught an AI system (termed "GameGAN") to recreate the game of Pac-Man simply by watching it being played.
In August 2019, a large dataset consisting of 12,197 MIDI songs each with paired lyrics and melody alignment was created for neural melody generation from lyrics using conditional GAN-LSTM (refer to sources at GitHub AI Melody Generation from Lyrics).
Miscellaneous
GANs have been used to
show how an individual's appearance might change with age.
reconstruct 3D models of objects from images.
generate novel objects as 3D point clouds.
model patterns of motion in video.
inpaint missing features in maps, transfer map styles in cartography or augment street view imagery.
use feedback to generate images and replace image search systems.
visualize the effect that climate change will have on specific houses.
reconstruct an image of a person's face after listening to their voice.
produce videos of a person speaking, given only a single photo of that person.
perform recurrent sequence generation.
History
In 1991, Juergen Schmidhuber published "artificial curiosity": neural networks contesting with each other in a zero-sum game. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. GANs can be regarded as a case where the environmental reaction is 1 or 0 depending on whether the first network's output is in a given set.
Other people had similar ideas but did not develop them similarly. An idea involving adversarial networks was published in a 2010 blog post by Olli Niemitalo. This idea was never implemented and did not involve stochasticity in the generator and thus was not a generative model. It is now known as a conditional GAN or cGAN. An idea similar to GANs was used to model animal behavior by Li, Gauci and Gross in 2013.
Another inspiration for GANs was noise-contrastive estimation, which uses the same loss function as GANs and which Goodfellow studied during his PhD in 2010–2014.
Adversarial machine learning has other uses besides generative modeling and can be applied to models other than neural networks. In control theory, adversarial learning based on neural networks was used in 2006 to train robust controllers in a game theoretic sense, by alternating the iterations between a minimizer policy, the controller, and a maximizer policy, the disturbance.
In 2017, a GAN was used for image enhancement focusing on realistic textures rather than pixel-accuracy, producing a higher image quality at high magnification. In 2017, the first faces were generated. These were exhibited in February 2018 at the Grand Palais. Faces generated by StyleGAN in 2019 drew comparisons with Deepfakes.
See also
References
External links
This Person Does Not Exist photorealistic images of people who do not exist, generated by StyleGAN
This Cat Does Not Exist photorealistic images of cats who do not exist, generated by StyleGAN
Neural network architectures
Cognitive science
Unsupervised learning
Generative artificial intelligence | Generative adversarial network | [
"Engineering"
] | 7,617 | [
"Artificial intelligence engineering",
"Generative artificial intelligence"
] |
50,075,073 | https://en.wikipedia.org/wiki/Modelling%20Condensate%20Distillation%20Coloumn | Distillation is a process that separates the components of a mixture according to their vapour pressures. One fraction leaves overhead and is condensed to form the distillate; the other is the bottom product. The bottom product is mostly liquid, while the overhead fraction can be vapour or an aerosol.
This method requires the components to have different volatilities in order to be separated.
The column consists of three sections: a stripping section, a rectification section, and a feed section.
For rectification and stripping a countercurrent liquid phase must flow through the column, so that liquid and vapour can contact each other on each stage.
The distillation column is fed with a mixture containing the mole fraction xf of the desired compound. The overhead mixture is a gas or an aerosol which contains the mole fraction xD of the desired compound and the bottom product contains a mixture with the fraction xB of the desired compound.
An overhead condenser is a piece of heat-exchange equipment used to condense the mixture leaving the top of the column. Either cooling water or air is used as the cooling agent.
An overhead accumulator is a horizontal pressure vessel containing the condensed mixture.
Pumps can be used to control the reflux to the column.
A reboiler produces the vapour stream in the distillation column. It can be installed internally or externally.
Math model
The total molar holdup on the nth tray, $M_n$, is considered constant.
The imbalances in the input and output flows are accounted for in the component and heat balance equations.
Inlet
Flow rate of the liquid phase (from the tray above) and mole fraction of the desired compound in it are $L_{n+1}$ and $x_{n+1}$.
Flow rate of the vapour phase (from the tray below) and mole fraction of the desired compound in it are $V_{n-1}$ and $y_{n-1}$.
Outlet
Flow rate of the liquid phase and mole fraction of the desired compound in it are $L_n$ and $x_n$.
Flow rate of the vapour phase and mole fraction of the desired compound in it are $V_n$ and $y_n$.
Mass balance
$$\frac{d(M_n x_n)}{dt} = L_{n+1} x_{n+1} + V_{n-1} y_{n-1} - L_n x_n - V_n y_n, \quad \text{with} \quad \frac{dM_n}{dt} = L_{n+1} + V_{n-1} - L_n - V_n.$$
By differentiating and substituting the above equation we get:
$$M_n \frac{dx_n}{dt} = L_{n+1}(x_{n+1} - x_n) + V_{n-1}(y_{n-1} - x_n) - V_n(y_n - x_n).$$
Energy balance
$$\frac{d(M_n h_n)}{dt} = L_{n+1} h_{n+1} + V_{n-1} H_{n-1} - L_n h_n - V_n H_n,$$
where $h$ is the enthalpy of the liquid and $H$ is the enthalpy of the vapour.
By substituting the mass balance equation in the above equation we get the following expression:
$$M_n \frac{dh_n}{dt} = L_{n+1}(h_{n+1} - h_n) + V_{n-1}(H_{n-1} - h_n) - V_n(H_n - h_n).$$
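A minimal Python sketch integrating the component balance above for a single tray, assuming constant holdup (the flows below are chosen so that $L_{n+1} + V_{n-1} = L_n + V_n$) and an assumed constant-relative-volatility equilibrium $y_n = \alpha x_n / (1 + (\alpha - 1) x_n)$; all numerical values are illustrative.

```python
def dxdt(x_n, x_above, y_below, L_in, V_in, L_out, V_out, M, alpha=2.5):
    # Component balance on tray n with constant molar holdup M.
    y_n = alpha * x_n / (1.0 + (alpha - 1.0) * x_n)  # equilibrium vapour
    return (L_in * x_above + V_in * y_below
            - L_out * x_n - V_out * y_n) / M

# Forward-Euler integration with illustrative, steady feed conditions;
# L_in + V_in = L_out + V_out keeps the holdup constant.
x = 0.40
for _ in range(200):
    x += 0.05 * dxdt(x, x_above=0.55, y_below=0.35,
                     L_in=10.0, V_in=12.0, L_out=10.0, V_out=12.0, M=5.0)
print(x)  # approaches the tray's steady-state composition
```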
References
Distillation
Separation processes
Laboratory techniques | Modelling Condensate Distillation Coloumn | [
"Chemistry"
] | 459 | [
"Distillation",
"nan",
"Separation processes"
] |
56,823,699 | https://en.wikipedia.org/wiki/Poisoning%20of%20Sergei%20and%20Yulia%20Skripal | The poisoning of Sergei and Yulia Skripal, also known as the Salisbury Poisonings, was a botched attempt to assassinate Sergei Skripal, a former Russian military officer and double agent for the British intelligence agencies, in the city of Salisbury, England, on 4 March 2018. Sergei and his daughter, Yulia Skripal, were poisoned by means of a Novichok nerve agent. Both spent several weeks in hospital in a critical condition before being discharged. A police officer, Nick Bailey, was also taken into intensive care after attending the incident, and was later discharged.
The British government accused Russia of attempted murder and announced a series of punitive measures against Russia, including the expulsion of diplomats. The UK's official assessment of the incident was supported by 28 other countries which responded similarly. Altogether, an unprecedented 153 Russian diplomats were expelled by the end of March 2018. Russia denied the accusations, expelled foreign diplomats in retaliation for the expulsion of its own diplomats, and accused Britain of the poisoning.
On 30 June 2018, a similar poisoning of two British nationals in Amesbury, north of Salisbury, involved the same nerve agent. Charlie Rowley found a perfume bottle, later discovered to contain the agent, in a litter bin somewhere in Salisbury and gave it to Dawn Sturgess who sprayed it on her wrist. Sturgess fell ill within 15 minutes and died on 8 July, but Rowley, who had also come into contact with the poison, survived. British police believe this incident was not a targeted attack, but a result of the way the nerve agent was disposed of after the poisoning in Salisbury. A public inquiry was launched into the circumstances of Sturgess's death. On 5 September 2018, British authorities identified two Russian nationals, using the names Alexander Petrov and Ruslan Boshirov, as suspected of the Skripals' poisoning, and alleged that they were active officers in Russian military intelligence. Later, investigative website Bellingcat stated that it had positively identified Ruslan Boshirov as being the highly decorated GRU Colonel Anatoliy Chepiga, that Alexander Petrov was Alexander Mishkin, also of the GRU, and that a third GRU officer present in the UK at the time was identified as Denis Vyacheslavovich Sergeev, believed to hold the rank of major general in the GRU. The pattern of his communications while in the UK indicates that he liaised with superior officers in Moscow.
The attempted assassination and subsequent agent exposures were an embarrassment for Putin and for Russia's spying organisation. The operation was allegedly organised by the secret Unit 29155 of the Russian GRU, under the command of Major General Andrei V. Averyanov. On 27 November 2019, the Organisation for the Prohibition of Chemical Weapons (OPCW) added Novichok, the Soviet-era nerve agent used in the attack, to its list of banned substances.
Chronology of events
At 14:40 GMT on 3 March 2018, Yulia Skripal, the 33-year-old daughter of Sergei Skripal, a 66-year-old resident of Salisbury, flew into Heathrow Airport from Sheremetyevo International Airport in Moscow, Russia.
At 09:15 on 4 March Sergei Skripal's burgundy 2009 BMW 320d was seen in the area of London Road, Churchill Way North and Wilton Road at Salisbury.
At 13:30 Skripal's car was seen on Devizes Road on the way towards the town centre.
At 13:40 the Skripals arrived in the upper level car park at the Maltings, Salisbury and then went to the Bishop's Mill pub in the town centre.
At 14:20 they dined at Zizzi on Castle Street, leaving at 15:35.
At 16:15 an emergency services call reported that a man and woman, later identified as Sergei and Yulia, had been found unconscious on a public bench in the centre of Salisbury by the passing Chief Nursing Officer for the British Army and her daughter. An eyewitness saw the woman foaming at the mouth with her eyes wide open but completely white. According to a later British government statement they were "slipping in and out of consciousness on a public bench".
At 17:10, they were taken separately to Salisbury District Hospital by an ambulance and an air ambulance.
At 09:03 the following morning, Salisbury NHS Foundation Trust declared a major incident in response to concerns raised by medical staff; shortly afterwards this became a multi-agency incident named Operation Fairline.
Health authorities checked 21 members of the emergency services and the public for possible symptoms; two police officers were treated for minor symptoms, said to be itchy eyes and wheezing, while one, Detective Sergeant Nick Bailey, who had been sent to Skripal's house, was in a serious condition. On 22 March, Bailey was discharged from the hospital. In a statement he said "normal life for me will probably never be the same" and thanked the hospital staff.
On 26 March, Skripal and his daughter were reported to still be critically ill. On 29 March it was announced that Yulia's condition was improving and she was no longer in a critical condition. After three weeks in a critical condition, Yulia regained consciousness and was able to speak. Sergei was also in a critical condition until he regained consciousness one month after the attack. On 5 April, doctors said that Sergei was no longer in critical condition and was responding well to treatment. On 9 April, Yulia was discharged from hospital and taken to a secure location. On 18 May, Sergei Skripal was discharged from the hospital too. On 23 May, a handwritten letter and a video statement by Yulia were released to the Reuters news agency for the first time after the poisoning. She stated that she was lucky to be alive after the poisoning and thanked the staff of the Salisbury hospital. She described her treatment as slow, heavy and extremely painful and mentioned a scar on her neck, apparently from a tracheotomy. She expressed hope that someday she would return to Russia. She thanked the Russian embassy for its offer of assistance but said she and her father were "not ready to take it".
On 5 April, British authorities said that inside Skripal's house, which had been sealed by the police, two guinea pigs were found dead by vets, when they were allowed in, along with a cat in a distressed state, which had to be put down.
On 22 November the first interview with DS Bailey was released, in which he reported that he had been poisoned despite having inspected the Skripals' house wearing a forensic suit. In addition to the poisoning, Bailey and his family lost their home and all their possessions because of contamination. Investigators said that the perfume bottle containing the Novichok nerve agent, which was later found in a bin, had contained enough of the agent to potentially kill thousands of people.
In early 2019, building contractors built a scaffolding "sealed frame" over the house and the garage of Skripal's home. A military team then dismantled and removed the roofs on both buildings over the course of two weeks. Cleaning and decontamination was followed by rebuilding over a period of four months. On 22 February 2019, Government officials announced that the last of the 12 sites that had been undergoing an intense and hazardous clean-up – Skripal's house – had been judged safe.
In May 2019, Sergei Skripal made a phone call and left a voice message to his niece Viktoria living in Russia. This was the first time after the poisoning that his voice had been heard by the public.
In August 2019 it was confirmed that a second police officer had been poisoned while investigating, but only in trace amounts.
Investigation
The first public response to the poisoning came on 6 March. It was agreed under the Counter Terrorism Policing network that the Counter Terrorism Command based within the Metropolitan Police would take over the investigation from Wiltshire Police. Assistant Commissioner Mark Rowley, head of Counter Terrorism Policing, appealed for witnesses to the incident following a COBR meeting chaired by Home Secretary Amber Rudd.
Samples of the nerve agent used in the attack tested positive at the Defence Science and Technology Laboratory at Porton Down for a "very rare" nerve agent, according to the UK Home Secretary.
180 military experts in chemical warfare defence and decontamination, as well as 18 vehicles, were deployed on 9 March to assist the Metropolitan Police to remove vehicles and objects from the scene and look for any further traces of the nerve agent. The personnel were drawn mostly from the Army, including instructors from the Defence CBRN Centre and the 29 Explosive Ordnance Disposal and Search Group, as well as from the Royal Marines and Royal Air Force. The vehicles included TPz Fuchs operated by Falcon Squadron from the Royal Tank Regiment. On 11 March, the UK government advised those present at either The Mill pub or the Zizzi restaurant in Salisbury on 4 and 5 March to wash or wipe their possessions, emphasising that the risk to the general public was low.
Several days later, on 12 March, Prime Minister Theresa May said the agent had been identified as one of the Novichok family of agents, believed to have been developed in the 1980s by the Soviet Union. According to the Russian ambassador to the UK, Alexander Yakovenko, the British authorities identified the agent as A-234, derived from an earlier version known as A-232.
By 14 March, the investigation was focused on Skripal's home and car, a bench where the two fell unconscious, a restaurant in which they dined and a pub where they had drinks. A recovery vehicle was removed by the military from Gillingham in Dorset on 14 March, in connection with the poisoning.
Subsequently, there was speculation within the British media that the nerve agent had been planted in one of the personal items in Yulia Skripal's suitcase before she left Moscow for London, and in US media that it had been planted in their car.
Ahmet Üzümcü, Director-General of the Organisation for the Prohibition of Chemical Weapons (OPCW), said on 20 March that it will take "another two to three weeks to finalise the analysis" of samples taken from the poisoning of Skripal. On 22 March, the Court of Protection gave permission for new blood samples to be obtained from Yulia and Sergei Skripal for use by the OPCW. By 28 March, the police investigation concluded that the Skripals were poisoned at Sergei's home, with the highest concentration being found on the handle of his front door. On 12 April the OPCW confirmed the UK's analysis of the type of nerve agent and reported it was of a "high purity", stating that the "name and structure of the identified toxic chemical are contained in the full classified report of the Secretariat, available to States Parties".
A declassified letter from the UK's national security adviser, Sir Mark Sedwill, to NATO Secretary General Jens Stoltenberg, stated Russian military intelligence hacked Yulia Skripal's email account since at least 2013 and tested methods for delivering nerve agents including on door handles.
The Department for Environment, Food and Rural Affairs confirmed the nerve agent was delivered "in a liquid form". They said eight sites required decontamination, which would take several months to complete and cost millions of pounds. The BBC reported experts said the nerve agent does not evaporate or disappear over time; intense cleaning with caustic chemicals is required to get rid of it. The Skripals' survival was possibly due to the weather – there had been heavy fog and high humidity, and according to its inventor and other scientists, moisture weakens the potency of this type of toxin.
On 22 April 2018, it was reported that British counter-terror police had identified a suspect in the poisoning: a former Federal Security Service (FSB) officer (reportedly a 54-year-old former FSB captain) who acted under several code names including "Gordon" and "Mihails Savickis". According to detectives, he led a team of six Russian assassins who organised the chemical weapons attack. Sedwill reported on 1 May 2018 however that UK intelligence and police agencies had failed to identify the individual or individuals who carried out the attack.
On 3 May 2018, the head of the OPCW, Ahmet Üzümcü, informed the New York Times that he had been told that about 50–100 grams of the nerve agent was thought to have been used in the attack, which indicated it was likely created for use as a weapon and was enough to kill a large number of people. The next day however the OPCW made a correcting statement that the "quantity should probably be characterised in milligrams", though "the OPCW would not be able to estimate or determine the amount of the nerve agent that was used".
On 19 July the Press Association reported that police believed they had identified "several Russians" as the suspected perpetrators of the attack. They had been identified through CCTV, cross-checked with border entry data.
On 6 August 2018, it was reported that the British government was "poised to submit an extradition request to Moscow for two Russians suspected of carrying out the Salisbury nerve agent attack". The Metropolitan Police used two super recognisers to identify the suspects after trawling through up to 5,000 hours of CCTV footage from Salisbury and numerous airports across the country.
British Prime Minister Theresa May announced in the Commons the same day that British intelligence services had identified the two suspects as officers in the G. U. Intelligence Service (formerly known as GRU) and the assassination attempt was not a rogue operation and was "almost certainly" approved at a senior level of the Russian government. May also said Britain would push for the EU to agree new sanctions against Russia.
On 5 September 2018, the Russian news site Fontanka reported that the numbers on leaked passport files for Petrov and Boshirov are only three digits apart, and fall in a range that includes the passport files for a Russian military official expelled from Poland for spying. It is not known how the passport files were obtained, but Andrew Roth, the Moscow correspondent for The Guardian, commented that "If the reporting is confirmed, it would be a major blunder by the intelligence agency, allowing any country to check passport data for Russians requesting visas or entering the country against a list of nearly 40 passport files of suspected GRU officers." On 14 September 2018, the online platforms Bellingcat and The Insider Russia observed that in Petrov's leaked passport files, there is no record of a residential address or any identification papers prior to 2009, suggesting that the name is an alias created that year; the analysis also noted that Petrov's dossier is stamped "Do not provide any information" and has the handwritten annotation "S.S.," a common abbreviation in Russian for "top secret". On 15 September 2018, the Russian opposition newspaper Novaya Gazeta reported finding in Petrov's passport files a cryptic number that seems to be a telephone number associated with the Russian Defence Ministry, most likely the Military Intelligence Directorate.
As part of the announcement Scotland Yard and the Counter Terrorism Command released a detailed track of the individuals' 48 hours in the UK. This covered their arrival from Moscow at Gatwick Airport, a trip to Salisbury by train the day before the attack, stated by police to be for reconnaissance, a trip to Salisbury by train on the day of the attack, and return to Moscow via Heathrow Airport. The two spent both nights at the City Stay Hotel, next to Bow Church DLR station in Bow, East London. Novichok was found in their hotel room after police sealed it off on 4 May 2018. Neil Basu, National Lead for Counter Terrorism Policing said that tests were carried out on their hotel room and it was "deemed safe".
On 26 September 2018, the real identity of the suspect named by police as Ruslan Boshirov was revealed as Colonel Anatoliy Vladimirovich Chepiga by The Daily Telegraph, citing reporting by itself and Bellingcat, with Petrov having a more junior rank in the GRU. The 39-year-old was made a Hero of the Russian Federation by decree of the President in 2014. Two European security sources confirmed that the details were accurate. The BBC commented: "The BBC understands there is no dispute over the identification." The Secretary of State for Defence Gavin Williamson wrote: "The true identity of one of the Salisbury suspects has been revealed to be a Russian Colonel. I want to thank all the people who are working so tirelessly on this case." However, that statement was subsequently deleted from Twitter.
On 8 October 2018, the real identity of the suspect named by police as Alexander Petrov was revealed as Alexander Mishkin.
On 22 November 2018, more CCTV footage, with the two suspects walking in Salisbury, was published by the police.
On 19 December 2018, Mishkin (a.k.a. Petrov) and Chepiga (a.k.a. Boshirov) were added to the sanctions list of the United States Treasury Department, along with other 13 members of the GRU agency.
On 6 January 2019, the Telegraph reported that the British authorities had established all the essential details of the assassination attempt, including the chain of command that leads up to Vladimir Putin. In February, a third GRU officer present in the UK at the time, Denis Sergeev, was identified. In September 2021, the BBC reported that Crown Prosecution Service had authorised charges against the three men but that formal charges could not be laid unless the men were arrested. The charges authorised against the three men are conspiracy to murder, attempted murder, causing grievous bodily harm and use and possession of a chemical weapon.
Response of the United Kingdom
Within days of the attack, political pressure began to mount on Theresa May's government to take action against the perpetrators, and most senior politicians appeared to believe that the Russian government was behind the attack. The situation was additionally sensitive for Russia as Russian president Vladimir Putin was facing his fourth presidential election in mid-March, and Russia was to host the 2018 FIFA World Cup football competition in June. When giving a response to an urgent question from Tom Tugendhat, the chairman of the Foreign Affairs Select Committee of the House of Commons, who suggested that Moscow was conducting "a form of soft war against the West", Foreign Secretary Boris Johnson on 6 March said the government would "respond appropriately and robustly" if the Russian state was found to have been involved in the poisoning. UK Home Secretary Amber Rudd said on 8 March 2018 that the use of a nerve agent on UK soil was a "brazen and reckless act" of attempted murder "in the most cruel and public way".
Prime Minister Theresa May told the House of Commons on 12 March that it was "highly likely" that Russia was responsible for the poisoning, and that there were only two plausible explanations: either it was a direct act by the Russian state, or the Russian government had lost control of the nerve agent and allowed it to get into the hands of others.
May also said that the UK government requested that Russia explain which of these two possibilities it was by the end of 13 March 2018. She also said: "[T]he extra-judicial killing of terrorists and dissidents outside Russia were given legal sanction by the Russian Parliament in 2006. And of course Russia used radiological substances in its barbaric assault on Mr Litvinenko." She said that the UK government would "consider in detail the response from the Russian State" and in the event that there was no credible response, the government would "conclude that this action amounts to an unlawful use of force by the Russian State against the United Kingdom" and measures would follow. British media billed the statement as "Theresa May's ultimatum to Putin".
On 13 March 2018, UK Home Secretary Amber Rudd ordered an inquiry by the police and security services into alleged Russian state involvement in 14 previous suspicious deaths of Russian exiles and businessmen in the UK.
May unveiled a series of measures on 14 March 2018 in retaliation for the poisoning attack, after the Russian government refused to meet the UK's request for an account of the incident. One of the chief measures was the expulsion of 23 Russian diplomats which she presented as "actions to dismantle the Russian espionage network in the UK", as these diplomats had been identified by the UK as "undeclared intelligence agents". The BBC reported other responses, including:
Increasing checks on private flights, customs and freight
Freezing Russian state assets where there is evidence that they may be used to threaten the life or property of UK nationals or residents
Plans to consider new laws to increase defences against "hostile state activity"
Ministers and the British royal family boycotting the 2018 FIFA World Cup in Russia
Suspending all high-level bilateral contacts between the UK and Russia
Retraction of the state invitation to Russia's foreign minister Sergey Lavrov
A new £48-million chemical weapons defence centre
Offering voluntary vaccinations against anthrax to British troops who are held at high readiness so that they are ready to deploy to areas where there is risk of this type of attack
May said that some measures which the government planned could "not be shared publicly for reasons of national security". In his parliamentary response to May's statement, Jeremy Corbyn cast doubt on blaming the attack on Russia before the results of an independent investigation were available, which provoked criticism from some MPs, including members of his own party. A few days later, Corbyn said he was satisfied that the evidence pointed to Russia. He supported the expulsion but argued that a crackdown on money laundering by UK financial firms on behalf of Russian oligarchs would be a more effective measure against "the Putin regime" than the Conservative government's plans. Corbyn pointed to the pre-Iraq War judgements about Iraq and weapons of mass destruction as reason to be suspicious.
The United Nations Security Council called an urgent meeting on 14 March 2018 on the initiative of the UK to discuss the Salisbury incident. According to the Russian mission's press secretary, the draft press statement introduced by Russia at the United Nations Security Council meeting was blocked by the UK. The UK and the US blamed Russia for the incident during the meeting, with the UK accusing Russia of breaking its obligations under the Chemical Weapons Convention. Separately, the White House fully supported the UK in attributing the attack to Russia, as well as the punitive measures taken against Russia. The White House also accused Russia of undermining the security of countries worldwide.
The UK, and subsequently NATO, requested Russia provide "full and complete disclosure" of the Novichok programme to the OPCW. On 14 March 2018, the government stated it would supply a sample of the substance used to the OPCW once UK legal obligations from the criminal investigation permitted.
Boris Johnson said on 16 March that it was "overwhelmingly likely" that the poisoning had been ordered directly by Russian president Vladimir Putin, which marked the first time the British government accused Putin of personally ordering the poisoning. According to the UK Foreign Office, the UK attributed the attack to Russia based on Porton Down's determination that the chemical was Novichok, additional intelligence, and a lack of alternative explanations from Russia. The Defence Science and Technology Laboratory announced that it was "completely confident" that the agent used was Novichok, but they still did not know the "precise source" of the agent.
The UK had held an intelligence briefing with its allies in which it stated that the Novichok chemical used in the Salisbury poisoning was produced at a chemical facility in the town of Shikhany, Saratov Oblast, Russia.
Response of Russia
Russian government
On 6 March 2018 Andrey Lugovoy, deputy of Russia's State Duma and alleged killer of Alexander Litvinenko, in his interview with the Echo of Moscow said: "Something constantly happens to Russian citizens who either run away from Russian justice, or for some reason choose for themselves a way of life they call a change of their Motherland. So the more Britain accepts on its territory every good-for-nothing, every scum from all over the world, the more problems they will have."
Russian Foreign Minister Sergey Lavrov on 9 March rejected Britain's claim of Russia's involvement in Skripal's poisoning and accused the United Kingdom of spreading propaganda. Lavrov said that Russia was "ready to cooperate" and demanded access to the samples of the nerve-agent which was used to poison Skripal. The request was rejected by the British government.
Following Theresa May's 12 March statement in Parliament, in which she gave President Putin's administration until midnight of the following day to explain how a former spy had been poisoned in Salisbury or else she would conclude it was an "unlawful use of force" by the Russian state against the UK, Lavrov told the Russian press on 13 March that the procedure stipulated by the Chemical Weapons Convention should be followed, under which Russia was entitled to access to the substance in question and ten days to respond.
On 17 March, Russia announced that it was expelling 23 British diplomats and ordered the closure of the UK's consulate in St Petersburg and the British Council office in Moscow, stopping all British Council activities in Russia.
Russia has officially declared the poisoning to be a fabrication and a "grotesque provocation rudely staged by the British and U.S. intelligence agencies" to undermine the country.
The Russian government and embassy of Russia in the United Kingdom repeatedly requested access to the Skripals, and sought to offer consular assistance. These requests and offers were respectively denied or declined.
On 5 September 2018 Putin's Press Secretary, Dmitry Peskov, stated that Russia had not received any official request from Britain for assistance in identifying the two suspected Russian GRU military intelligence officers that Scotland Yard believed carried out the Skripal attack. The same day, the Foreign Ministry of Russia asserted that the UK ambassador in Moscow, Laurie Bristow, had said that London would not provide Russia with the suspects' fingerprints, passport numbers, visa numbers, or any extra data.
On 12 September 2018, Putin, while answering questions at the plenary meeting of the 4th Eastern Economic Forum in Russia's Far Eastern port city of Vladivostok, said that the identities of both men London suspected of involvement in the Skripal case were known to the Russian authorities, that both were civilians, and that they had done nothing criminal. He also said he would like the men to come forward to tell their story. In a 13 September 2018 interview on the state-funded television channel RT, the accused claimed to be sports nutritionists who had gone to Salisbury merely to see the sights and look for nutrition products, saying that they took a second day-trip to Salisbury because slush had dampened their first one.
On 26 September, the same day one of the suspects was identified as a GRU colonel, Lavrov urged the British authorities to cooperate in the investigation of the case, said Britain had given no proof of Russia's guilt, and suggested that Britain had something to hide.
On 25 September, the FSB began searching for Ministry of Internal Affairs (MVD) officers who had provided journalists with foreign passport and flight information about the suspects.
Russian state media
For a few days following the poisoning, the story was discussed by websites, radio stations and newspapers, but Russia's main state-run national TV channels largely ignored the incident.
Eventually, on 7 March, anchor Kirill Kleimyonov of the state television station Channel One Russia's current affairs programme Vremya mentioned the incident by attributing the allegation to Boris Johnson. After speaking of Johnson disparagingly, Kleimyonov said that being "a traitor to the motherland" was one of the most hazardous professions and warned: "Don't choose England as a next country to live in. Whatever the reasons, whether you're a professional traitor to the motherland or you just hate your country in your spare time, I repeat, no matter, don't move to England. Something is not right there. Maybe it's the climate, but in recent years there have been too many strange incidents with a grave outcome. People get hanged, poisoned, they die in helicopter crashes and fall out of windows in industrial quantities." Kleimyonov's commentary was accompanied by a report highlighting previous suspicious Russia-related deaths in the UK, namely those of financier Alexander Perepilichny, businessman Boris Berezovsky, ex-FSB officer Alexander Litvinenko and radiation expert Matthew Puncher. Puncher, who had discovered that Litvinenko was poisoned by polonium, died in 2016, five months after a trip to Russia.
Dmitry Kiselyov, pro-Kremlin TV presenter, said on 11 March that the poisoning of Sergei Skripal, who was "completely wrung out and of little interest" as a source, was only advantageous to the British to "nourish their Russophobia" and organise the boycott of the FIFA World Cup scheduled for June 2018. Kiselyov referred to London as a "pernicious place for Russian exiles".
The prominent Russian television hosts' warnings to Russians living in the UK were echoed by a similar direct warning from a senior member of the Russian Federation Council, Andrey Klimov, who said: "It's going to be very unsafe for you."
Claims made by Russian media were fact-checked by UK media organisations.
An interview with two men claiming to be the suspects named by the UK was aired on RT on 13 September 2018 with RT editor Margarita Simonyan. They said they were ordinary tourists who had wished to see Stonehenge, Old Sarum, and the "famous ... 123-metre spire" of Salisbury Cathedral. They also said that they "maybe approached Skripal's house, but we didn't know where it was located," and denied using Novichok, which they had allegedly transported in a fake perfume bottle, saying, "Is it silly for decent lads to have women's perfume? The customs are checking everything, they would have questions as to why men have women's perfume in their luggage." Although Simonyan avoided most questions about the two men's backgrounds, she hinted that they might be gay by asking, "All footage features you two together ... What do you have in common that you spend so much time together?" The New York Times interpreted the hint by noting that "The possibility that Mr. Petrov and Mr. Boshirov could be gay would, for a Russian audience, immediately rule out the possibility that they serve as military intelligence officers."
On 22 August 2022, the editor-in-chief of Kremlin-backed RT network, Margarita Simonyan, appeared to lend support to the suggestion that Russia had been involved in the poisoning, with her remark "I am sure we can find professionals willing to admire the famous spires in the vicinity of Tallinn" – seen as a reference to the agents' claims that they were sightseeing in Salisbury.
Chemical weapons experts and intelligence
Porton Down
On 3 April 2018 Gary Aitkenhead, the chief executive of the Government's Defence Science and Technology Laboratory (Dstl) at Porton Down, which was responsible for testing the substance involved in the case, said they had established the agent was Novichok or from that family but had been unable to verify the "precise source" of the nerve agent, and that they had "provided the scientific info to Government who have then used a number of other sources to piece together the conclusions you have come to". Aitkenhead refused to comment on whether the laboratory had developed or maintained stocks of Novichok. He also dismissed speculation that the substance could have come from Porton Down: "There is no way anything like that could have come from us or left the four walls of our facility." Aitkenhead stated the creation of the nerve agent was "probably only within the capabilities of a state actor", and that there was no known antidote.
Former Russian scientists and intelligence officers
Vil Mirzayanov, a former Soviet Union scientist who worked at the research institute that developed the Novichok class of nerve agents and lives in the United States, believes that hundreds of people could have been affected by residual contamination in Salisbury. He said that Sergei and Yulia Skripal, if poisoned with Novichok, would be left with debilitating health issues for the rest of their lives. He also criticised the response of Public Health England, saying that washing personal belongings was insufficient to remove traces of the chemical.
Two other Russian scientists who now live in Russia and have been involved in Soviet-era chemical weapons development, Vladimir Uglev and Leonid Rink, were quoted as saying that Novichok agents had been developed in the 1970s–1980s within the programme that was officially titled FOLIANT, while the term Novichok referred to a whole system of chemical weapons use; they, as well as Mirzayanov, who published Novichok's formula in 2008, also noted that Novichok-type agents might be synthesised in other countries. In 1995, Leonid Rink received a one-year suspended sentence for selling Novichok agents to unnamed buyers, soon after the fatal poisoning of Russian banker Ivan Kivilidi by Novichok.
A former KGB and FSB officer, Boris Karpichkov, who operated in Latvia in the 1990s and fled to the UK in 1998, told ITV's Good Morning Britain that on 12 February 2018, three weeks before the Salisbury attack and exactly on his birthday, he received a message over a burner phone from "a very reliable source" in the FSB telling him that "something bad [wa]s going to happen with [him] and seven other people, including Mr. Skripal", whom he then knew nothing about. Karpichkov said he disregarded the message at the time, thinking it was not serious, as he had previously received such messages. According to Karpichkov, the FSB's list includes the names of Oleg Gordievsky and William Browder.
Spiez Laboratory in Switzerland
The Swiss Federal Intelligence Service announced on 14 September 2018 that two Russian spies had been caught in the Netherlands and expelled earlier in the year for attempting to hack into the Spiez Laboratory in the Swiss town of Spiez, a designated lab of the OPCW that had been tasked with confirming that the samples of poison collected in Salisbury were Novichok. The spies were discovered through a joint investigation by the Swiss, Dutch, and British intelligence services. The two men expelled were not the same as the Salisbury suspects.
Response from other countries and organisations
US government
Following Theresa May's statement in Parliament, the US Secretary of State Rex Tillerson released a statement on 12 March that fully supported the stance of the UK government on the poisoning attack, including "its assessment that Russia was likely responsible for the nerve agent attack that took place in Salisbury". The following day, US President Donald Trump said that Russia was likely responsible.
United States Ambassador to the United Nations Nikki Haley at the Security Council briefing on 14 March 2018 stated: "The United States believes that Russia is responsible for the attack on two people in the United Kingdom using a military-grade nerve agent".
Following the United States National Security Council's recommendation, President Trump, on 26 March, ordered the expulsion of sixty Russian diplomats (referred to by the White House as "Russian intelligence officers") and the closure of the Russian consulate in Seattle. The action was cast as being "in response to Russia's use of a military-grade chemical weapon on the soil of the United Kingdom, the latest in its ongoing pattern of destabilising activities around the world".
On 8 August, five months after the poisoning, the US government agreed to place sanctions on Russian banks and exports. On 6 August, the US State Department concluded that Russia was behind the poisoning. The sanctions, which are enforced under the Chemical and Biological Weapons Control and Warfare Elimination Act of 1991 (CBW Act), were planned to come into effect on 27 August. However, these sanctions were not implemented by the Trump administration.
European Union and member states
European Commission Vice-President Frans Timmermans argued for "unequivocal, unwavering and very strong" European solidarity with the United Kingdom when speaking to lawmakers in Strasbourg on 13 March. Federica Mogherini, the High Representative of the Union for Foreign Affairs and Security Policy, expressed shock and offered the bloc's support. MEP and leader of the Alliance of Liberals and Democrats for Europe in the European Parliament Guy Verhofstadt proclaimed solidarity with the British people.
During a meeting in the Foreign Affairs Council on 19 March, all foreign ministers of the European Union declared in a joint statement that the "European Union expresses its unqualified solidarity with the UK and its support, including for the UK's efforts to bring those responsible for this crime to justice." In addition, the statement also pointed out that "The European Union takes extremely seriously the UK Government's assessment that it is highly likely that the Russian Federation is responsible."
Norbert Röttgen, a former federal minister in Angela Merkel's government and current chairman of Germany's parliamentary foreign affairs committee, said the incident demonstrated the need for Britain to review its open-door policy towards Russian capital of dubious origin.
Sixteen EU countries expelled 33 Russian diplomats on 26 March.
The European Union officially sanctioned four Russians suspected of carrying out the attack on 21 January 2019. The head of the GRU, Igor Kostyukov, and the deputy head, Vladimir Alexseyev, were both sanctioned along with Mishkin and Chepiga. The sanctions banned them from travelling to the EU, froze any assets they held there, and banned any person or company in the EU from providing any financial support to those sanctioned.
Other non-EU countries
Albania, Australia, Canada, Georgia, North Macedonia, Moldova, Norway and Ukraine expelled a total of 27 Russian diplomats who were believed to have been intelligence officers. Australia's Malcolm Turnbull said, "We responded with the solidarity we've always shown when Britain's freedoms have been challenged." The New Zealand Government also issued a statement supporting the actions, noting that it would have expelled any Russian intelligence agents who had been detected in the country.
NATO
NATO issued an official response to the attack on 14 March. The alliance expressed its deep concern over the first offensive use of a nerve agent on its territory since its foundation and said that the attack was in breach of international treaties. It called on Russia to fully disclose its research of the Novichok agent to the OPCW.
Jens Stoltenberg, NATO Secretary General, announced on 27 March that NATO would be expelling seven Russian diplomats from the Russian mission to NATO in Brussels. In addition, three unfilled positions at the mission were denied accreditation by NATO. Russia blamed the US for the NATO response.
Joint responses
The leaders of France, Germany, the United States and the United Kingdom released a joint statement on 15 March which supported the UK's stance on the incident, stating that it was "highly likely that Russia was responsible" and calling on Russia to provide complete disclosure to the OPCW concerning its Novichok nerve agent program. On 19 March, the European Union also issued a statement strongly condemning the attack and stating it "takes extremely seriously the UK Government's assessment that it is highly likely that the Russian Federation is responsible".
On 6 September 2018, Canada, France, Germany and the United States issued a joint statement saying they had "full confidence" that the Salisbury attack was orchestrated by Russia's Main Intelligence Directorate and "almost certainly approved at a senior government level" and urged Russia to provide full disclosure of its Novichok programme to the OPCW.
Expulsion of diplomats
By the end of March 2018 a number of countries and other organisations expelled a total of more than 150 Russian diplomats in a show of solidarity with the UK. According to the BBC it was "the largest collective expulsion of Russian intelligence officers in history".
The UK expelled 23 Russian diplomats on 14 March 2018. Three days later, Russia expelled an equal number of British diplomats and ordered the closure of the UK consulate in St. Petersburg and of the British Council in Russia. On 26 March, Russian diplomats were expelled by sixteen EU nations as well as the US, Canada, Ukraine and Albania. The following day, several nations inside and outside of the EU, and NATO, responded similarly. By 30 March, Russia had expelled an equal number of diplomats of most nations who had expelled Russian diplomats. By that time, Belgium, Montenegro, Hungary and Georgia had also expelled one or more Russian diplomats. Additionally, on 30 March, Russia reduced the size of the total UK mission's personnel in Russia to match that of the Russian mission to the UK.
Bulgaria, Luxembourg, Malta, Portugal, Slovakia, Slovenia and the European Union itself have not expelled any Russian diplomats but have recalled their ambassadors from Russia for consultations. Furthermore, Iceland decided to diplomatically boycott the 2018 FIFA World Cup held in Russia.
This cooperation of countries for the mass expulsions of Russian diplomats was used again just four years later in 2022 as the format for the diplomatic expulsions during the Russo-Ukrainian War.
Notes
4 diplomats expelled; 3 pending applications declined.
NATO: 7 diplomats expelled and 3 pending applications declined; the maximum size of the Russian delegation was reduced by 10, from 30 to 20.
United States: 48 Russian diplomats expelled from Washington, D.C. and 12 expelled from New York.
Aftermath
Russia
The failed poisoning of the Skripals became an embarrassment for Putin and caused severe damage to Russia's spying organisations. Once Bellingcat exposed the agents' names in September, Moscow targeted interior ministry leaks that may have helped expose dozens of undercover operatives. The exposure also prompted fury in the Kremlin, the result of which was a purge in the senior ranks of the GRU. A number of botched GRU operations were also revealed in October: the Sandworm cyber unit had attempted unsuccessfully to hack the UK Foreign Office and the Porton Down facility within a month of the poisonings. Another hack was attempted in April, this time on the headquarters of the Organisation for the Prohibition of Chemical Weapons (OPCW) in the Netherlands, which was investigating the poisonings in the UK as well as the Douma chemical attack in Syria. Four Russian intelligence officers, believed to have been part of a GRU "cleanup" unit for the earlier failed operations, travelled to The Hague on diplomatic passports. The attempt was thwarted by Dutch military intelligence, who had been tipped off by British intelligence officials. The four tried – and failed – to destroy their equipment and were immediately put on a plane back to Moscow. Soon after these events Vladimir Putin's tone changed; at the Russian Energy Forum in Moscow he described Skripal as "scum and a traitor to the motherland".
The 2018 disclosure that GRU agents had been issued passports with sequential numbers led to a number of other Russian agents fleeing the West and returning to Russia, including Maria Adela Kuhfeldt Rivera – real name Olga Kolobova – a deep-cover agent in Naples. Another was Sergey Vladimirovich Cherkasov, arrested and jailed in Brazil in 2022.
Russia's chief of military intelligence, Igor Korobov, and his agency thus came under heavy criticism. Putin was angered by the identification of the agents and the botched failures, and in a meeting reportedly scolded Korobov. Soon afterwards, according to journalist Sergey Kanev, Korobov collapsed at home in sudden "ill health", and he died in November after a "long illness". GRU defector Viktor Suvorov claimed that "Korobov was murdered, and everyone in the GRU understood why". Alexander Golts, a Russian military analyst, admitted that agents had "got a bit too relaxed", adding that "such sloppy work is the reality".
In February 2019, Bellingcat confirmed that a third GRU officer present in the UK at the time was identified as Denis Vyacheslavovich Sergeev, believed to hold the rank of major general in the GRU. The pattern of his communications while in the UK indicated that he liaised with superior officers in Moscow. In September 2021, Bellingcat revealed that "Russian authorities had taken the unusual measure of erasing any public records" of Sergeev's existence, as well as the other two main suspects in the Skripal poisoning. Sergeev is said to have had a senior position to Chepiga and Mishkin and was likely in charge of coordinating the operation in Salisbury.
In April 2021, Mishkin and Chepiga were named as having been involved in the 2014 Vrbětice ammunition warehouses explosions in the Czech Republic.
Later in the year, The Moscow Times reported on a public opinion survey about the poisonings.
United Kingdom
In the UK, the response to the poisonings was viewed as a success, although there were initial criticisms of the intelligence failures that allowed the supposed GRU agents to enter the UK in the first place. After the Litvinenko poisoning there had been calls for more robust action against Russia should a similar event unfold, and the Salisbury poisonings put that robustness into action, rallying significant solidarity from the West. In addition, the response exposed many Russian intelligence officers, and British officials believe it did real damage to Russian intelligence operations, even if only in the short term.
Some of the emergency vehicles used in the response to the poisoning were buried in a landfill site near Cheltenham. In June 2019 it was revealed emergency services spent £891,000 on replacing and discarding contaminated vehicles. South Western ambulance service discarded eight vehicles, comprising three ambulances and five paramedic cars. Wiltshire Police destroyed a total of 16 vehicles at a cost of £460,000.
On 13 September 2018, Chris Busby, a retired research scientist, who is regularly featured as an expert on the Russian government-controlled RT television network, was arrested after his home in Bideford was raided by police. Busby was an outspoken critic of the British Government's handling of the Salisbury poisoning. In one video he said: "Just to make it perfectly clear, there's no way that there's any proof that the material that poisoned the Skripals came from Russia." Busby was held for 19 hours under the Explosive Substances Act 1883, before being released with no further action. Following his release, Busby told the BBC he believed that the fact that two of the officers who had raided his property had felt unwell was explained by "psychological problems associated with their knowledge of the Skripal poisoning".
On 16 September, fears of Novichok contamination flared up again after two people fell ill at a Prezzo restaurant, a short distance from the Zizzi location where the Skripals had eaten before collapsing. The restaurant, a nearby pub, and surrounding streets were cordoned off, with some patrons under observation or unable to leave the area. The next day, the police said there was "nothing to suggest that Novichok" was the cause of the two people falling ill. However, on 19 September, one of the apparent victims, Anna Shapiro, claimed in The Sun newspaper that the incident had been an attempted assassination against her and her husband by Russia. This article was later removed from The Sun "for legal reasons" and the police began to investigate the incident as a "possible hoax" after the couple were discharged from hospital.
In 2020, senior British officials told The Times that Sergei and Yulia Skripal had been given new identities and state support to start a new life. Both had relocated to New Zealand under the assumed identities.
In May 2021 Nick Bailey, who continued to feel the effects of his poisoning and had retired early as a result, began personal injury litigation against Wiltshire Police; an undisclosed settlement was reached in April 2022.
Recovery money
As of 17 October 2018, a total of £7.5 million had been pledged by government in support of the city and to support businesses, boost tourism and to cover unexpected costs. Wiltshire Council had spent or pledged £7,338,974 on recovery, and a further £500,000 "was in the pipeline":
£733,381 towards unexpected closure and loss of footfall to businesses
£404,024 in revenue grants for 74 businesses
£99,891 in capital grants
£229,446 in business rate relief for 56 businesses
£210,491 on events to boost tourism
£500,000 from the Department of Digital, Culture, Media and Sport
£4,000 on dry cleaning or disposal of clothes believed to be contaminated by Novichok
£1 million towards keeping contaminated sites safe
£570,000 recovery money to cover the costs of free parking and free park and ride services
£4.1 million pledged by the Home Office to cover Wiltshire Police's costs. A council commissioner said the total policing cost had exceeded £10 million; with £6.6 million allocated for funding the police force, he said he hoped to "recoup the full amount from central government".
Recognition of responders
Deputy Chief Constable Paul Mills and Superintendent Dave Minty of Wiltshire Police were each awarded the Queen's Police Medal in the 2020 New Year Honours for their roles in responding to the incident.
The combined Wiltshire Emergency Services received Wiltshire Life's 2019 "Pride of Wiltshire" award.
Media depictions
The Salisbury Poisonings, a three-part dramatisation of the events in Salisbury and Amesbury, with a focus on the response of local officials and the local community, was broadcast on BBC One in June 2020 and later released on Netflix in December 2021.
See also
2018 Amesbury poisonings
Intelligence agencies of Russia
Assassination of Kim Jong-nam by North Korea with VX nerve agent
Poisoning of Alexander Litvinenko putatively by Russian intelligence agents with Polonium-210
Poisoning of Alexei Navalny, Russian politician poisoned with Novichok
Bulgarian umbrella used to assassinate Georgi Markov in London
Lists of poisonings
Russian spies in the Russo-Ukrainian War
Diplomatic expulsions during the Russo-Ukrainian War
Our Guys in Salisbury
Notes
References
External links
Report from the Russian Embassy to the UK, "Salisbury Unanswered Questions," 4 March 2019
"Salisbury & Amesbury Investigation – Counter Terrorism Policing", 5 September 2018
"Russian spy: What we know so far", BBC, 19 March 2018
"Amanda Erickson: The long, terrifying history of Russian dissidents being poisoned abroad", The Washington Post'', 7 March 2018
"Joel Gunter: Sergei Skripal and the 14 deaths under scrutiny", bbc.com, 7 March 2018
Bellingcat's investigative page for the Chepiga identification – Skripal Suspect Boshirov Identified as GRU Colonel Anatoliy Chepiga
2018 controversies
2018 crimes in the United Kingdom
2018 in British politics
2018 in international relations
Attacks in the United Kingdom in 2018
Crime in Wiltshire
Diplomatic incidents
Failed assassination attempts in the United Kingdom
Forensic toxicology
History of Salisbury
March 2018 crimes in Europe
March 2018 events in the United Kingdom
Poisoning by drugs, medicaments and biological substances
Russian intelligence operations
Russia–United Kingdom relations
Russia–United States relations
Russian spies
2010s in Wiltshire
Chemical weapons attacks
State-sponsored terrorism
Novichok agents | Poisoning of Sergei and Yulia Skripal | [
"Chemistry",
"Environmental_science"
] | 10,489 | [
"Toxicology",
"Forensic toxicology",
"Chemical weapons",
" medicaments and biological substances",
"Chemical weapons attacks",
"Poisoning by drugs"
] |
56,824,194 | https://en.wikipedia.org/wiki/Lethal%20synthesis | Lethal synthesis, or suicide metabolism, is the biosynthesis of a toxin from a precursor which is not itself toxic, such as the synthesis of fluorocitrate from fluoroacetate or the synthesis of methylglyoxal from glycerol.
The term was first publicised by Rudolph Peters in his Croonian Lecture of 1951.
Lethal synthesis of methylglyoxal
A 1971 study published by the Harvard Medical School identified methylglyoxal, a metabolite of glycerol, as a product of lethal synthesis in a specific E. coli mutant. In E. coli, the synthesis of triose phosphate from glycerol is a reaction regulated by the synthesis rate of glycerol kinase and by feedback inhibition by fructose-1,6-bisphosphate. The study demonstrated that, in E. coli mutants that had lost both control mechanisms, glycerol kinase no longer responded to feedback regulation and the cytotoxic methylglyoxal was produced instead. A more recent review of research on methylglyoxal metabolism concluded that the compound's cytotoxic nature depends on its ability to form advanced glycation end products (AGEs). These compounds, which are thought to be factors in ageing and in the progression of degenerative diseases, have been shown to hinder the functions of the proteins they target.
References
Metabolism
Toxins | Lethal synthesis | [
"Chemistry",
"Biology",
"Environmental_science"
] | 289 | [
"Toxicology",
"Cellular processes",
"Biochemistry",
"Toxins",
"Metabolism"
] |
46,990,115 | https://en.wikipedia.org/wiki/Briggs%E2%80%93Bers%20criterion | In stability theory, the Briggs–Bers criterion is a criterion for determining whether the trivial solution to a linear partial differential equation with constant coefficients is stable, convectively unstable or absolutely unstable. This is often useful in applied mathematics, especially in fluid dynamics, because linear PDEs often govern small perturbations to a system, and we are interested in whether such perturbations grow or decay. The Briggs–Bers criterion is named after R. J. Briggs and A. Bers.
Suppose that the PDE is of the form $D(\partial_x, \partial_t)\,\psi(x,t) = 0$, where $\psi$ is a function of space $x$ and time $t$. The partial differential operator $D$ has constant coefficients, which do not depend on $x$ and $t$. Then a suitable ansatz for $\psi$ is the normal mode solution

$$\psi(x,t) = \hat{\psi}\, e^{i(kx - \omega t)}.$$

Making this ansatz is equivalent to considering the problem in Fourier space – the solution may be decomposed into its Fourier components in space and time. With this ansatz, the equation becomes

$$D(ik, -i\omega)\,\hat{\psi}\, e^{i(kx - \omega t)} = 0,$$

or, more simply,

$$D(ik, -i\omega) = 0.$$

This is a dispersion relation between $\omega$ and $k$, and tells us how each Fourier component evolves in time. In general, the dispersion relation may be very complicated, and there may be multiple values of $\omega$ which satisfy the relation for a given value of $k$, or vice versa. The solutions to the dispersion relation may be complex-valued.

Now, an initial condition can be written as a superposition of Fourier modes of the form $e^{ikx}$. In practice, the initial condition will have components of all frequencies. Each of these components evolves according to the dispersion relation, and therefore the solution at a later time may be obtained by Fourier inversion. In the simple case where $D$ is first-order in time, the dispersion relation determines a unique value of $\omega(k)$ for each given value of $k$, and so

$$\psi(x,t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{\psi}_0(k)\, e^{i(kx - \omega(k) t)} \, \mathrm{d}k,$$

where

$$\hat{\psi}_0(k) = \int_{-\infty}^{\infty} \psi(x,0)\, e^{-ikx} \, \mathrm{d}x$$

is the Fourier transform of the initial condition. In the more general case, the Fourier inversion must be performed by contour integration in the complex $k$ and $\omega$ planes.

While it may not be possible to evaluate the integrals explicitly, asymptotic properties of $\psi(x,t)$ as $t \to \infty$ may be obtained from the integral expression, using methods such as the method of stationary phase or the method of steepest descent. In particular, we can determine whether $\psi$ decays or grows exponentially in time, by considering the largest value that $\operatorname{Im}(\omega)$ may take. If the dispersion relation is such that $\operatorname{Im}(\omega) < 0$ always, then any solution will decay as $t \to \infty$, and the trivial solution is stable. If there is some mode with $\operatorname{Im}(\omega) > 0$, then that mode grows exponentially in time. By considering modes with zero group velocity ($\mathrm{d}\omega/\mathrm{d}k = 0$) and determining whether they grow or decay, we can determine whether an initial condition which is localised around $x = 0$ moves away from $x = 0$ as it grows, with $\psi \to 0$ at each fixed point (convective instability); or whether $\psi$ grows at every fixed point (absolute instability).
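As an illustration (a hypothetical example, not part of the original article), consider the linearised advection–diffusion model $\partial_t\psi + U\,\partial_x\psi = \mu\psi + \partial_x^2\psi$, whose dispersion relation is $\omega(k) = Uk + i(\mu - k^2)$. The sketch below classifies the trivial solution by first checking the largest temporal growth rate over real wavenumbers and then the growth rate of the zero-group-velocity mode; the parameter values are assumptions chosen for the demonstration.

```python
# Sketch: Briggs-Bers classification for the model
#   d(psi)/dt + U d(psi)/dx = mu*psi + d^2(psi)/dx^2,
# with dispersion relation omega(k) = U*k + 1j*(mu - k**2).
import numpy as np

def classify(U, mu, k_max=10.0, n=100001):
    k = np.linspace(-k_max, k_max, n)        # real wavenumbers
    omega = U * k + 1j * (mu - k**2)         # dispersion relation
    if np.max(omega.imag) <= 0:              # no growing Fourier mode
        return "stable"
    # Zero-group-velocity point: d(omega)/dk = U - 2i k = 0
    k_saddle = -1j * U / 2
    omega_saddle = U * k_saddle + 1j * (mu - k_saddle**2)
    # The mode with zero group velocity grows in place iff Im(omega) > 0 there
    return "absolutely unstable" if omega_saddle.imag > 0 else "convectively unstable"

for U, mu in [(2.0, -0.5), (2.0, 0.5), (2.0, 1.5)]:
    print(f"U={U}, mu={mu}: {classify(U, mu)}")
```

For this model the saddle-point growth rate works out to $\mu - U^2/4$, so the three parameter choices print stable, convectively unstable and absolutely unstable respectively.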
Transient growth
Suppose the PDE is of the form

$$\frac{\partial \psi}{\partial t} = L\psi,$$

where $L$ is a linear differential operator in $x$. In general, $L$ is not a normal operator. While the large-time behaviour of $\psi$ is still determined by the eigenvalues of $L$, the behaviour which takes place before this large-time behaviour may be dramatically different.

In particular, while the eigenvalues of $L$ may all have negative real part, which would predict that $\psi$ decays exponentially at large times and that the trivial state is stable, it is possible for $\psi$ to grow transiently and become large before decaying. In practice, the linear equations that we work with are linearisations of more complicated governing equations such as the Navier–Stokes equations about some base state, with the linearisations carried out under the assumption that the perturbation quantity is small. Transient growth may violate this assumption. When nonlinear effects are considered, then a system may be unstable even if the linearised system is stable.
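A minimal numerical sketch of this phenomenon (a hypothetical example, not from the article): the matrix below is spectrally stable (eigenvalues −1 and −2) but non-normal, and the norm of its matrix exponential grows well above 1 before eventually decaying.

```python
# Sketch: transient growth for a non-normal but spectrally stable system
#   d(psi)/dt = L psi.
import numpy as np
from scipy.linalg import expm

L = np.array([[-1.0, 50.0],
              [ 0.0, -2.0]])   # eigenvalues -1 and -2: asymptotically stable

for t in np.linspace(0.0, 5.0, 11):
    # The spectral norm of exp(L t) is the worst-case amplification of any
    # initial condition at time t; for this L it peaks near t = ln 2.
    amplification = np.linalg.norm(expm(L * t), 2)
    print(f"t = {t:4.1f}   amplification = {amplification:8.3f}")
```

Despite the decaying eigenvalues, the amplification reaches roughly 12.5 before the exponential decay takes over, which is exactly the kind of behaviour that can push a small perturbation out of the linear regime.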
Generalisation
When the coefficients of $D$ vary with $x$, then this criterion is no longer applicable. However, if the variation is very slow, then the WKBJ approximation may be used to derive a leading-order approximation to the solution. This gives rise to the theory of global modes, which was first developed by Philip Drazin in 1974.
References
Stability theory | Briggs–Bers criterion | [
"Mathematics"
] | 813 | [
"Stability theory",
"Dynamical systems"
] |
46,995,394 | https://en.wikipedia.org/wiki/Robert%20Paton%20%28chemist%29 | Robert Paton won the 2015 Harrison-Meldola Memorial Prize awarded by the Royal Society of Chemistry. Up to three Harrison-Meldola Memorial Prizes are awarded each year. Paton received the OpenEye Outstanding Junior Faculty Award from the American Chemical Society COMP division in fall 2015.
Paton was formerly an associate professor in organic chemistry at the University of Oxford and a Fellow of St Hilda's College. Since 2018, he has been a professor at Colorado State University in Fort Collins, Colorado.
References
External links
English chemists
Academics of the University of Oxford
Year of birth missing (living people)
People from Stockport
Living people
Place of birth missing (living people)
Computational chemists
Colorado State University faculty | Robert Paton (chemist) | [
"Chemistry"
] | 144 | [
"Computational chemistry",
"Theoretical chemists",
"Computational chemists"
] |
46,995,511 | https://en.wikipedia.org/wiki/Christopher%20A.%20Lipinski | Christopher A. Lipinski is a medicinal chemist who is working at Pfizer, Inc. He is known for his "rule of five", an algorithm that predicts drug compounds that are likely to have oral activity. By the number of citations, he is the most cited author of some pharmacology journals: Journal of Pharmacological and Toxicological Methods, Advanced Drug Delivery Reviews, Drug Discovery Today: Technologies.
Biography
Lipinski received his PhD from the University of California, Berkeley in 1968 in physical organic chemistry. The Advanced Drug Delivery Reviews article reporting his "rule of five" is one of the most cited publications in the journal's history. In 2006, he received an honorary law degree from the University of Dundee and he has won various awards, including being the Society for Biomolecular Sciences' winner of the 2006 SBS Achievement Award for Innovation in HTS.
References
University of California, Berkeley alumni
Year of birth missing (living people)
Living people
20th-century American chemists
21st-century American chemists
Pfizer people
Place of birth missing (living people)
Computational chemists
Drug discovery | Christopher A. Lipinski | [
"Chemistry",
"Biology"
] | 227 | [
"Life sciences industry",
"Drug discovery",
"Computational chemists",
"Computational chemistry",
"Theoretical chemists",
"Medicinal chemistry"
] |
46,995,734 | https://en.wikipedia.org/wiki/British%20Mass%20Spectrometry%20Society | The British Mass Spectrometry Society is a registered charity founded in 1964 that encourages participation in every aspect of mass spectrometry. It aims to encourage participation in all aspects of mass spectrometry on the widest basis, to promote knowledge and advancement in the field and to provide a forum for the exchange of views and information. The first foundations of the BMSS were laid in 1949 with the establishment of the Mass Spectrometry Panel by the Hydrocarbon Research Group.
Conferences
The society's annual meeting, held in the first week of September, has taken place at locations throughout the United Kingdom since 1965; regular special interest group meetings (Lipidomics, MALDI & Imaging, Ambient Ionisation, Environmental & Food Analysis) are held through the year.
Grants
In 1985, the Society used the proceeds from the 10th International Mass Spectrometry Conference to establish 7 Beynon PhD Studentships. In 2007, the Society announced they would initiate summer studentship projects and in 2012 they announced BMSS research grants.
Publications
Mass Matters
Governance
Executive committee
The management of the Society is vested in an Executive Committee made up of Officers and General Members, they also act as Trustees of the Society. There are currently 10 officers of the Society namely the Chair, Vice-Chair, Treasurer, General Secretary, Meetings Secretary, Papers Secretary, Education Officer, Publicity Secretary, Special Interest Group Co-ordinator, and Digital Communications Officer.
Presidents
John Monaghan 2003 - 2008
Past chairs
Awards
Aston Medal
In 1987 the society announce the establishment of the Aston Medal to be awarded to “individuals deserving special recognition by reason of their outstanding contributions to knowledge in the biological, chemical, engineering, mathematical, medical, or physical sciences relating directly to mass spectrometry”.
BMSS Medal
In 2002 the BMSS Medal was established by the society “to recognise sustained contributions by individual members of the British Mass Spectrometry Society to the development of mass spectrometry, primarily within the UK.”
Prizes
The BMSS awards prizes to early career members during its annual meeting, named in honour of prominent past members.
Barber Prize
Collen Maxwell
Tia Hawkins
Bordoli Prize
Maria Elena Catellani
Atakan Nalbant
B.N. Green Prize
Cara Jackson
Callan Littlejohn
References
1964 establishments in the United Kingdom
Chemistry education
Chemical industry in the United Kingdom
Learned societies of the United Kingdom
Mass spectrometry
Science and technology in the United Kingdom
Scientific organizations established in 1964 | British Mass Spectrometry Society | [
"Physics",
"Chemistry"
] | 510 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
46,996,238 | https://en.wikipedia.org/wiki/Single-root%20input/output%20virtualization | In virtualization, single root input/output virtualization (SR-IOV) is a specification that allows the isolation of PCI Express resources for manageability and performance reasons.
Details
A single physical PCI Express bus can be shared in a virtual environment using the SR-IOV specification. SR-IOV offers different virtual functions to different virtual components (e.g. a network adapter) on a physical server machine, and uses physical and virtual functions to control or configure PCIe devices. Physical functions have the ability to move data in and out of the device, while virtual functions are lightweight PCIe functions that support data flow but have a restricted set of configuration resources. The virtual or physical functions available to the hypervisor or guest operating system depend on the PCIe device.
SR-IOV allows different virtual machines (VMs) in a virtual environment to share a single PCI Express hardware interface. In contrast, MR-IOV (multi-root IOV) allows PCI Express resources to be shared among VMs on different physical machines.
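As a concrete illustration (a sketch under assumptions, not part of the specification itself), Linux exposes SR-IOV through sysfs: a physical function's device directory contains `sriov_totalvfs` (how many VFs the device supports) and `sriov_numvfs` (how many are currently enabled). The PCI address and VF count below are assumed example values; the script must run as root on SR-IOV-capable hardware.

```python
# Sketch: enabling SR-IOV virtual functions via the Linux sysfs interface.
from pathlib import Path

def enable_vfs(pci_addr: str, num_vfs: int) -> None:
    dev = Path("/sys/bus/pci/devices") / pci_addr
    total = int((dev / "sriov_totalvfs").read_text())   # VFs the device supports
    if num_vfs > total:
        raise ValueError(f"device supports at most {total} VFs")
    # The VF count must be reset to 0 before it can be changed to a new value
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(num_vfs))

if __name__ == "__main__":
    enable_vfs("0000:03:00.0", 4)   # assumed PCI address of a capable NIC
```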
InfiniBand
A major field of application for SR-IOV is high-performance computing (HPC). The use of high-performance InfiniBand networking cards is growing within the HPC sector, and there is early research into using SR-IOV to enable InfiniBand within virtual machines under hypervisors such as Xen.
See also
I/O virtualization
References
Hardware virtualization
Computer networking
Peripheral Component Interconnect | Single-root input/output virtualization | [
"Technology",
"Engineering"
] | 315 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
45,680,415 | https://en.wikipedia.org/wiki/Epiphytic%20bacteria | Epiphytic bacteria are bacteria which live non-parasitically on the surface of a plant on various organs such as the leaves, roots, flowers, buds, seeds and fruit. In current studies it has been determined that epiphytic bacteria generally don't harm the plant, but promote the formation of ice crystals. Some produce an auxin hormone which promotes plant growth and plays a role in the life cycle of the bacteria.
Different bacteria prefer different plants and different plant organs depending on the organ's nutritional content, and depending on the bacteria's colonization system which is controlled by the host plant. Bacteria which live on leaves are referred to as phyllobacteria, and bacteria which live on the root system are referred to as rhizabacteria. They adhere to the plant surface forms as 1-cluster 2- individual bacterial cell 3- biofilm . The age of the organ also affects the epiphytic bacteria population and characteristics and has a role in the inhibition of phytopathogen on plant. Epiphytic bacteria found in the marine environment have a role in the nitrogen cycle.
Species
There are diverse species of epiphytic bacteria. An incomplete list:
Citrobacter youngae
Bacillus thuringiensis
Enterobacter soli
Bacillus tequilensis
Bacillus aryabhattai
Pantoea eucalypti
Pseudomonas palleroniana
Serratia nematodiphila
Stenotrophomonas maltophilia
Pseudomonas mosselii
Pseudomonas putida
Lysinibacillus xylanilyticus
Enterobacter asburiae
Acinetobacter johnsonii
Pseudomonas macerans
Serratia marcescens
Classification
Many epiphytic bacteria are rod-shaped, and are classified as either gram-negative or gram-positive, pigmented or non-pigmented, and fermentative or non-fermentative.
Non-pigmented epiphytic bacteria have a high GC content in their genome, a characteristic which protects the bacteria from the ultraviolet rays of the sun. Because of this, these bacteria have special nutritional requirements.
Current studies on epiphytic bacteria are underway for biotechnological applications such as the promotion of plant growth. Epiphytic bacteria are removed from the plant surface through ultraviolet radiation, chemical surface disinfection, and washing.
See also
Epiliths, organisms that grow on rocks
Zoochory, seed dispersal by animals
Epibiont, an organism that grows on another life form
Foliicolous, lichens or bryophytes that grow on leaves
Epiphyte
Endosymbiont
Epiphytic fungus
External links
http://www.pjoes.com/pdf/12.1/83-93.pdf
References
Bacteria
Bacteriology
Botany | Epiphytic bacteria | [
"Biology"
] | 587 | [
"Plants",
"Prokaryotes",
"Bacteria",
"Botany",
"Microorganisms"
] |
45,687,744 | https://en.wikipedia.org/wiki/ETC%20Group%20%28eco-justice%29 | The Action Group on Erosion, Technology, and Concentration (ETC) is an advocacy organization based around "the conservation and sustainable advancement of cultural and ecological diversity and human rights." 'ETC' is intended to be pronounced "et cetera." ETC frequently publishes opinions on scientific research by its staff and board members, covering topics such as community and regional planning, ecology, evolutionary biology, and political science.
History
The ETC Group, known until September 1, 2001 as Rural Advancement Foundation International (RAFI), has roots tracing back to the National Sharecroppers Fund established during the 1930s. The Fund, initiated by Eleanor Roosevelt and others, aimed to alleviate the challenges faced by predominantly black tenant farmers in the United States.
In the early 1970s, Pat Mooney, Hope Shand, and Cary Fowler initiated work on seed-related issues under the auspices of the Rural Advancement Foundation. Over time, they established an international branch focused on advocating for farmers' rights in the global south.
RAFI was a pioneer in civil society research, critiques, and advocacy related to farmers' rights and seed monopoly laws. The organization opposed the adoption of genetic engineering in agriculture, patents on life, biopiracy (a term coined by RAFI), and emerging life science technologies such as terminator technology, genomic technologies, and nanotechnology. RAFI played a crucial role in advocating for and influencing UN recognition of farmers' rights and the establishment of the International Treaty on Plant Genetic Resources for Food and Agriculture.
In order to secure nonprofit status in the United States, RAFI conducted a name change contest on their website in early 2001, eventually selecting the name ETC Group (etcetera) after considering numerous suggestions from the public.
Geoengineering
The organization has been active against geoengineering, as highlighted through their "Hands off Mother Earth!" campaign, which was launched in April 2010. In October 2010, they published a detailed report titled "Geopiracy: The Case Against Geoengineering," which examined various dimensions of geoengineering. The report covered proposed technologies, governance frameworks, key stakeholders in the geoengineering field, and the involvement and interests of military forces and corporations.
Diana Bronson, a spokesperson for the ETC Group, argued that global warming was largely caused by the actions of the scientific, corporate, and political elites in developed nations. She expressed concerns about entrusting these same entities to resolve the climate crisis and protect the biosphere, highlighting her skepticism regarding their motivations and effectiveness in addressing environmental issues. The organization continues to advocate for sustainable and community-led solutions, warning against quick technological fixes that may have long-term consequences.
Synthetic biology
The ETC Group actively advocates for increased regulation within the emerging scientific domain of synthetic biology, which they characterize as "extreme genetic engineering." The group's primary concerns regarding this field encompass issues related to corporate involvement as well as potential threats to biosafety and biosecurity. They have sought to raise public awareness and understanding of synthetic biology through the creation and dissemination of comic-style illustrations concerning "Synthia," the cell with the first synthetic genome, engineered by Craig Venter and the J. Craig Venter Institute. Another illustration, titled "The Story of Synthia," was later released as a small video clip.
On December 16, 2010, the Presidential Commission for the Study of Bioethical Issues issued a report recommending self-regulation by synthetic biologists, asserting that the fledgling technology posed minimal risks to society. This recommendation faced strong opposition from Jim Thomas of the ETC Group, who characterized the commission's suggestions as "disappointingly empty and timid." The ETC Group aligned with more than 50 environmental organizations, urging a moratorium on synthetic biology through a letter to government officials. They labeled the commission's conclusions as "irresponsible and dangerous," contending that "self-regulation amounts to no regulation."
On January 23, 2012, UC Berkeley's Richmond Field Station was selected as the site for the Lawrence Berkeley National Lab's secondary campus. In a press conference addressing concerns about synthetic biology at local, national, and international levels, a panel comprising five members, including Jim Thomas of the ETC Group, highlighted the risks associated with synthetic biology. The panel criticized the laboratory's affiliation with UC Berkeley as a superficial endorsement for an inadequately regulated industry with potentially perilous consequences. Additionally, Thomas characterized the industry as a "1.6 billion dollar industry" akin to "genetic engineering on steroids."
See also
Climate engineering
Marine cloud brightening
Mycoplasma laboratorium
Biological patent
References
External links
ETC group - Action Group on Erosion, Technology, and Concentration
The ETC Blog
Appropriate technology organizations
Environmental justice organizations
Environmental organizations based in the United States
Climate change organizations based in the United States
Climate engineering
Planetary engineering
Synthetic biology
Organizations established in the 1930s
1930s establishments in the United States
Year of establishment missing | ETC Group (eco-justice) | [
"Engineering",
"Biology"
] | 1,004 | [
"Planetary engineering",
"Synthetic biology",
"Biological engineering",
"Geoengineering",
"Bioinformatics",
"Molecular genetics"
] |
36,755,649 | https://en.wikipedia.org/wiki/Subsumption%20lattice | A subsumption lattice is a mathematical structure used in the theoretical background of automated theorem proving and other symbolic computation applications.
Definition
A term t1 is said to subsume a term t2 if a substitution σ exists such that σ applied to t1 yields t2. In this case, t1 is also called more general than t2, and t2 is called more specific than t1, or an instance of t1.
The set of all (first-order) terms over a given signature can be made a lattice over the partial ordering relation "... is more specific than ..." as follows:
consider two terms equal if they differ only in their variable naming,
add an artificial minimal element Ω (the overspecified term), which is considered to be more specific than any other term.
This lattice is called the subsumption lattice. Two terms are said to be unifiable if their meet differs from Ω.
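To make the ordering concrete, here is a small sketch (an illustration, not from the article) representing terms as nested tuples whose first component is the function symbol, with variables as bare strings. `subsumes(t1, t2)` returns a substitution σ with σ(t1) = t2 when t1 is more general than t2, and None otherwise.

```python
# Sketch: subsumption (one-sided matching) between first-order terms.
# Terms: a variable is a string; a compound term is a tuple (symbol, *args);
# constants are nullary compounds such as ("a",).
def is_var(t):
    return isinstance(t, str)

def subsumes(t1, t2, sigma=None):
    """Return a substitution mapping variables of t1 to subterms of t2
    so that sigma(t1) == t2, or None if t1 does not subsume t2."""
    sigma = dict(sigma or {})
    if is_var(t1):                    # bind the variable, or check consistency
        if t1 in sigma:
            return sigma if sigma[t1] == t2 else None
        sigma[t1] = t2
        return sigma
    if is_var(t2) or t1[0] != t2[0] or len(t1) != len(t2):
        return None                   # mismatched function symbols or arities
    for s1, s2 in zip(t1[1:], t2[1:]):
        sigma = subsumes(s1, s2, sigma)
        if sigma is None:
            return None
    return sigma

print(subsumes(("f", "x", "y"), ("f", ("g", ("a",)), ("a",))))
# {'x': ('g', ('a',)), 'y': ('a',)}  -- f(x,y) subsumes f(g(a),a)
print(subsumes(("f", "x", "x"), ("f", ("a",), ("b",))))
# None -- f(x,x) is not more general than f(a,b)
```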
Properties
The join and the meet operation in this lattice are called anti-unification and unification, respectively. A variable x and the artificial element Ω are the top and the bottom element of the lattice, respectively. Each ground term, i.e. each term without variables, is an atom of the lattice. The lattice has infinite descending chains, e.g. x, g(x), g(g(x)), g(g(g(x))), ..., but no infinite ascending ones.
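The join operation can be sketched in the same representation (again an illustration, not from the article; `is_var` is reused from the sketch above). This is Plotkin's anti-unification: mismatched pairs of subterms are replaced by variables, with the same pair always mapped to the same variable.

```python
# Sketch: anti-unification (the join in the subsumption lattice).
def lgg(t1, t2, table=None):
    """Least general generalisation of two terms."""
    table = {} if table is None else table
    if (not is_var(t1) and not is_var(t2)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same function symbol and arity: generalise argument-wise
        return (t1[0],) + tuple(lgg(s1, s2, table)
                                for s1, s2 in zip(t1[1:], t2[1:]))
    if (t1, t2) not in table:         # reuse one variable per distinct pair
        table[(t1, t2)] = f"x{len(table)}"
    return table[(t1, t2)]

print(lgg(("f", ("a",), ("a",)), ("f", ("b",), ("b",))))
# ('f', 'x0', 'x0')  -- the join of f(a,a) and f(b,b) is f(x,x)
```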
If f is a binary function symbol, g is a unary function symbol, and x and y denote variables, then the terms f(x,y), f(g(x),y), f(g(x),g(y)), f(x,x), and f(g(x),g(x)) form the minimal non-modular lattice N5 (see Pic. 1); its appearance prevents the subsumption lattice from being modular and hence also from being distributive.
The set of terms unifiable with a given term need not be closed with respect to meet; Pic. 2 shows a counter-example.
Denoting by Gnd(t) the set of all ground instances of a term t, the following properties hold:
t equals the join of all members of Gnd(t), modulo renaming,
t1 is an instance of t2 if and only if Gnd(t1) ⊆ Gnd(t2),
terms with the same set of ground instances are equal modulo renaming,
if t is the meet of t1 and t2, then Gnd(t) = Gnd(t1) ∩ Gnd(t2),
if t is the join of t1 and t2, then Gnd(t) ⊇ Gnd(t1) ∪ Gnd(t2).
'Sublattice' of linear terms
The set of linear terms, that is of terms without multiple occurrences of a variable, is a sub-poset of the subsumption lattice, and is itself a lattice. This lattice, too, includes N5 and the minimal non-distributive lattice M3 as sublattices (see Pic. 3 and Pic. 4) and is hence not modular, let alone distributive.
The meet operation always yields the same result in the lattice of all terms as in the lattice of linear terms.
The join operation in the all terms lattice always yields an instance of the join in the linear terms lattice; for example, the (ground) terms f(a,a) and f(b,b) have the join f(x,x) and f(x,y) in the all terms lattice and in the linear terms lattice, respectively. As the join operations do not in general agree, the linear terms lattice is not properly speaking a sublattice of the all terms lattice.
Join and meet of two proper linear terms, i.e. their anti-unification and unification, corresponds to intersection and union of their path sets, respectively. Therefore, every sublattice of the lattice of linear terms that does not contain Ω is isomorphic to a set lattice, and hence distributive (see Pic. 5).
Origin
The subsumption lattice appears to have been first investigated by Gordon D. Plotkin in 1970.
References
Lattice theory
Unification (computer science) | Subsumption lattice | [
"Mathematics"
] | 911 | [
"Automated theorem proving",
"Lattice theory",
"Unification (computer science)",
"Mathematical objects",
"Equations",
"Fields of abstract algebra",
"Order theory"
] |
36,758,458 | https://en.wikipedia.org/wiki/C13H17N3 | The molecular formula C13H17N3 (molar mass: 215.29 g/mol, exact mass: 215.1422 u) may refer to:
BRL-44408
Tramazoline
Molecular formulas | C13H17N3 | [
"Physics",
"Chemistry"
] | 63 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
36,758,654 | https://en.wikipedia.org/wiki/Rigidity%20matroid | In the mathematics of structural rigidity, a rigidity matroid is a matroid that describes the number of degrees of freedom of an undirected graph with rigid edges of fixed lengths, embedded into Euclidean space. In a rigidity matroid for a graph with n vertices in d-dimensional space, a set of edges that defines a subgraph with k degrees of freedom has matroid rank dn − k. A set of edges is independent if and only if, for every edge in the set, removing the edge would increase the number of degrees of freedom of the remaining subgraph.
Definition
A framework is an undirected graph, embedded into d-dimensional Euclidean space by providing a d-tuple of Cartesian coordinates for each vertex of the graph. From a framework with n vertices and m edges, one can define a matrix with m rows and nd columns, an expanded version of the incidence matrix of the graph called the rigidity matrix. In this matrix, the entry in row e and column (v,i) is zero if v is not an endpoint of edge e. If, on the other hand, edge e has vertices u and v as endpoints, then the value of the entry is the difference between the ith coordinates of v and u.
The rigidity matroid of the given framework is a linear matroid that has as its elements the edges of the graph. A set of edges is independent, in the matroid, if it corresponds to a set of rows of the rigidity matrix that is linearly independent. A framework is called generic if the coordinates of its vertices are algebraically independent real numbers. Any two generic frameworks on the same graph G determine the same rigidity matroid, regardless of their specific coordinates. This is the (d-dimensional) rigidity matroid of G.
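The construction of the rigidity matrix and the rank test for independence can be sketched directly; a minimal illustration assuming numpy (function names are hypothetical and the coordinates are only approximately generic):

```python
# Build the rigidity matrix of a framework and test an edge set for
# independence in the rigidity matroid (independent iff the rows are
# linearly independent, i.e. rank equals the number of edges).
import numpy as np

def rigidity_matrix(points, edges):
    """points: (n, d) array of vertex coordinates; edges: list of (u, v)."""
    n, d = points.shape
    R = np.zeros((len(edges), n * d))
    for row, (u, v) in enumerate(edges):
        diff = points[u] - points[v]     # coordinate differences for the edge
        R[row, u*d:(u+1)*d] = diff       # entries in the columns of u
        R[row, v*d:(v+1)*d] = -diff      # opposite sign in the columns of v
    return R

# A triangle in the plane: 3 edges, rank 3 = 2*3 - 3, so minimally rigid.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.31, 0.87]])
edges = [(0, 1), (1, 2), (0, 2)]
R = rigidity_matrix(pts, edges)
rank = np.linalg.matrix_rank(R)
print(rank, rank == len(edges))          # 3 True
```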
Statics
A load on a framework is a system of forces on the vertices (represented as vectors). A stress is a special case of a load, in which equal and opposite forces are applied to the two endpoints of each edge (which may be imagined as a spring) and the forces formed in this way are added at each vertex. Every stress is an equilibrium load, a load that does not impose any translational force on the whole system (the sum of its force vectors is zero) nor any rotational force. A linear dependence among the rows of the rigidity matrix may be represented as a self-stress, an assignment of equal and opposite forces to the endpoints of each edge that is not identically zero but that adds to zero at every vertex. Thus, a set of edges forms an independent set in the rigidity matroid if and only if it has no self-stress.
The vector space of all possible loads, on a system of n vertices, has dimension dn, among which the equilibrium loads form a subspace of dimension dn − d(d+1)/2. An independent set in the rigidity matroid has a system of equilibrium loads whose dimension equals the cardinality of the set, so the maximum rank that any set in the matroid can have is dn − d(d+1)/2. If a set has this rank, it follows that its set of stresses is the same as the space of equilibrium loads. Alternatively and equivalently, in this case every equilibrium load on the framework may be resolved by a stress that generates an equal and opposite set of forces, and the framework is said to be statically rigid.
Kinematics
If the vertices of a framework are in a motion, then that motion may be described over small scales of distance by its gradient, a vector for each vertex specifying its speed and direction. The gradient describes a linearized approximation to the actual motion of the points, in which each point moves at constant velocity in a straight line. The gradient may be described as a row vector that has one real number coordinate for each pair where is a vertex of the framework and is the index of one of the Cartesian coordinates of -dimensional space; that is, the dimension of the gradient is the same as the width of the rigidity matrix.
If the edges of the framework are assumed to be rigid bars that can neither expand nor contract (but can freely rotate) then any motion respecting this rigidity must preserve the lengths of the edges: the derivative of length, as a function of the time over which the motion occurs, must remain zero. This condition may be expressed in linear algebra as a constraint that the gradient vector of the motion of the vertices must have zero inner product with the row of the rigidity matrix that represents the given edge. Thus, the family of gradients of (infinitesimally) rigid motions is given by the nullspace of the rigidity matrix. For frameworks that are not in generic position, it is possible that some infinitesimally rigid motions (vectors in the nullspace of the rigidity matrix) are not the gradients of any continuous motion, but this cannot happen for generic frameworks.
A rigid motion of the framework is a motion such that, at each point in time, the framework is congruent to its original configuration. Rigid motions include translations and rotations of Euclidean space; the gradients of rigid motions form a linear space having the translations and rotations as a basis, of dimension d(d+1)/2, which must always be a subspace of the nullspace of the rigidity matrix.
Because the nullspace always has at least this dimension, the rigidity matroid can have rank at most dn − d(d+1)/2, and when it does have this rank the only motions that preserve the lengths of the edges of the framework are the rigid motions. In this case the framework is said to be first-order (or infinitesimally) rigid. More generally, an edge e belongs to the matroid closure of a set S of edges if and only if there does not exist a continuous motion of the framework that changes the length of e but leaves the lengths of the edges in S unchanged.
Although defined in different terms (column vectors versus row vectors, or forces versus motions) static rigidity and first-order rigidity reduce to the same properties of the underlying matrix and therefore coincide with each other. In two dimensions, the generic rigidity matroid also describes the number of degrees of freedom of a different kind of motion, in which each edge is constrained to stay parallel to its original position rather than being constrained to maintain the same length; however, the equivalence between rigidity and parallel motion breaks down in higher dimensions.
Unique realization
A framework has a unique realization in d-dimensional space if every placement of the same graph with the same edge lengths is congruent to it. Such a framework must necessarily be rigid, because otherwise there exists a continuous motion bringing it to a non-congruent placement with the same edge lengths, but unique realizability is stronger than rigidity. For instance, the diamond graph (two triangles sharing an edge) is rigid in two dimensions, but it is not uniquely realizable because it has two different realizations, one in which the triangles are on opposite sides of the shared edge and one in which they are both on the same side. Uniquely realizable graphs are important in applications that involve reconstruction of shapes from distances, such as triangulation in land surveying, the determination of the positions of the nodes in a wireless sensor network, and the reconstruction of conformations of molecules via nuclear magnetic resonance spectroscopy.
Bruce Hendrickson defined a graph to be redundantly rigid if it remains rigid after removing any one of its edges. In matroidal terms, this means that the rigidity matroid has the full rank dn − d(d+1)/2 and that the matroid does not have any coloops. Hendrickson proved that every uniquely realizable framework (with generic edge lengths) is either a complete graph or a (d + 1)-vertex-connected, redundantly rigid graph, and he conjectured that this is an exact characterization of the uniquely realizable frameworks. The conjecture is true for one and two dimensions; in the one-dimensional case, for instance, a graph is uniquely realizable if and only if it is connected and bridgeless. However, Hendrickson's conjecture is false for three or more dimensions. For frameworks that are not generic, it is NP-hard to determine whether a given framework is uniquely realizable.
Relation to sparsity
A graph is defined to be (k,l)-sparse if every nonempty subgraph with n vertices has at most kn − l edges, and (k,l)-tight if it is (k,l)-sparse and has exactly kn − l edges. From the consideration of loads and stresses it can be seen that a set of edges that is independent in the rigidity matroid forms a (d, d(d+1)/2)-sparse graph, for if not there would exist a subgraph whose number of edges would exceed the dimension of its space of equilibrium loads, from which it follows that it would have a self-stress.
By similar reasoning, a set of edges that is both independent and rigid forms a (d, d(d+1)/2)-tight graph. For instance, in one dimension, the independent sets form the edge sets of forests, the (1,1)-sparse graphs, and the independent rigid sets form the edge sets of trees, the (1,1)-tight graphs. In this case the rigidity matroid of a framework is the same as the graphic matroid of the corresponding graph.
In two dimensions, Laman (1970) showed that the same characterization is true: the independent sets form the edge sets of (2,3)-sparse graphs and the independent rigid sets form the edge sets of (2,3)-tight graphs. Based on this work the (2,3)-tight graphs (the graphs of minimally rigid generic frameworks in two dimensions) have come to be known as Laman graphs. The family of Laman graphs on a fixed set of vertices forms the set of bases of the rigidity matroid of a complete graph, and more generally, for every graph G that forms a rigid framework in two dimensions, the spanning Laman subgraphs of G are the bases of the rigidity matroid of G.
However, in higher dimensions not every (d, d(d+1)/2)-tight graph is minimally rigid, and characterizing the minimally rigid graphs (the bases of the rigidity matroid of the complete graph) is an important open problem.
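The sparsity counts above can be checked by brute force on small graphs; the following sketch (added here, exponential in the number of vertices and not the efficient pebble-game algorithm) uses only the definition:

```python
# Brute-force (k, l)-sparsity and tightness check for small graphs.
from itertools import combinations

def is_sparse(vertices, edges, k, l):
    """True iff every nonempty vertex subset spans at most max(k*n' - l, 0) edges."""
    for size in range(1, len(vertices) + 1):
        for subset in map(set, combinations(vertices, size)):
            spanned = sum(1 for (u, v) in edges if u in subset and v in subset)
            if spanned > max(k * size - l, 0):
                return False
    return True

def is_tight(vertices, edges, k, l):
    return is_sparse(vertices, edges, k, l) and len(edges) == k * len(vertices) - l

# The triangle is (2,3)-tight, i.e. a Laman graph (minimally rigid in the plane).
print(is_tight([0, 1, 2], [(0, 1), (1, 2), (0, 2)], 2, 3))  # True
```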
References
Mathematics of rigidity
Matroid theory | Rigidity matroid | [
"Physics",
"Mathematics"
] | 2,066 | [
"Mathematics of rigidity",
"Mechanics",
"Matroid theory",
"Combinatorics"
] |
36,765,013 | https://en.wikipedia.org/wiki/Non-expanding%20horizon | A non-expanding horizon (NEH) is an enclosed null surface whose intrinsic structure is preserved. An NEH is the geometric prototype of an isolated horizon which describes a black hole in equilibrium with its exterior from the quasilocal perspective. It is based on the concept and geometry of NEHs that the two quasilocal definitions of black holes, weakly isolated horizons and isolated horizons, are developed.
Definition of NEHs
A three-dimensional submanifold ∆ is defined as a generic (rotating and distorted) NEH if it respects the following conditions:
(i) ∆ is null and topologically S² × R;
(ii) Along any null normal field ℓ^a tangent to ∆, the outgoing expansion rate θ(ℓ) vanishes;
(iii) All field equations hold on ∆, and the stress–energy tensor T_ab on ∆ is such that −T^a_b ℓ^b is a future-directed causal vector for any future-directed null normal ℓ^a.
Condition (i) is fairly trivial and just states the general fact that from a 3+1 perspective an NEH ∆ is foliated by spacelike 2-spheres ∆'=S2, where S2 emphasizes that ∆' is topologically compact with genus zero. The signature of ∆ is (0,+,+) with a degenerate temporal coordinate, and the intrinsic geometry of a foliation leaf ∆'=S2 is nonevolutional. The property in condition (ii) plays a pivotal role in defining NEHs and the rich implications encoded therein will be extensively discussed below. Condition (iii) makes one feel free to apply the Newman–Penrose (NP) formalism of Einstein-Maxwell field equations to the horizon and its near-horizon vicinity; furthermore, the energy inequality is motivated from the dominant energy condition and is a sufficient condition for deriving many boundary conditions of NEHs.
Note: In this article, following the convention set up in refs., "hat" over the equality symbol means equality on the black-hole horizons (NEHs), and "hat" over quantities and operators (, , etc.) denotes those on a foliation leaf of the horizon. Also, ∆ is the standard symbol for both an NEH and the directional derivative ∆ in NP formalism, and we believe this won't cause an ambiguity.
Boundary conditions implied by the definition
Now let's work out the implications of the definition of NEHs, and these results will be expressed in the language of NP formalism with the metric signature convention (−,+,+,+) (Note: unlike the original convention (+,−,−,−), this is the usual one employed in studying trapped null surfaces and quasilocal definitions of black holes). Being a null normal to ∆, ℓ^a is automatically geodesic, κ ≘ 0, and twist free, Im(ρ) ≘ 0. For an NEH, the outgoing expansion rate θ(ℓ) along ℓ^a is vanishing, θ(ℓ) ≘ 0, and consequently ρ ≘ −θ(ℓ)/2 ≘ 0. Moreover, according to the Raychaudhuri-NP expansion-twist equation,
Dρ = ρ² + σσ̄ + Φ00,
it follows that on ∆
0 ≘ σσ̄ + Φ00,
where σ is the NP-shear coefficient. Due to the assumed energy condition (iii), we have Φ00 = (1/2) R_ab ℓ^a ℓ^b = 4π T_ab ℓ^a ℓ^b, and therefore Φ00 is nonnegative on ∆. The product σσ̄ is of course nonnegative, too. Consequently, σσ̄ and Φ00 must be simultaneously zero on ∆, i.e. σ ≘ 0 and Φ00 ≘ 0. As a summary,
κ ≘ 0, Im(ρ) ≘ 0, θ(ℓ) ≘ 0 ⇒ ρ ≘ 0, σ ≘ 0, Φ00 ≘ 0.
Thus, the isolated horizon ∆ is nonevolutional and all foliation leaves ∆'=S2 look identical with one another. The relation Φ00 ≘ 0 implies that the causal vector −T^a_b ℓ^b in condition (iii) is proportional to ℓ^a, and hence that R^a_b ℓ^b is proportional to ℓ^a on the horizon ∆. Applying this result to the related Ricci-NP scalars, we get Φ01 = Φ̄10 ≘ 0, thus
Φ00 ≘ 0, Φ01 = Φ̄10 ≘ 0.
The vanishing of the Ricci-NP scalars Φ00 and Φ01 signifies that there is no energy–momentum flux of any kind of charge across the horizon, such as electromagnetic waves, Yang–Mills flux or dilaton flux. Also, there should be no gravitational waves crossing the horizon; however, gravitational waves are propagation of perturbations of the spacetime continuum rather than flows of charges, and are therefore depicted by the four Weyl-NP scalars Ψ0, Ψ1, Ψ3, Ψ4 (excluding Ψ2) rather than the Ricci-NP quantities Φij. According to the Raychaudhuri-NP shear equation
Dσ = σ(ρ + ρ̄) + Ψ0,
restricted to the horizon (where D is tangent to ∆ and σ ≘ 0), it follows that Ψ0 ≘ 0. Moreover, the NP field equation relating δρ − δ̄σ to Ψ1 and Φ01, with ρ ≘ σ ≘ 0 on ∆ and δ, δ̄ tangent to ∆, implies that Ψ1 ≘ 0. To sum up, we have
Ψ0 ≘ Ψ1 ≘ 0,
which means that, geometrically, a principal null direction of Weyl's tensor is repeated twice and ℓ^a is aligned with the principal direction; physically, no gravitational waves (transverse component Ψ0 and longitudinal component Ψ1) enter the black hole. This result is consistent with the physical scenario defining NEHs.
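For reference, the boundary conditions obtained in this section can be collected in a single display; this recap follows the reconstruction above and uses a hatted equality for equality on ∆, as in the note on conventions:

```latex
\kappa \;\hat{=}\; 0,\qquad \operatorname{Im}\rho \;\hat{=}\; 0,\qquad \rho \;\hat{=}\; 0,\qquad \sigma \;\hat{=}\; 0,\qquad
\Phi_{00} \;\hat{=}\; 0,\qquad \Phi_{01} = \bar{\Phi}_{10} \;\hat{=}\; 0,\qquad \Psi_0 \;\hat{=}\; \Psi_1 \;\hat{=}\; 0 .
```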
Remarks: Spin coefficients related to Raychaudhuri's equation
For a better understanding of the previous section, we will briefly review the meanings of relevant NP spin coefficients in depicting null congruences. The tensor form of Raychaudhuri's equation governing null flows reads
ℓ^a ∇_a θ(ℓ) = −(1/2) θ(ℓ)² − σ_ab σ^ab + ω̃_ab ω̃^ab − R_ab ℓ^a ℓ^b,
where θ(ℓ) is defined such that θ(ℓ) = ĥ^ab ∇_a ℓ_b, with ĥ^ab the induced metric on a cross-section. The quantities in Raychaudhuri's equation are related with the spin coefficients via
θ(ℓ) = −(ρ + ρ̄), σ_ab σ^ab = 2σσ̄, ω̃_ab ω̃^ab = −(1/2)(ρ − ρ̄)²,
which follow directly from the NP definitions of ρ and σ.
Moreover, a null congruence is hypersurface orthogonal if ρ = ρ̄.
Constraints from electromagnetic fields
Vacuum NEHs, on which Φij ≘ 0, are the simplest types of NEHs, but in general there can be various physically meaningful fields surrounding an NEH, among which we are mostly interested in electrovacuum fields. This is the simplest extension of vacuum NEHs, and the nonvanishing energy-stress tensor for electromagnetic fields reads
T_ab = (1/4π) ( F_ac F_b{}^c − (1/4) g_ab F_cd F^cd ),
where F_ab refers to the antisymmetric (F_ab = −F_ba) electromagnetic field strength; T_ab is trace-free (T^a_a = 0) by definition and respects the dominant energy condition. (One should be careful with the antisymmetry of F_ab in defining the Maxwell-NP scalars.)
The boundary conditions derived in the previous section are applicable to generic NEHs. In the electromagnetic case, the Φij can be specified in a more particular way. By the NP formalism of Einstein-Maxwell equations, one has
Φij = 2 φi φ̄j, i, j ∈ {0, 1, 2},
where φ0, φ1, φ2 denote the three Maxwell-NP scalars. In particular, Φ00 = 2 φ0 φ̄0 ≘ 0 forces φ0 ≘ 0; alternatively, the condition φ0 ≘ 0 also results from the NP Maxwell equations restricted to the horizon, where κ ≘ ρ ≘ σ ≘ 0, so
φ0 ≘ 0.
It follows straightforwardly that
Φ00 = 2 φ0 φ̄0 ≘ 0, Φ01 = Φ̄10 = 2 φ0 φ̄1 ≘ 0, Φ02 = Φ̄20 = 2 φ0 φ̄2 ≘ 0.
These results demonstrate that there are no electromagnetic waves across (Φ00, Φ01) or along (Φ02) the NEH except the null geodesics generating the horizon. It is also worthwhile to point out that the relation Φij = 2 φi φ̄j is only valid for electromagnetic fields; for example, in the case of Yang–Mills fields there will be Φij = Tr(ϝi ϝ̄j), where the ϝi are Yang–Mills-NP scalars.
Adapted tetrad on NEHs and further properties
Usually, null tetrads adapted to spacetime properties are employed to achieve the most succinct NP descriptions. For example, a null tetrad can be adapted to principal null directions once the Petrov type is known; also, at some typical boundary regions such as null infinity, timelike infinity, spacelike infinity, black hole horizons and cosmological horizons, tetrads can be adapted to boundary structures. Similarly, a preferred tetrad adapted to on-horizon geometric behaviors is employed in the literature to further investigate NEHs.
As indicated from the 3+1 perspective from condition (i) in the definition, an NEH ∆ is foliated by spacelike hypersurfaces ∆'=S2 transverse to its null normal along an ingoing null coordinate , where we follow the standard notation of ingoing Eddington–Finkelstein null coordinates and use to label the 2-dimensional leaves at ; that is, . is set to be future-directed and choose the first tetrad covector as , and then there will be a unique vector field as null normals to satisfying the cross-normalization and affine parametrization ; such choice of would actually yields a preferred foliation of ∆. While are related to the extrinsic properties and null generators (i.e. null flows/geodesic congruence on ∆), the remaining two complex null vectors are to span the intrinsic geometry of a foliation leaf , tangent to ∆ and transverse to ; that is, .
Now let's check the consequences of this kind of adapted tetrad. Since
with , we have
Also, in such an adapted frame, the derivative on should be purely intrinsic; thus in the commutator
the coefficients for the directional derivatives and ∆ must be zero, that is
so the ingoing null normal field is twist-free by , and equals the ingoing expansion rate .
Discussion
So far, the definition and boundary conditions of NEHs have been introduced. The boundary conditions include those for an arbitrary NEH, specific characteristics for Einstein-Maxwell (electromagnetic) NEHs, as well as further properties in an adapted tetrad. Based on NEHs, WIHs which have valid surface gravity can be defined to generalize the black hole mechanics. WIHs are sufficient in studying the physics on the horizon, but for geometric purposes, stronger restrictions can be imposed to WIHs so as to introduce IHs, where the equivalence class of null normals fully preserves the induced connection on the horizon.
References
Black holes
General relativity | Non-expanding horizon | [
"Physics",
"Astronomy"
] | 1,850 | [
"Physical phenomena",
"Black holes",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"General relativity",
"Density",
"Theory of relativity",
"Stellar phenomena",
"Astronomical objects"
] |
34,204,168 | https://en.wikipedia.org/wiki/National%20Institute%20of%20Animal%20Biotechnology | The National Institute of Animal Biotechnology is an Indian autonomous research establishment of the Department of Biotechnology, Ministry of Science and Technology (India). The NIAB is set up in Hyderabad, India, under the leadership of Prof. Pallu Reddanna. "The state of the art of Animal Biotechnology and Transgenics institute" is housed in the NIAB Campus in Gachibowli.
The primary mandate of NIAB is the development of sustainable and globally competitive livestock (farm animals) for the public and industry through innovative and cutting-edge technology. There will be an emphasis on excellence in the production of globally competitive livestock products, pharmaceuticals (medicines), nutritional products and other biologicals related to animal health care.
Academics and research
The main focus of NIAB is to nurture bio-entrepreneurship in the animal biotechnology field and to perform translational research involving livestock which would be beneficial to mankind. There will be research laboratories in various disciplines, e.g. genomics, nutritional enrichment, transgenic technology, infectious diseases and reproductive biotechnology.
NIAB will conduct M.Sc. and Ph.D. teaching and research programmes to train young scientists. Dr. G. Taru Sharma took charge as its director on 7 December 2021.
See also
Genome Valley
Education in India
Literacy in India
List of institutions of higher education in Telangana
References
External links
Official Website NIAB
Research institutes in Hyderabad, India
Multidisciplinary research institutes
Science education in India
Biotechnology in India
Research institutes established in 2010
2010 establishments in Andhra Pradesh
Animal research institutes | National Institute of Animal Biotechnology | [
"Biology"
] | 305 | [
"Biotechnology in India",
"Biotechnology by country"
] |
34,204,639 | https://en.wikipedia.org/wiki/Decoded%20neurofeedback | Decoded Neurofeedback (DecNef) is the process of inducing knowledge in a subject by increasing neural activation in predetermined regions in the brain, such as the visual cortex. This is achieved by measuring neural activity in these regions via functional magnetic resonance imaging (fMRI), comparing this to the ideal pattern of neural activation in these regions (for the intended purpose), and giving subjects feedback on how close their current pattern of neural activity is to the ideal pattern. Without explicit knowledge of what they are supposed to be doing or thinking about, over time participants learn to induce this ideal pattern of neural activation. Corresponding to this, their 'knowledge' or way of thinking has been found to change accordingly.
Experiments conducted in 2011 at Boston University (BU) and ATR Computational Neuroscience Laboratories in Kyoto, Japan demonstrated that volunteers were able to quickly solve complex visual puzzles they had not previously had exposure to. They did so by receiving the brain patterns of other volunteers who had already learned to solve the puzzles through trial and error methods.
Neurofeedback, commonly referred to as EEG biofeedback, is a real-time method of measuring and adjusting brain activity so that the brain is rewarded at the appropriate time. This non-pharmaceutical approach to treating a variety of conditions, such as anxiety, ADHD, and depression, is based on notions of neuroplasticity and learning.
The research has far-reaching implications for treating patients with various learning disabilities, mental illness, memory problems, and motor functionality impairments.
External links
National Science Foundation: Vision Scientists Demonstrate Innovative Learning Method
Science Magazine: Perceptual Learning Incepted by Decoded fMRI Neurofeedback Without Stimulus Presentation
References
Neuroscience | Decoded neurofeedback | [
"Biology"
] | 376 | [
"Neuroscience"
] |
34,205,013 | https://en.wikipedia.org/wiki/Oscillator%20representation | In mathematics, the oscillator representation is a projective unitary representation of the symplectic group, first investigated by Irving Segal, David Shale, and André Weil. A natural extension of the representation leads to a semigroup of contraction operators, introduced as the oscillator semigroup by Roger Howe in 1988. The semigroup had previously been studied by other mathematicians and physicists, most notably Felix Berezin in the 1960s. The simplest example in one dimension is given by SU(1,1). It acts as Möbius transformations on the extended complex plane, leaving the unit circle invariant. In that case the oscillator representation is a unitary representation of a double cover of SU(1,1) and the oscillator semigroup corresponds to a representation by contraction operators of the semigroup in SL(2,C) corresponding to Möbius transformations that take the unit disk into itself.
The contraction operators, determined only up to a sign, have kernels that are Gaussian functions. On an infinitesimal level the semigroup is described by a cone in the Lie algebra of SU(1,1) that can be identified with a light cone. The same framework generalizes to the symplectic group in higher dimensions, including its analogue in infinite dimensions. This article explains the theory for SU(1,1) in detail and summarizes how the theory can be extended.
Historical overview
The mathematical formulation of quantum mechanics by Werner Heisenberg and Erwin Schrödinger was originally in terms of unbounded self-adjoint operators on a Hilbert space. The fundamental operators corresponding to position and momentum satisfy the Heisenberg commutation relations. Quadratic polynomials in these operators, which include the harmonic oscillator, are also closed under taking commutators.
A large amount of operator theory was developed in the 1920s and 1930s to provide a rigorous foundation for quantum mechanics. Part of the theory was formulated in terms of unitary groups of operators, largely through the contributions of Hermann Weyl, Marshall Stone and John von Neumann. In turn these results in mathematical physics were subsumed within mathematical analysis, starting with the 1933 lecture notes of Norbert Wiener, who used the heat kernel for the harmonic oscillator to derive the properties of the Fourier transform.
The uniqueness of the Heisenberg commutation relations, as formulated in the Stone–von Neumann theorem, was later interpreted within group representation theory, in particular the theory of induced representations initiated by George Mackey. The quadratic operators were understood in terms of a projective unitary representation of the group SU(1,1) and its Lie algebra. Irving Segal and David Shale generalized this construction to the symplectic group in finite and infinite dimensions—in physics, this is often referred to as bosonic quantization: it is constructed as the symmetric algebra of an infinite-dimensional space. Segal and Shale have also treated the case of fermionic quantization, which is constructed as the exterior algebra of an infinite-dimensional Hilbert space. In the special case of conformal field theory in 1+1 dimensions, the two versions become equivalent via the so-called "boson-fermion correspondence." Not only does this apply in analysis where there are unitary operators between bosonic and fermionic Hilbert spaces, but also in the mathematical theory of vertex operator algebras. Vertex operators themselves originally arose in the late 1960s in theoretical physics, particularly in string theory.
André Weil later extended the construction to p-adic Lie groups, showing how the ideas could be applied in number theory, in particular to give a group theoretic explanation of theta functions and quadratic reciprocity. Several physicists and mathematicians observed the heat kernel operators corresponding to the harmonic oscillator were associated to a complexification of SU(1,1): this was not the whole of SL(2,C), but instead a complex semigroup defined by a natural geometric condition. The representation theory of this semigroup, and its generalizations in finite and infinite dimensions, has applications both in mathematics and theoretical physics.
Semigroups in SL(2,C)
The group
G = SU(1,1) = { (α, β; β̄, ᾱ) : |α|² − |β|² = 1 }
is a subgroup of Gc = SL(2,C), the group of complex 2 × 2 matrices with determinant 1. If G1 = SL(2,R) then
G = C G1 C^{−1}, C = (2i)^{−1/2} (1, −i; 1, i)
(matrices are written by rows). This follows since the corresponding Möbius transformation is the Cayley transform z ↦ (z − i)/(z + i), which carries the upper half plane onto the unit disk and the real line onto the unit circle.
The group SL(2,R) is generated as an abstract group by
J = (0, 1; −1, 0)
and the subgroup of lower triangular matrices
{ (a, 0; b, a^{−1}) : a ≠ 0 }.
Indeed, the orbit of the vector
v = (1, 0)^T
under the subgroup generated by these matrices is easily seen to be the whole of R² ∖ {0}, and the stabilizer of v in G1 lies inside this subgroup.
The Lie algebra of SU(1,1) consists of matrices
(ix, w; w̄, −ix), x real, w complex.
The period 2 automorphism σ of Gc,
σ(g) = M ḡ M^{−1}
(with ḡ the entrywise complex conjugate), with
M = (0, 1; 1, 0),
has fixed point subgroup G, since
σ(g) = g exactly when g has the form (α, β; β̄, ᾱ).
Similarly the same formula defines a period two automorphism σ of the Lie algebra of Gc, the complex matrices with trace zero. A standard basis of over C is given by
Thus for −1 ≤ m, n ≤ 1
There is a direct sum decomposition
where is the +1 eigenspace of σ and the –1 eigenspace.
The matrices X in the −1 eigenspace have the form
X = (x, w; −w̄, −x), x real, w complex.
Note that
det X = −x² + |w|².
The cone C is defined by two conditions. The first is det X < 0, i.e. |w| < |x|. By definition this condition is preserved under conjugation by G. Since G is connected it leaves the two components with x > 0 and x < 0 invariant. The second condition is x < 0.
The group Gc acts by Möbius transformations on the extended complex plane. The subgroup G acts as automorphisms of the unit disk D. A semigroup H of Gc, first considered by Olshanskii, can be defined by the geometric condition that g carries the closure of the unit disk into its interior:
g(D̄) ⊂ D.
The semigroup can be described explicitly in terms of the cone C:
H = G · exp(C) · G.
In fact the matrix X in C can be conjugated by an element of G to the matrix
Y = (−y, 0; 0, y)
with
y = (x² − |w|²)^{1/2} > 0.
Since the Möbius transformation corresponding to exp Y sends z to e−2yz, it follows that the right hand side lies in the semigroup. Conversely if g lies in H it carries the closed unit disk onto a smaller closed disk in its interior. Conjugating by an element of G, the smaller disk can be taken to have centre 0. But then for appropriate y, the element exp(−Y) g carries D onto itself, so lies in G.
A similar argument shows that the closure of H, also a semigroup, is given by
H̄ = { g in SL(2,C) : g(D̄) ⊆ D̄ }.
From the above statement on conjugacy, it follows that
H̄ = G · exp(C̄) · G,
where
C̄, the closure of the cone C, is given by det X ≤ 0 and x ≤ 0.
If
then
since the latter is obtained by taking the transpose and conjugating by the diagonal matrix with entries ±1. Hence H also contains
which gives the inverse matrix if the original matrix lies in SU(1,1).
A further result on conjugacy follows by noting that every element of H must fix a point in D, which by conjugation with an element of G can be taken to be 0. Then the element of H has the lower triangular form
(a, 0; b, a^{−1}).
The set of such lower triangular matrices forms a subsemigroup H0 of H.
Since
every matrix in H0 is conjugate to a diagonal matrix by a matrix M in H0.
Similarly every one-parameter semigroup S(t) in H fixes the same point in D so is conjugate by an element of G to a one-parameter semigroup in H0.
It follows that there is a matrix M in H0 such that
with S0(t) diagonal. Similarly there is a matrix N in H0 such that
The semigroup H0 generates the subgroup L of complex lower triangular matrices with determinant 1 (given by the above formula with a ≠ 0). Its Lie algebra consists of matrices of the form
In particular the one parameter semigroup exp tZ lies in H0 for all t > 0 if and only if and
This follows from the criterion for H or directly from the formula
The exponential map is known not to be surjective in this case, even though it is surjective on the whole group L. This follows because the squaring operation is not surjective in H. Indeed, since the square of an element fixes 0 only if the original element fixes 0, it suffices to prove this in H0. Take α with |α| < 1 and
If a = α2 and
with
then the matrix
has no square root in H0. For a square root would have the form
On the other hand,
The closed semigroup is maximal in SL(2,C): any larger semigroup must be the whole of SL(2,C).
Using computations motivated by theoretical physics, the semigroup H̄ was first introduced through a defining set of inequalities, without identification as a compression semigroup, and its maximality was established directly. Using the definition as a compression semigroup, maximality reduces to checking what happens when adding a new fractional transformation g to H̄. The idea of the proof depends on considering the positions of the two discs g(D) and D. In the key cases, either one disc contains the other or they are disjoint. In the simplest cases, g is the inverse of a scaling transformation or g(D) is disjoint from D. In either case g and H̄ generate an open neighbourhood of 1 and hence the whole of SL(2,C).
A later, more direct way to prove maximality first shows that there is a g in S sending D onto the disk Dc, |z| > 1. In fact if x lies in S but not in the closure of H, then there is a small disk D1 in D such that xD1 lies in Dc. Then for some h in H, D1 = hD. Similarly yxD1 = Dc for some y in H. So g = yxh lies in S and sends D onto Dc. It follows that g2 fixes the unit disc D so lies in SU(1,1). So g−1 lies in S. If t lies in H then tgD contains gD. Hence g−1t−1g carries D into itself, so lies in the closure of H. So t−1 lies in S and therefore S contains an open neighbourhood of 1. Hence S = SL(2,C).
Exactly the same argument works for Möbius transformations on Rn and the open semigroup taking the closed unit sphere ||x|| ≤ 1 into the open unit sphere ||x|| < 1. The closure is a maximal proper semigroup in the group of all Möbius transformations. When n = 1, the closure corresponds to Möbius transformations of the real line taking the closed interval [–1,1] into itself.
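As a numerical illustration (a sketch added here, assuming numpy; the parametrization su11 and all helper names are hypothetical), one can test the compression property g(D̄) ⊂ D for an element of G · exp(C) · G:

```python
# Elements of the Olshanski semigroup compress the closed unit disk into
# the open disk under the Mobius action z -> (az + b)/(cz + d).
import numpy as np

rng = np.random.default_rng(0)

def su11(r, phi, psi):
    """An element of SU(1,1): [[a, b], [conj(b), conj(a)]], |a|^2 - |b|^2 = 1."""
    a = np.cosh(r) * np.exp(1j * phi)
    b = np.sinh(r) * np.exp(1j * psi)
    return np.array([[a, b], [np.conj(b), np.conj(a)]])

def mobius(g, z):
    return (g[0, 0] * z + g[0, 1]) / (g[1, 0] * z + g[1, 1])

y = 0.3
core = np.diag([np.exp(-y), np.exp(y)])   # exp(Y), acting as z -> exp(-2y) z
g = su11(*rng.uniform(-1, 1, 3)) @ core @ su11(*rng.uniform(-1, 1, 3))

z = np.exp(1j * np.linspace(0, 2 * np.pi, 1000))   # points of the unit circle
print(np.abs(mobius(g, z)).max() < 1)              # True: image lies inside D
```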
The semigroup H and its closure have a further piece of structure inherited from G, namely inversion on G extends to an antiautomorphism of H and its closure, which fixes the elements in exp C and its closure. For
the antiautomorphism is given by
and extends to an antiautomorphism of SL(2,C).
Similarly the antiautomorphism
leaves G1 invariant and fixes the elements in exp C1 and its closure, so it has analogous properties for the semigroup in G1.
Commutation relations of Heisenberg and Weyl
Let 𝒮 be the space of Schwartz functions on R. It is dense in the Hilbert space L2(R) of square-integrable functions on R. Following the terminology of quantum mechanics, the "momentum" operator P and "position" operator Q are defined on 𝒮 by
Pf(x) = −i f′(x), Qf(x) = x f(x).
These operators satisfy the Heisenberg commutation relation
PQ − QP = −iI.
Both P and Q are self-adjoint for the inner product on 𝒮 inherited from L2(R).
Two one parameter unitary groups U(s) and V(t) can be defined on 𝒮 and L2(R) by
U(s)f(x) = f(x + s), V(t)f(x) = e^{itx} f(x).
By definition
(d/ds) U(s)f |_{s=0} = iPf, (d/dt) V(t)f |_{t=0} = iQf
for f in 𝒮, so that formally
U(s) = e^{isP}, V(t) = e^{itQ}.
It is immediate from the definition that the one parameter groups U and V satisfy the Weyl commutation relation
U(s)V(t) = e^{ist} V(t)U(s).
The realization of U and V on L2(R) is called the Schrödinger representation.
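A quick numerical sanity check of the Weyl relation on a discretized line; a sketch assuming numpy, with the shift s restricted to whole grid steps so that U(s) is exact (the wrap-around of the periodic grid is negligible for a rapidly decaying test vector):

```python
# Verify U(s)V(t) = exp(ist) V(t)U(s) on a periodic grid.
import numpy as np

n, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = L / n
f = np.exp(-x**2 / 2)                  # a Gaussian test vector

def U(s_steps, g):
    """Translation by s = s_steps * dx: (U(s)g)(x) = g(x + s)."""
    return np.roll(g, -s_steps)

def V(t, g):
    """Multiplication by the character exp(itx)."""
    return np.exp(1j * t * x) * g

s_steps, t = 37, 1.3
s = s_steps * dx
lhs = U(s_steps, V(t, f))              # U(s)V(t)f
rhs = np.exp(1j * s * t) * V(t, U(s_steps, f))
print(np.allclose(lhs, rhs))           # True
```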
Fourier transform
The Fourier transform is defined on 𝒮 by
f̂(t) = (2π)^{−1/2} ∫ f(x) e^{−ixt} dx.
It defines a continuous map of 𝒮 into itself for its natural topology.
Contour integration shows that the function
f0(x) = e^{−x²/2}
is its own Fourier transform.
On the other hand, integrating by parts or differentiating under the integral sign gives the intertwining relations
(Pf)^ = Q f̂, (Qf)^ = −P f̂.
It follows that the operator on 𝒮 defined by
Tf(x) = (f̂)^(−x)
commutes with both Q (and P). On the other hand,
T f0 = f0,
and since
f0(x) = e^{−x²/2}
lies in 𝒮, it follows that
T = I
and hence
(f̂)^(x) = f(−x).
This implies the Fourier inversion formula:
f(x) = (2π)^{−1/2} ∫ f̂(t) e^{ixt} dt,
and shows that the Fourier transform is an isomorphism of 𝒮 onto itself.
By Fubini's theorem
∫ f̂(t) g(t) dt = ∫ f(x) ĝ(x) dx.
When combined with the inversion formula this implies that the Fourier transform preserves the inner product,
(f̂, ĝ) = (f, g),
so defines an isometry of 𝒮 onto itself.
By density it extends to a unitary operator on L2(R), as asserted by Plancherel's theorem.
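A numerical spot-check of the two facts just used (the Gaussian is its own transform; the transform is an isometry) via simple quadrature; a sketch assuming numpy:

```python
# Check f0(x) = exp(-x^2/2) is its own Fourier transform, and Plancherel on f0.
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
f0 = np.exp(-x**2 / 2)

def fourier(f, t):
    """f_hat(t) = (2*pi)^(-1/2) * integral of f(x) exp(-i x t) dx."""
    return np.sum(f * np.exp(-1j * x * t)) * dx / np.sqrt(2 * np.pi)

ts = np.array([0.0, 0.5, 1.0, 2.5])
print(np.allclose([fourier(f0, t) for t in ts], np.exp(-ts**2 / 2)))  # True

f0_hat = np.array([fourier(f0, t) for t in x])
print(np.isclose(np.sum(abs(f0_hat)**2) * dx, np.sum(f0**2) * dx))    # True
```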
Stone–von Neumann theorem
Suppose U(s) and V(t) are one parameter unitary groups on a Hilbert space satisfying the Weyl commutation relations
For F in 𝒮(R²), the Schwartz functions on R², let
T(F) = ∫∫ F(s, t) U(s)V(t) ds dt
and define a bounded operator on the Hilbert space ℋ in this way. Then
T(F1) T(F2) = T(F1 ♮ F2),
where ♮ denotes the twisted convolution
(F1 ♮ F2)(s, t) = ∫∫ F1(s1, t1) F2(s − s1, t − t1) e^{−i(s − s1) t1} ds1 dt1.
The operators T(F) have an important non-degeneracy property: the linear span of all vectors T(F)ξ is dense in ℋ.
Indeed, if f ds and g dt define probability measures with compact support, then the smeared operators
U(f) = ∫ U(s) f(s) ds, V(g) = ∫ V(t) g(t) dt
satisfy
‖U(f)‖ ≤ 1, ‖V(g)‖ ≤ 1,
and converge in the strong operator topology to the identity operator if the supports of the measures decrease to 0.
Since U(f)V(g) has the form T(F), non-degeneracy follows.
When (U, V) is the Schrödinger representation on L2(R), the operator T(F) is given by
T(F)f(x) = ∫ K(x, y) f(y) dy, where K(x, y) = ∫ F(y − x, t) e^{ity} dt.
It follows from this formula that U and V jointly act irreducibly on the Schrödinger representation since this is true for the operators given by kernels that are Schwartz functions. A concrete description is provided by Linear canonical transformations.
Conversely given a representation of the Weyl commutation relations on , it gives rise to a non-degenerate representation of the *-algebra of kernel operators. But all such representations are on an orthogonal direct sum of copies of L2(R) with the action on each copy as above. This is a straightforward generalisation of the elementary fact that the representations of the N × N matrices are on direct sums of the standard representation on CN. The proof using matrix units works equally well in infinite dimensions.
The one parameter unitary groups U and V leave each component invariant, inducing the standard action on the Schrödinger representation.
In particular this implies the Stone–von Neumann theorem: the Schrödinger representation is the unique irreducible representation of the Weyl commutation relations on a Hilbert space.
Oscillator representation of SL(2,R)
Given U and V satisfying the Weyl commutation relations, define
W(z) = e^{−ist/2} U(s)V(t), z = (s, t).
Then
W(z1)W(z2) = e^{iB(z1,z2)/2} W(z1 + z2),
so that W defines a projective unitary representation of R2 with cocycle given by
ω(z1, z2) = e^{iB(z1,z2)/2},
where z = (s, t) and B is the symplectic form on R2 given by
B(z1, z2) = s1 t2 − s2 t1.
By the Stone–von Neumann theorem, there is a unique irreducible representation corresponding to this cocycle.
It follows that if g is an automorphism of R2 preserving the form B, i.e. an element of SL(2,R), then there is a unitary π(g) on L2(R) satisfying the covariance relation
π(g) W(z) π(g)* = W(g·z).
By Schur's lemma the unitary π(g) is unique up to multiplication by a scalar ζ with |ζ| = 1, so that π defines a projective unitary representation of SL(2,R).
This can be established directly using only the irreducibility of the Schrödinger representation. Irreducibility was a direct consequence of the fact that the operators T(F),
with kernel K a Schwartz function, correspond exactly to operators given by kernels with Schwartz functions.
These are dense in the space of Hilbert–Schmidt operators, which, since it contains the finite rank operators, acts irreducibly.
The existence of π can be proved using only the irreducibility of the Schrödinger representation. The operators are unique up to a sign with
π(g) W(z) π(g)* = W(g·z),
so that the 2-cocycle for the projective representation of SL(2,R) takes values ±1.
In fact the group SL(2,R) is generated by matrices of the form
and it can be verified directly that the following operators satisfy the covariance relations above:
The generators gi satisfy the following Bruhat relations, which uniquely specify the group SL(2,R):
It can be verified by direct calculation that these relations are satisfied up to a sign by the corresponding operators, which establishes that the cocycle takes values ±1.
There is a more conceptual explanation using an explicit construction of the metaplectic group as a double cover of SL(2,R). SL(2,R) acts by Möbius transformations on the upper half plane H. Moreover, if
g = (a, b; c, d),
then
g·z = (az + b)/(cz + d).
The function
m(g, z) = cz + d
satisfies the 1-cocycle relation
m(gh, z) = m(g, h·z) m(h, z).
For each g, the function m(g,z) is non-vanishing on H and therefore has two possible holomorphic square roots. The metaplectic group is defined as the group
Mp(2,R) = { (g, G) : g in SL(2,R), G holomorphic on H with G(z)² = m(g, z) }.
By definition it is a double cover of SL(2,R) and is connected. Multiplication is given by
(g, G)(h, K) = (gh, L),
where
L(z) = G(h·z) K(z).
Thus for an element g of the metaplectic group there is a uniquely determined function m(g,z)1/2 satisfying the 1-cocycle relation.
If Im z > 0, then
f_z(x) = e^{izx²/2}
lies in L2 and is called a coherent state.
These functions lie in a single orbit of SL(2,R) generated by
f_i(x) = e^{−x²/2},
since for g in SL(2,R) the image π(g) f_z is proportional to f_{g·z}.
More specifically if g lies in Mp(2,R) then
π(g) f_z = m(g, z)^{−1/2} f_{g·z}.
Indeed, if this holds for g and h, it also holds for their product. On the other hand, the formula is easily checked if g has the form gi and these are generators.
This defines an ordinary unitary representation of the metaplectic group.
The element (1,–1) acts as multiplication by –1 on L2(R), from which it follows that the cocycle on SL(2,R) takes only values ±1.
Maslov index
As explained in , the 2-cocycle on SL(2,R) associated with the metaplectic representation, taking values ±1, is determined by the Maslov index.
Given three non-zero vectors u, v, w in the plane, their Maslov index τ(u, v, w) is defined as the signature of the quadratic form on R3 defined by
Q(a, b, c) = ab B(u,v) + bc B(v,w) + ca B(w,u).
Properties of the Maslov index:
it depends on the one-dimensional subspaces spanned by the vectors
it is invariant under SL(2,R)
it is alternating in its arguments, i.e. its sign changes if two of the arguments are interchanged
it vanishes if two of the subspaces coincide
it takes the values –1, 0 and +1: if u and v satisfy B(u,v) = 1 and w = au + bv, then the Maslov index is zero if ab = 0 and is otherwise equal to minus the sign of ab
Picking a non-zero vector u0, it follows that the function
ω(g1, g2) = exp( iπ τ(u0, g1 u0, g1 g2 u0) / 4 )
defines a 2-cocycle on SL(2,R) with values in the eighth roots of unity.
A modification of the 2-cocycle can be used to define a 2-cocycle with values in ±1 connected with the metaplectic cocycle.
In fact given non-zero vectors u, v in the plane, define f(u,v) to be
i times the sign of B(u,v) if u and v are not proportional
the sign of λ if u = λv.
If
then
The representatives π(g) in the metaplectic representation can be chosen so that
π(g1) π(g2) = ω(g1, g2) π(g1 g2),
where the 2-cocycle ω, taking values ±1, is given in terms of the Maslov index and the function f above.
Holomorphic Fock space
Holomorphic Fock space ℱ (also known as the Segal–Bargmann space) is defined to be the vector space of holomorphic functions f(z) on C with
(1/π) ∫_C |f(z)|² e^{−|z|²} du dv (z = u + iv)
finite. It has inner product
(f1, f2) = (1/π) ∫_C f1(z) \overline{f2(z)} e^{−|z|²} du dv.
ℱ is a Hilbert space with orthonormal basis
e_n(z) = z^n / √(n!), n ≥ 0.
Moreover, the power series expansion of a holomorphic function in ℱ gives its expansion with respect to this basis. Thus for z in C
|f(z)| ≤ e^{|z|²/2} ‖f‖,
so that evaluation at z gives a continuous linear functional on ℱ. In fact
f(a) = (f, E_a),
where
E_a(z) = e^{\bar a z}.
Thus in particular ℱ is a reproducing kernel Hilbert space.
For f in and z in C define
Then
so this gives a unitary representation of the Weyl commutation relations. Now
It follows that the representation is irreducible.
Indeed, any function orthogonal to all the Ea must vanish, so that their linear span is dense in .
If P is an orthogonal projection commuting with W(z), let f = PE0. Then
The only holomorphic function satisfying this condition is the constant function. So
with λ = 0 or 1. Since E0 is cyclic, it follows that P = 0 or I.
By the Stone–von Neumann theorem there is a unitary operator from L2(R) onto , unique up to multiplication by a scalar, intertwining the two representations of the Weyl commutation relations. By Schur's lemma and the Gelfand–Naimark construction, the matrix coefficient of any vector determines the vector up to a scalar multiple. Since the matrix coefficients of F = E0 and f = H0 are equal, it follows that the unitary is uniquely determined by the properties
and
Hence for f in L2(R)
ℬf(z) = ∫ B(z, x) f(x) dx,
so that
ℬ H0 = E0 = 1,
where
B(z, x) = π^{−1/4} exp( −z²/2 + √2 z x − x²/2 ).
The operator ℬ is called the Segal–Bargmann transform and B is called the Bargmann kernel.
The adjoint of ℬ is given by the formula:
ℬ*F(x) = (1/π) ∫_C B(\bar w, x) F(w) e^{−|w|²} du dv (w = u + iv).
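The kernel reconstructed above can be validated numerically: the transform of the ground-state Hermite function H0(x) = π^{−1/4} e^{−x²/2} should be the constant function 1 (the vector E0). A sketch assuming numpy, with quadrature on a truncated grid:

```python
# Segal-Bargmann transform of H0 evaluated at a few complex points.
import numpy as np

x = np.linspace(-20, 20, 20001)
dx = x[1] - x[0]
H0 = np.pi**-0.25 * np.exp(-x**2 / 2)

def bargmann(f, z):
    """(Bf)(z) with kernel pi^(-1/4) exp(-z^2/2 + sqrt(2) z x - x^2/2)."""
    kernel = np.pi**-0.25 * np.exp(-z**2 / 2 + np.sqrt(2) * z * x - x**2 / 2)
    return np.sum(kernel * f) * dx

for z in [0.3 + 0.7j, -1.2 + 0.4j, 2.0 - 1.0j]:
    print(np.round(bargmann(H0, z), 6))   # each value is ~ 1.0
```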
Fock model
The action of SU(1,1) on holomorphic Fock space was described by Bargmann and by Itzykson.
The metaplectic double cover of SU(1,1) can be constructed explicitly as pairs (g, γ) with
and
If g = g1g2, then
using the power series expansion of (1 + z)1/2 for |z| < 1.
The metaplectic representation is a unitary representation π(g, γ) of this group satisfying the covariance relations
where
Since is a reproducing kernel Hilbert space, any bounded operator T on it corresponds to a kernel given by a power series of its two arguments. In fact if
and F in , then
The covariance relations and analyticity of the kernel imply that for S = π(g, γ),
for some constant C. Direct calculation shows that
leads to an ordinary representation of the double cover.
Coherent states can again be defined as the orbit of E0 under the metaplectic group.
For w complex, set
F_w(z) = e^{w z²/2}.
Then F_w lies in ℱ if and only if |w| < 1. In particular F0 = 1 = E0. Moreover,
where
Similarly the functions zFw lie in and form an orbit of the metaplectic group:
Since (Fw, E0) = 1, the matrix coefficient of the function E0 = 1 is given by
Disk model
The projective representation of SL(2,R) on L2(R), or equivalently on ℱ, breaks up as a direct sum of two irreducible representations, corresponding to even and odd functions of x or z. The two representations can be realized on Hilbert spaces of holomorphic functions on the unit disk; or, using the Cayley transform, on the upper half plane.
The even functions correspond to holomorphic functions F+ for which
is finite; and the odd functions to holomorphic functions F– for which
is finite. The polarized forms of these expressions define the inner products.
The action of the metaplectic group is given by
Irreducibility of these representations is established in a standard way. Each representation breaks up as a direct sum of one dimensional eigenspaces of the rotation group each of which is generated by a C∞ vector for the whole group. It follows that any closed invariant subspace is generated by the algebraic direct sum of eigenspaces it contains and that this sum is invariant under the infinitesimal action of the Lie algebra . On the other hand, that action is irreducible.
The isomorphism with even and odd functions in ℱ can be proved using the Gelfand–Naimark construction, since the matrix coefficients associated to 1 and z in the corresponding representations are proportional. Another method starts from maps U±
from the even and odd parts to functions on the unit disk. These maps intertwine the actions of the metaplectic group given above and send zn to a multiple of wn. Stipulating that U± should be unitary determines the inner products on functions on the disk, which can be expressed in the form above.
Although in these representations the operator L0 has positive spectrum—the feature that distinguishes the holomorphic discrete series representations of SU(1,1)—the representations do not lie in the discrete series of the metaplectic group. Indeed, it has been noted that the matrix coefficients are not square integrable, although their third power is.
Harmonic oscillator and Hermite functions
Consider the following subspace of L2(R): the linear span 𝒟 of the functions x^n e^{−x²/2}, n ≥ 0. The operators
X = d/dx + x, Y = −d/dx + x
act on 𝒟. X is called the annihilation operator and Y the creation operator. They satisfy
XY − YX = 2I, D := −d²/dx² + x² = (1/2)(XY + YX), XY = D + I, YX = D − I.
Define the functions
Fn(x) = Y^n e^{−x²/2}.
We claim they are the eigenfunctions of the harmonic oscillator, D. To prove this we use the commutation relations above: X e^{−x²/2} = 0 and DY = Y(D + 2I), so that
D Fn = (2n + 1) Fn.
Next we have:
(Fn, Fn) = 2^n n! √π.
This is known for n = 0 and the commutation relation above yields
(Fn, Fn) = (Y Fn−1, Y Fn−1) = (XY Fn−1, Fn−1) = 2n (Fn−1, Fn−1).
The nth Hermite function is defined by
Hn(x) = (2^n n! √π)^{−1/2} Fn(x) = (2^n n! √π)^{−1/2} pn(x) e^{−x²/2};
pn is called the nth Hermite polynomial.
The operators P, Q, or equivalently A, A* (with A = X/√2, A* = Y/√2), act irreducibly on the closure of 𝒟 by a standard argument.
Indeed, under the unitary isomorphism with holomorphic Fock space, 𝒟 can be identified with C[z], the space of polynomials in z, with
A f = df/dz, A* f = z f.
If a subspace invariant under A and A* contains a non-zero polynomial p(z), then, applying a power of A, it contains a non-zero constant; applying then powers of A*, it contains all zn.
Under the isomorphism Fn is sent to a multiple of zn and the operator D is given by
D = 2 z (d/dz) + 1.
Let
so that
In the terminology of physics A, A* give a single boson and L0 is the energy operator. It is diagonalizable with eigenvalues 1/2, 1, 3/2, ...., each of multiplicity one. Such a representation is called a positive energy representation.
Moreover,
so that the Lie bracket with L0 defines a derivation of the Lie algebra spanned by A, A* and I. Adjoining L0 gives the semidirect product. The infinitesimal version of the Stone–von Neumann theorem states that the above representation on C[z] is the unique irreducible positive energy representation of this Lie algebra with L0 = A*A + 1/2. For A lowers energy and A* raises energy. So any lowest energy vector v is annihilated by A and the module is exhausted by the powers of A* applied to v. It is thus a non-zero quotient of C[z] and hence can be identified with it by irreducibility.
Let
so that
These operators satisfy:
and act by derivations on the Lie algebra spanned by A, A* and I.
They are the infinitesimal operators corresponding to the metaplectic representation of SU(1,1).
The functions Fn = Y^n e^{−x²/2} defined above have leading term a positive multiple of x^n e^{−x²/2}.
It follows that the Hermite functions are the orthonormal basis obtained by applying the Gram-Schmidt orthonormalization process to the basis x^n e^{−x²/2} of 𝒟.
The completeness of the Hermite functions follows from the fact that the Bargmann transform is unitary and carries the orthonormal basis en(z) of holomorphic Fock space onto the Hn(x).
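Orthonormality can be spot-checked numerically from the standard three-term recurrence for the normalized Hermite functions; a minimal sketch assuming numpy (the function name hermite_functions is illustrative):

```python
# Normalized Hermite functions via H_{n+1} = sqrt(2/(n+1)) x H_n - sqrt(n/(n+1)) H_{n-1}.
import numpy as np

x = np.linspace(-12, 12, 4001)
dx = x[1] - x[0]

def hermite_functions(nmax, x):
    """H_0 .. H_nmax with H_n(x) = (2^n n! sqrt(pi))^(-1/2) p_n(x) exp(-x^2/2)."""
    H = [np.pi**-0.25 * np.exp(-x**2 / 2)]                # H_0
    if nmax >= 1:
        H.append(np.sqrt(2.0) * x * H[0])                 # H_1
    for n in range(1, nmax):
        H.append(np.sqrt(2.0/(n+1)) * x * H[n] - np.sqrt(n/(n+1)) * H[n-1])
    return H

H = hermite_functions(6, x)
gram = np.array([[np.sum(Hi * Hj) * dx for Hj in H] for Hi in H])
print(np.allclose(gram, np.eye(7), atol=1e-8))            # True: orthonormal
```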
The heat operator for the harmonic oscillator is the operator on L2(R) defined as the diagonal operator
e^{−tD} Hn = e^{−(2n+1)t} Hn, t > 0.
It corresponds to the heat kernel given by Mehler's formula:
Kt(x, y) = Σn e^{−(2n+1)t} Hn(x) Hn(y) = (2π sinh 2t)^{−1/2} exp( (2xy − (x² + y²) cosh 2t) / (2 sinh 2t) ).
This follows from the formula
To prove this formula note that if s = σ2, then by Taylor's formula
Thus Fσ,x lies in holomorphic Fock space and
an inner product that can be computed directly.
Mehler's formula can also be established directly, with a classical argument proving that
∫ Kt(x, y) f(y) dy
tends to f in L2(R) as t decreases to 0. This shows the completeness of the Hermite functions and also, since the Hermite functions are eigenfunctions of the Fourier transform,
can be used to derive the properties of the Fourier transform.
There are other elementary methods for proving the completeness of the Hermite functions, for example using Fourier series.
Sobolev spaces
The Sobolev spaces Hs, sometimes called Hermite-Sobolev spaces, are defined to be the completions of 𝒮 with respect to the norms
‖f‖_s² = Σn (2n + 1)^s |a_n|²,
where
f = Σ a_n H_n
is the expansion of f in Hermite functions.
Thus
‖f‖_s = ‖D^{s/2} f‖, and H0 = L2(R).
The Sobolev spaces are Hilbert spaces. Moreover, Hs and H–s are in duality under the pairing
⟨f, g⟩ = Σ a_n \overline{b_n}, f = Σ a_n H_n, g = Σ b_n H_n.
For s ≥ 0,
for some positive constant Cs.
Indeed, such an inequality can be checked for creation and annihilation operators acting on Hermite functions Hn and this implies the general inequality.
It follows for arbitrary s by duality.
Consequently, for a quadratic polynomial R in P and Q
The Sobolev inequality holds for f in Hs with s > 1/2:
for any k ≥ 0.
Indeed, the result for general k follows from the case k = 0 applied to Qkf.
For k = 0 the Fourier inversion formula
implies
If s < t, the diagonal form of D, shows that the inclusion of Ht in Hs is compact (Rellich's lemma).
It follows from Sobolev's inequality that the intersection of the spaces Hs is . Functions in are characterized by the rapid decay of their Hermite coefficients an.
Standard arguments show that each Sobolev space is invariant under the operators W(z) and the metaplectic group. Indeed, it is enough to check invariance when g is sufficiently close to the identity. In that case
with D + A an isomorphism from to
It follows that
If then
where the derivatives lie in
Similarly the partial derivatives of total degree k of U(s)V(t)f lie in Sobolev spaces of order s–k/2.
Consequently, a monomial in P and Q of order 2k applied to f lies in Hs–k and can be expressed as a linear combination of partial derivatives of U(s)V(t)f of degree ≤ 2k evaluated at 0.
Smooth vectors
The smooth vectors for the Weyl commutation relations are those u in L2(R) such that the map
z ↦ W(z)u
of C = R² into L2(R) is smooth. By the uniform boundedness theorem, this is equivalent to the requirement that each matrix coefficient (W(z)u,v) be smooth.
A vector is smooth if and only if it lies in 𝒮. Sufficiency is clear. For necessity, smoothness implies that the partial derivatives of W(z)u lie in L2(R) and hence also Dku for all positive k. Hence u lies in the intersection of the Hk, so in 𝒮.
It follows that smooth vectors are also smooth for the metaplectic group.
Moreover, a vector is in 𝒮 if and only if it is a smooth vector for the rotation subgroup of SU(1,1).
Analytic vectors
If Π(t) is a one parameter unitary group and, for f a Schwartz function on R,
Π(f)ξ = ∫ f(t) Π(t)ξ dt,
then the vectors Π(f)ξ form a dense set of smooth vectors for Π.
In fact, taking the Gaussians
f_ε(t) = (2πε)^{−1/2} e^{−t²/(2ε)},
the vectors v = Π(f_ε)ξ converge to ξ as ε decreases to 0 and
Π(t)v = Π(f_ε(· − t))ξ
is an analytic function of t that extends to an entire function on C.
The vector is called an entire vector for Π.
The wave operator associated to the harmonic oscillator is defined by
The operator is diagonal with the Hermite functions Hn as eigenfunctions:
Since it commutes with D, it preserves the Sobolev spaces.
The analytic vectors constructed above can be rewritten in terms of the Hermite semigroup as
v = e^{−tD} u, t > 0.
The fact that v is an entire vector for Π is equivalent to the summability condition
Σn r^n ‖D^n v‖ / n! < ∞
for all r > 0.
Any such vector is also an entire vector for U(s)V(t), that is, the map
(s, t) ↦ U(s)V(t)v
defined on R2 extends to an analytic map on C2.
This reduces to the power series estimate
So these form a dense set of entire vectors for U(s)V(t); this can also be checked directly using Mehler's formula.
The spaces of smooth and entire vectors for U(s)V(t) are each by definition invariant under the action of the metaplectic group as well as the Hermite semigroup.
Let
be the analytic continuation of the operators W(x,y) from R2 to C2 such that
Then W leaves the space of entire vectors invariant and satisfies
Moreover, for g in SL(2,R)
π(g) W(u) π(g)* = W(g·u),
using the natural action of SL(2,R) on C2.
Formally
Oscillator semigroup
There is a natural double cover of the Olshanski semigroup H, and its closure that extends the double cover of SU(1,1) corresponding to the metaplectic group. It is given by pairs (g, γ) where g is an element of H or its closure
and γ is a square root of a.
Such a choice determines a unique branch of
for |z| < 1.
The unitary operators π(g) for g in SL(2,R) satisfy
π(g) W(u) π(g)* = W(g·u)
for u in C2.
An element g of the complexification SL(2,C) is said to be implementable if there is a bounded operator T such that it and its adjoint leave the space of entire vectors for W invariant, both have dense images and satisfy the covariance relations
T W(u) = W(g·u) T (together with the corresponding relation for T*)
for u in C2. The implementing operator T is uniquely determined up to multiplication by a non-zero scalar.
The implementable elements form a semigroup, containing SL(2,R). Since the representation has positive energy, the bounded compact self-adjoint operators
for t > 0 implement the group elements in exp C1.
It follows that all elements of the Olshanski semigroup and its closure are implemented.
Maximality of the Olshanski semigroup implies that no other elements of SL(2,C) are implemented. Indeed, otherwise every element of SL(2,C) would be implemented by a bounded operator, which would contradict the non-invertibility of the operators S0(t) for t > 0.
In the Schrödinger representation the operators S0(t) for t > 0 are given by Mehler's formula. They are contraction operators, positive and in every Schatten class. Moreover, they leave invariant each of the Sobolev spaces. The same formula is true for complex t with Re t > 0 by analytic continuation.
It can be seen directly in the Fock model that the implementing operators can be chosen so that they define an ordinary representation of the double cover of H constructed above. The corresponding semigroup of contraction operators is called the oscillator semigroup. The extended oscillator semigroup is obtained by taking the semidirect product with the operators W(u). These operators lie in every Schatten class and leave invariant the Sobolev spaces and the space of entire vectors for W.
The decomposition
H = G · exp(C) · G
corresponds at the operator level to the polar decomposition of bounded operators.
Moreover, since any matrix in H is conjugate to a diagonal matrix by elements in H or H−1, every operator in the oscillator semigroup is quasi-similar to an operator S0(t) with Re t > 0. In particular it has the same spectrum, consisting of simple eigenvalues.
In the Fock model, if the element g of the Olshanski semigroup H corresponds to the matrix
the corresponding operator is given by
where
and γ is a square root of a. Operators π(g,γ) for g in the semigroup H are exactly the Hilbert–Schmidt operators corresponding to Gaussian kernels of the form
F(z, w) = C exp( (1/2)(p z² + 2 q z w + r w²) )
for which the complex symmetric matrix
(p, q; q, r)
has operator norm strictly less than one.
Operators in the extended oscillator semigroup are given by similar expressions with additional linear terms in z and w appearing in the exponential.
In the disk model for the two irreducible components of the metaplectic representation, the corresponding operators are given by
It is also possible to give an explicit formula for the contraction operators corresponding to g in H in the Schrödinger representation. It was by this formula that Howe introduced the oscillator semigroup as an explicit family of operators on L2(R).
In fact consider the Siegel upper half plane consisting of symmetric complex 2x2 matrices with positive definite real part:
and define the kernel
with corresponding operator
for f in L2(R).
Then direct computation gives
where
Moreover,
where
By Mehler's formula for
with
The oscillator semigroup is obtained by taking only matrices with B ≠ 0. From the above, this condition is closed under composition.
A normalized operator can be defined by
The choice of a square root determines a double cover.
In this case SZ corresponds to the element
of the Olshanski semigroup H.
Moreover, SZ is a strict contraction:
‖SZ‖ < 1.
It follows also that
Weyl calculus
For a function a(x,y) on R2 = C, let
So
where
Defining in general
the product of two such operators is given by the formula
where the twisted convolution or Moyal product is given by
The smoothing operators correspond to W(F) or ψ(a) with F or a Schwartz functions on R2. The corresponding operators T have kernels that are Schwartz functions. They carry each Sobolev space into the Schwartz functions. Moreover, every bounded operator on L2 (R) having this property has this form.
For the operators ψ(a) the Moyal product translates into the Weyl symbolic calculus. Indeed, if the Fourier transforms of a and b have compact support then
where
This follows because in this case b must extend to an entire function on C2 by the Paley-Wiener theorem.
This calculus can be extended to a broad class of symbols, but the simplest corresponds to convolution by a class of functions or distributions that all have the form T + S, where T is a distribution of compact support with singular support concentrated at 0 and where S is a Schwartz function. This class contains the operators P, Q as well as D1/2 and D−1/2 where D is the harmonic oscillator.
The mth order symbols Sm are given by smooth functions a satisfying
for all α and Ψm consists of all operators ψ(a) for such a.
If a is in Sm and χ is a smooth function of compact support equal to 1 near 0, then
with T and S as above.
These operators preserve the Schwartz functions and satisfy:
The operators P and Q lie in Ψ1 and D lies in Ψ2.
Properties:
A zeroth order symbol defines a bounded operator on L2(R).
D−1 lies in Ψ−2
If R = R* is smoothing, then D + R has a complete set of eigenvectors fn in Schwartz space with (D + R)fn = λnfn and λn tending to ∞ as n tends to ∞.
D1/2 lies in Ψ1 and hence D−1/2 lies in Ψ−1, since D−1/2 = D1/2 ·D−1
Ψ−1 consists of compact operators, Ψ−s consists of trace-class operators for s > 1 and Ψk carries Hm into Hm–k.
The proof of boundedness is particularly simple: if
then
where the bracketed operator has norm less than 1. So if F is supported in |z| ≤ R, then
The property of D−1 is proved by taking
with
Then R = I – DS lies in Ψ−1, so that
lies in Ψ−2 and T = DA – I is smoothing. Hence
lies in Ψ−2 since D−1 T is smoothing.
The property for D1/2 is established similarly by constructing B in Ψ1/2 with real symbol such that D – B4 is a smoothing operator. Using the holomorphic functional calculus it can be checked that D1/2 – B2 is a smoothing operator.
The boundedness result above was used to establish the more general inequality of Alberto Calderón and Rémi Vaillancourt for pseudodifferential operators. An alternative proof that applies more generally to Fourier integral operators was given later, showing that such operators can be expressed as integrals over the oscillator semigroup and then estimated using the Cotlar–Stein lemma.
Applications and generalizations
Theory for finite abelian groups
André Weil noted that the formalism of the Stone–von Neumann theorem and the oscillator representation of the symplectic group extends from the real numbers R to any locally compact abelian group. A particularly simple example is provided by finite abelian groups, where the proofs are either elementary or simplifications of the proofs for R.
Let A be a finite abelian group, written additively, and let Q be a non-degenerate quadratic form on A with values in T. Thus
is a symmetric bilinear form on A that is non-degenerate, so permits an identification between A and its dual group A* = Hom (A, T).
Let V be the space of complex-valued functions on A with inner product
Define operators on V by
for x, y in A. Then U(x) and V(y) are unitary representations of A on V satisfying the commutation relations
This action is irreducible and is the unique such irreducible representation of these relations.
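A concrete sketch for A = Z/NZ: with the illustrative choices U(x)f(t) = f(t + x) and V(y)f(t) = e2πity/N f(t) (assumptions standing in for the lost displayed definitions), the commutation relations can be verified numerically.

```python
import numpy as np

N = 5
omega = np.exp(2j * np.pi / N)

def U(x):
    """Translation on functions Z/NZ -> C: (U(x)f)(t) = f(t + x)."""
    M = np.zeros((N, N), dtype=complex)
    for t in range(N):
        M[t, (t + x) % N] = 1.0
    return M

def V(y):
    """Modulation: (V(y)f)(t) = omega^(t*y) * f(t)."""
    return np.diag([omega ** (t * y) for t in range(N)])

# Weyl-type commutation relation: U(x) V(y) = omega^(x*y) V(y) U(x)
for x in range(N):
    for y in range(N):
        assert np.allclose(U(x) @ V(y), omega ** (x * y) * V(y) @ U(x))
print(f"Weyl relations verified on C[Z/{N}Z]")
```

Irreducibility is visible in the same picture: any matrix commuting with all V(y) is diagonal, and any diagonal matrix commuting with all U(x) is scalar.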
Let G = A × A and for z = (x, y) in G set
Then
where
a non-degenerate alternating bilinear form on G. The uniqueness result above implies that if W(z) is another family of unitaries giving a projective representation of G such that
then there is a unitary U, unique up to a phase, such that
for some λ(z) in T.
In particular if g is an automorphism of G preserving B, then there is an essentially unique unitary π(g) such that
The group of all such automorphisms is called the symplectic group for B, and π gives a projective representation of this group on V.
The group SL(2,Z) naturally acts on G = A × A by symplectic automorphisms. It is generated by the matrices
If Z = –I, then Z is central and
These automorphisms of G are implemented on V by the following operators:
It follows that
where μ lies in T. Direct calculation shows that μ is given by the Gauss sum
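The displayed Gauss sum was lost in extraction. As a hedged numerical illustration, the classical quadratic Gauss sums g(N) = Σn e2πin²/N (one common normalization; their exact relation to the constant μ above is an assumption) satisfy Gauss's four-case evaluation:

```python
import numpy as np

def gauss_sum(N):
    """Quadratic Gauss sum: sum over n = 0..N-1 of exp(2*pi*i*n^2/N)."""
    n = np.arange(N)
    return np.exp(2j * np.pi * n**2 / N).sum()

def predicted(N):
    # Gauss: (1+i)sqrt(N), sqrt(N), 0, i*sqrt(N) for N = 0, 1, 2, 3 mod 4
    return {0: 1 + 1j, 1: 1, 2: 0, 3: 1j}[N % 4] * np.sqrt(N)

for N in range(1, 40):
    assert abs(gauss_sum(N) - predicted(N)) < 1e-8, N
print("Gauss sum evaluation verified for N = 1..39")
```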
Transformation laws for theta functions
The metaplectic group was defined as the group of pairs (g, γ), with g in SL(2,R) and γ a holomorphic function on H satisfying γ(τ)2 = cτ + d.
The coherent state fτ(x) = exp(iτx2/2), for τ in H,
defines a holomorphic map of H into L2(R) satisfying
This is in fact a holomorphic map into each Sobolev space Hk and hence also into their intersection, the space of smooth vectors.
On the other hand, in the space of distributions (in fact in H–1) there is a finite-dimensional space of distributions invariant under SL(2,Z) and isomorphic to the N-dimensional oscillator representation on V, where A = Z/NZ.
In fact let m > 0 and set N = 2m. Let
The operators U(x), V(y) with x and y in M all commute and have a finite-dimensional subspace of fixed vectors formed by the distributions
with b in M1, where
The sum defining Ψb converges in H–1 and depends only on the class of b in M1/M. On the other hand, the operators U(x) and V(y) with x, y in M1 commute with all the corresponding operators for M. So M1 leaves the subspace V0 spanned by the Ψb invariant. Hence the group A = M1/M acts on V0. This action can immediately be identified with the action on V for the N-dimensional oscillator representation associated with A, since
Since the operators π(R) and π(S) normalise the two sets of operators U and V corresponding to M and M1, it follows that they leave V0 invariant and on V0 must be constant multiples of the operators associated with the oscillator representation of A. In fact they coincide. For R this is immediate from the definitions, which show that
For S it follows from the Poisson summation formula and the commutation properties with the operators U(x) and V(y). The Poisson summation formula is proved classically as follows.
For a > 0 and f in Schwartz space, let
F is a smooth function on R with period a:
The theory of Fourier series shows that
with the sum absolutely convergent and the Fourier coefficients given by
Hence
the usual Poisson summation formula.
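A quick numerical check of the formula with a Gaussian, under the convention f̂(ξ) = ∫ f(x)e−iξx dx (an assumption; the article's own convention was lost with the displays):

```python
import numpy as np

a = 0.7
n = np.arange(-50, 51)

def f(x):
    return np.exp(-x**2)                        # f(x) = exp(-x^2)

def fhat(xi):
    return np.sqrt(np.pi) * np.exp(-xi**2 / 4)  # its Fourier transform

lhs = f(a * n).sum()                     # sum of f over the lattice aZ
rhs = fhat(2 * np.pi * n / a).sum() / a  # matching sum over the dual lattice
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)
```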
This formula shows that S acts as follows
and so agrees exactly with the formula for the oscillator representation on A.
Identifying A with Z/2mZ, with
assigned to an integer n modulo 2m, the theta functions can be defined directly as matrix coefficients:
For τ in H and z in C set
so that |q| < 1. The theta functions agree with the standard classical formulas for the Jacobi-Riemann theta functions:
By definition they define holomorphic functions on H × C. The covariance properties of the function fτ and the distribution Ψb lead immediately to the following transformation laws:
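The displayed transformation laws were lost in extraction. As a numerical stand-in, the classical special case θ3(−1/τ) = (−iτ)1/2 θ3(τ) of the Jacobi theta function at z = 0 can be verified directly:

```python
import numpy as np

def theta3(tau, nmax=60):
    """Jacobi theta: sum over n in Z of exp(pi*i*n^2*tau), for Im(tau) > 0."""
    n = np.arange(-nmax, nmax + 1)
    return np.exp(1j * np.pi * n**2 * tau).sum()

tau = 0.3 + 0.8j
lhs = theta3(-1 / tau)
rhs = np.sqrt(-1j * tau) * theta3(tau)   # principal square root
assert abs(lhs - rhs) < 1e-10
print(lhs, rhs)
```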
Derivation of law of quadratic reciprocity
Because the operators π(S), π(R) and π(J) on L2(R) restrict to the corresponding operators on V0 for any choice of m, signs of cocycles can be determined by taking m = 1. In this case the representation is 2-dimensional and the relation
on L2(R) can be checked directly on V0.
But in this case
The relation can also be checked directly by applying both sides to the ground state exp(−x2/2).
Consequently, it follows that for m ≥ 1 the Gauss sum can be evaluated:
For m odd, define
If m is odd, then, splitting the previous sum up into two parts, it follows that G(1,m) equals m1/2 if m is congruent to 1 mod 4 and equals i m1/2 otherwise. If p is an odd prime and c is not divisible by p, this implies
where (c/p) is the Legendre symbol, equal to 1 if c is a square mod p and –1 otherwise. Moreover, if p and q are distinct odd primes, then
From the formula for G(1,p) and this relation, the law of quadratic reciprocity follows: for distinct odd primes p and q, (p/q)(q/p) = (−1)(p−1)(q−1)/4.
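The reciprocity law can also be spot-checked computationally with Euler's criterion for the Legendre symbol, independently of the Gauss-sum argument:

```python
from sympy import primerange

def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion, p an odd prime."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

primes = list(primerange(3, 60))
for i, p in enumerate(primes):
    for q in primes[i + 1:]:
        sign = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
        assert legendre(p, q) * legendre(q, p) == sign
print("Quadratic reciprocity verified for odd primes below 60")
```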
Theory in higher dimensions
The theory of the oscillator representation can be extended from R to Rn with the group SL(2,R) replaced by the symplectic group Sp(2n,R). The results can be proved either by straightforward generalisations from the one-dimensional case or by using the fact that the n-dimensional case is a tensor product of n one-dimensional cases, reflecting the decomposition L2(Rn) = L2(R) ⊗ ⋯ ⊗ L2(R) (n factors).
Let 𝒮(Rn) be the space of Schwartz functions on Rn, a dense subspace of L2(Rn). For s, t in Rn, define U(s) and V(t) on 𝒮(Rn) and L2(Rn) by
From the definition U and V satisfy the Weyl commutation relation
As before this is called the Schrödinger representation.
The Fourier transform is defined on 𝒮(Rn) by
The Fourier inversion formula
shows that the Fourier transform is an isomorphism of 𝒮(Rn) onto itself extending to a unitary mapping of L2(Rn) onto itself (Plancherel's theorem).
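A one-dimensional numerical sanity check (with the unitary normalization (2π)−1/2 ∫ f(x)e−iξx dx, an assumption in place of the lost display): the Gaussian e−x²/2 is fixed by the Fourier transform and its L2 norm is preserved, illustrating inversion and Plancherel.

```python
import numpy as np

x = np.linspace(-15, 15, 3001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)   # Gaussian: a fixed point of the unitary Fourier transform

# unitary Fourier transform by direct quadrature
F = np.array([(f * np.exp(-1j * k * x)).sum() * dx for k in x]) / np.sqrt(2 * np.pi)

print(np.max(np.abs(F - f)))   # tiny: the Gaussian is reproduced
print((np.abs(f)**2).sum() * dx, (np.abs(F)**2).sum() * dx)  # equal norms (Plancherel)
```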
The Stone–von Neumann theorem asserts that the Schrödinger representation is irreducible and is the unique irreducible representation of the commutation relations: any other representation is a direct sum of copies of this representation.
For U and V satisfying the Weyl commutation relations, define
Then
so that W defines a projective unitary representation of R2n with cocycle given by
where B is the symplectic form on R2n given by
The symplectic group Sp(2n,R) is defined to be the group of automorphisms g of R2n preserving the form B. It follows from the Stone–von Neumann theorem that for each such g there is a unitary π(g) on L2(Rn) satisfying the covariance relation
By Schur's lemma the unitary π(g) is unique up to multiplication by a scalar ζ with |ζ| = 1, so that π defines a projective unitary representation of Sp(2n,R). Representatives can be chosen for π(g), unique up to a sign, which show that the 2-cocycle for the projective representation of Sp(2n,R) takes values ±1. In fact elements of the group Sp(2n,R) are given by 2n × 2n real matrices g satisfying
where
Sp(2n,R) is generated by matrices of the form
and the operators
satisfy the covariance relations above. This gives an ordinary unitary representation of the metaplectic group, a double cover of Sp(2n,R). Indeed, Sp(2n,R) acts by Möbius transformations on the generalised Siegel upper half plane Hn, consisting of symmetric complex n × n matrices Z with positive definite imaginary part, by g(Z) = (AZ + B)(CZ + D)−1
if g is the matrix with block form (A B; C D).
The function m(g, Z) = det(CZ + D)
satisfies the 1-cocycle relation m(gh, Z) = m(g, h(Z)) m(h, Z).
The metaplectic group Mp(2n,R) is defined as the group of pairs (g, γ), with g in Sp(2n,R) and γ a holomorphic function on Hn satisfying γ(Z)2 = m(g, Z) = det(CZ + D),
and is a connected double covering group of Sp(2n,R).
If Z lies in Hn, then it defines a coherent state fZ(x) = exp(i(Zx,x)/2)
in L2, lying in a single orbit of Sp(2n,R) generated by the Gaussian fiI(x) = exp(−|x|2/2).
If g lies in Mp(2n,R) then
defines an ordinary unitary representation of the metaplectic group, from which it follows that the cocycle on Sp(2n,R) takes only values ±1.
Holomorphic Fock space is the Hilbert space of holomorphic functions f(z) on Cn with finite norm
inner product
and orthonormal basis
for α a multi-index. For f in holomorphic Fock space and z in Cn, the operators
define an irreducible unitary representation of the Weyl commutation relations. By the Stone–von Neumann theorem there is a unitary operator from L2(Rn) onto holomorphic Fock space intertwining the two representations. It is given by the Bargmann transform
where
Its adjoint is given by the formula:
Sobolev spaces, smooth and analytic vectors can be defined as in the one-dimensional case using the sum of n copies of the harmonic oscillator
The Weyl calculus similarly extends to the n-dimensional case.
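A sketch of the one-dimensional Bargmann transform in one common normalization (the exact kernel above was lost in extraction, so the constants here are assumptions): Bf(z) = π−1/4 ∫ exp(−z2/2 + √2 zx − x2/2) f(x) dx, which sends the ground state to the constant function 1 and the first excited state to z.

```python
import numpy as np

x = np.linspace(-12, 12, 2401)
dx = x[1] - x[0]

def bargmann(f_vals, z):
    """Bf(z) = pi^(-1/4) * integral of exp(-z^2/2 + sqrt(2)*z*x - x^2/2) f(x) dx."""
    kernel = np.exp(-z**2 / 2 + np.sqrt(2) * z * x - x**2 / 2)
    return np.pi ** -0.25 * (kernel * f_vals).sum() * dx

f0 = np.pi ** -0.25 * np.exp(-x**2 / 2)                    # ground state
f1 = np.pi ** -0.25 * np.sqrt(2) * x * np.exp(-x**2 / 2)   # first excited state

for z in [0.3, -1.2 + 0.5j, 2j]:
    print(z, bargmann(f0, z), bargmann(f1, z))   # ~1 and ~z respectively
```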
The complexification Sp(2n,C) of the symplectic group is defined by the same relation, but allowing the matrices A, B, C and D to be complex. The subsemigroup of group elements that take the Siegel upper half plane into itself has a natural double cover. The representations of Mp(2n,R) on L2(Rn) and on holomorphic Fock space extend naturally to a representation of this semigroup by contraction operators defined by kernels, which generalise the one-dimensional case (taking determinants where necessary). The action of Mp(2n,R) on coherent states applies equally well to operators in this larger semigroup.
As in the 1-dimensional case, where the group SL(2,R) has a counterpart SU(1,1) through the Cayley transform with the upper half plane replaced by the unit disc, the symplectic group has a complex counterpart. Indeed, if C is the unitary matrix
then C Sp(2n) C−1 is the group of all matrices
such that
or equivalently
where
The Siegel generalized disk Dn is defined as the set of complex symmetric n × n matrices W with operator norm less than 1.
It consists precisely of Cayley transforms of points Z in the Siegel generalized upper half plane:
Elements g act on Dn
and, as in the one-dimensional case, this action is transitive. The stabilizer subgroup of 0 consists of matrices with A unitary and B = 0.
For W in Dn the metaplectic coherent states in holomorphic Fock space are defined by
The inner product of two such states is given by
Moreover, the metaplectic representation π satisfies
The closed linear span of these states gives the even part of holomorphic Fock space. The embedding of Sp(2n) in Sp(2(n+1)) and the compatible identification
lead to an action on the whole of holomorphic Fock space. It can be verified directly that it is compatible with the action of the operators W(z).
Since the complex semigroup has the symplectic group as its Shilov boundary, the fact that this representation has a well-defined contractive extension to the semigroup follows from the maximum modulus principle and the fact that the semigroup operators are closed under adjoints. Indeed, it suffices to check, for two such operators S, T and vectors vi proportional to metaplectic coherent states, that
which follows because the sum depends holomorphically on S and T, which are unitary on the boundary.
Index theorems for Toeplitz operators
Let S denote the unit sphere in Cn and define the Hardy space H2(S) to be the closure in L2(S) of the restrictions of polynomials in the coordinates z1, ..., zn. Let P be the projection onto Hardy space. It is known that if m(f) denotes multiplication by a continuous function f on S, then the commutator [P,m(f)] is compact. Consequently, defining the Toeplitz operator by
on Hardy space, it follows that T(fg) – T(f)T(g) is compact for continuous f and g. The same holds if f and g are matrix-valued functions (so that the corresponding Toeplitz operators are matrices of operators on H2(S)). In particular if f is a function on S taking values in invertible matrices, then
are compact and hence T(f) is a Fredholm operator with an index defined as
The index has been computed using the methods of K-theory and coincides up to a sign with the degree of f as a continuous mapping from S into the general linear group.
An analytic way of establishing this index theorem was given subsequently, and later simplified by Howe. The proof relies on the fact that if f is smooth then the index is given by the formula of McKean and Singer:
Howe noticed that there was a natural unitary isomorphism between H2(S) and L2(Rn) carrying the Toeplitz operators
onto the operators
These are examples of zeroth order operators constructed within the Weyl calculus. The traces in the McKean-Singer formula can be computed directly using the Weyl calculus, leading to another proof of the index theorem. This method of proving index theorems was generalised by Alain Connes within the framework of cyclic cohomology.
Theory in infinite dimensions
The theory of the oscillator representation in infinite dimensions is due to Irving Segal and David Shale. Graeme Segal used it to give a mathematically rigorous construction of projective representations of loop groups and the group of diffeomorphisms of the circle. At an infinitesimal level the construction of the representations of the Lie algebras, in this case the affine Kac–Moody algebra and the Virasoro algebra, was already known to physicists, through dual resonance theory and later string theory. Only the simplest case will be considered here, involving the loop group LU(1) of smooth maps of the circle into U(1) = T. The oscillator semigroup, developed independently by Neretin and Segal, allows contraction operators to be defined for the semigroup of univalent holomorphic maps of the unit disc into itself, extending the unitary operators corresponding to diffeomorphisms of the circle. When applied to the subgroup SU(1,1) of the diffeomorphism group, this gives a generalization of the oscillator representation on L2(R) and its extension to the Olshanskii semigroup.
The representation of the commutation relations on Fock space is generalized to infinite dimensions by replacing Cn (or its dual space) by an arbitrary complex Hilbert space H. The symmetric group Sk acts on H⊗k. Sk(H) is defined to be the fixed point subspace of Sk and the symmetric algebra is the algebraic direct sum
It has a natural inner product inherited from H⊗k:
Taking the components Sk(H) to be mutually orthogonal, the symmetric Fock space S(H) is defined to be the Hilbert space completion of this direct sum.
For ξ in H define the coherent state eξ by
It follows that their linear span is dense in S(H), that the coherent states corresponding to n distinct vectors are linearly independent and that
When H is finite-dimensional, S(H) can naturally be identified with holomorphic Fock space for H*, since in the standard way Sk(H) are just homogeneous polynomials of degree k on H* and the inner products match up. Moreover, S(H) has functorial properties. Most importantly
A similar result holds for finite orthogonal direct sums and extends to infinite orthogonal direct sums, using von Neumann's definition of the infinite tensor product with 1 the reference unit vector in S0(Hi). Any contraction operator between Hilbert spaces induces a contraction operator between the corresponding symmetric Fock spaces in a functorial way.
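A minimal sketch for H = C, truncating S(H) at a finite degree: in the normalization where the coherent state eξ has components ξk/√k! in an orthonormal basis (an assumption, chosen so that ⟨eξ, eη⟩ = exp(ξ̄η)), the Gram matrix of distinct coherent states is nonsingular, illustrating their linear independence.

```python
import numpy as np
from math import factorial

K = 60  # truncation degree of the symmetric Fock space over H = C

def coherent(xi):
    """Components of e_xi in an orthonormal basis: xi^k / sqrt(k!)."""
    return np.array([xi**k / np.sqrt(factorial(k)) for k in range(K + 1)])

xis = [0.2, -0.5 + 0.4j, 1.1j]
vecs = [coherent(xi) for xi in xis]

gram = np.array([[np.vdot(u, v) for v in vecs] for u in vecs])
expected = np.array([[np.exp(np.conj(a) * b) for b in xis] for a in xis])

print(np.max(np.abs(gram - expected)))  # ~0: <e_xi, e_eta> = exp(conj(xi) * eta)
print(abs(np.linalg.det(gram)))         # nonzero: the three states are independent
```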
A unitary operator on S(H) is uniquely determined by its values on coherent states. Moreover, for any assignment vξ such that
there is a unique unitary operator U on S(H) such that
As in the finite-dimensional case, this allows the unitary operators W(x) to be defined for x in H:
It follows immediately from the finite-dimensional case that these operators are unitary and satisfy
In particular the Weyl commutation relations are satisfied:
Taking an orthonormal basis en of H, S(H) can be written as an infinite tensor product of the S(C en). The irreducibility of W on each of these spaces implies the irreducibility of W on the whole of S(H). W is called the complex wave representation.
To define the symplectic group in infinite dimensions let HR be the underlying real vector space of H with the symplectic form
and real inner product
The complex structure is then defined by the orthogonal operator
so that
A bounded invertible real-linear operator T on HR lies in the symplectic group if it and its inverse preserve B. This is equivalent to the conditions:
The operator T is said to be implementable on S(H) provided there is a unitary π(T) such that
The implementable operators form a subgroup of the symplectic group, the restricted symplectic group. By Schur's lemma, π(T) is uniquely determined up to a scalar in T, so π gives a projective unitary representation of this subgroup.
The Segal-Shale quantization criterion states that T is implementable, i.e. lies in the restricted symplectic group, if and only if the commutator TJ – JT is a Hilbert–Schmidt operator.
Unlike the finite-dimensional case where a lifting π could be chosen so that it was multiplicative up to a sign, this is not possible in the infinite-dimensional case. (This can be seen directly using the example of the projective representation of the diffeomorphism group of the circle constructed below.)
The projective representation of the restricted symplectic group can be constructed directly on coherent states as in the finite-dimensional case.
In fact, choosing a real Hilbert subspace of which H is the complexification, for any operator T on H a complex conjugate of T is also defined. Then the infinite-dimensional analogue of SU(1,1) consists of invertible bounded operators
satisfying gKg* = K (or equivalently the same relations as in the finite-dimensional case). These belong to the restricted symplectic group if and only if B is a Hilbert–Schmidt operator. This group acts transitively on the infinite-dimensional analogue D∞ of the Siegel generalized unit disk, consisting of symmetric Hilbert–Schmidt operators W with operator norm less than 1, via the formula
Again the stabilizer subgroup of 0 consists of g with A unitary and B = 0. The metaplectic coherent states fW can be defined as before and their inner product is given by the same formula, using the Fredholm determinant:
Define unit vectors by
and set
where μ(ζ) = ζ/|ζ|. As before this defines a projective representation and, if g3 = g1g2, the cocycle is given by
This representation extends by analytic continuation to define contraction operators for the complex semigroup by the same analytic continuation argument as in the finite-dimensional case. It can also be shown that they are strict contractions.
Example. Let HR be the real Hilbert space consisting of real-valued functions on the circle with mean 0
and for which
The inner product is given by
An orthogonal basis is given by the functions sin(nθ) and cos(nθ) for n > 0. The Hilbert transform on the circle defined by
defines a complex structure on HR. J can also be written
where sign n = ±1 denotes the sign of n. The corresponding symplectic form is proportional to
In particular if φ is an orientation-preserving diffeomorphism of the circle and
then Tφ is implementable.
The operators W(f) with f smooth correspond to a subgroup of the loop group LT invariant under the diffeomorphism group of the circle. The infinitesimal operators corresponding to the vector fields
can be computed explicitly. They satisfy the Virasoro relations
In particular they cannot be adjusted by addition of scalar operators to remove the second term on the right hand side. This shows that the cocycle on the restricted symplectic group is not equivalent to one taking only the values ±1.
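The displayed relations were lost in extraction; their general form, with central charge c (for this representation built from LU(1) the value c = 1 would be expected, though that normalization is an assumption here), is:

```latex
[L_m, L_n] = (m - n)\,L_{m+n} + \frac{c}{12}\,(m^3 - m)\,\delta_{m+n,0}.
```

The cubic central term is exactly the obstruction referred to above: redefining L0 by a scalar can only shift the part of the central term linear in m, never the cubic part.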
See also
Metaplectic group
Invariant convex cone
Notes
References
Operator theory
Harmonic analysis
Representation theory
Quantum mechanics
Theta functions | Oscillator representation | [
"Physics",
"Mathematics"
] | 12,875 | [
"Representation theory",
"Fields of abstract algebra",
"Theoretical physics",
"Quantum mechanics"
] |
39,630,493 | https://en.wikipedia.org/wiki/Stellar%20isochrone | In stellar evolution, an isochrone is a curve on the Hertzsprung-Russell diagram, representing a population of stars of the same age but with different mass.
The Hertzsprung-Russell diagram plots a star's luminosity against its temperature, or equivalently, its color. Stars change their positions on the HR diagram throughout their life. Newborn stars of low or intermediate mass are born cold but extremely luminous. They contract and dim along the Hayashi track, decreasing in luminosity but staying at roughly the same temperature, until reaching the main sequence directly or by passing through the Henyey track. Stars evolve relatively slowly along the main sequence as they fuse hydrogen, and after the vast majority of their lifespan, all but the least massive stars become giants. They then evolve quickly towards their stellar endpoints: white dwarfs, neutron stars, or black holes.
Isochrones can be used to date open clusters because their members all have roughly the same age. One of the first uses of an isochrone method to date an open cluster was by Demarque and Larson in 1963. If the initial mass function of the open cluster is known, isochrones can be calculated at any age by taking every star in the initial population, using numerical simulations to evolve it forwards to the desired age, and plotting the star's luminosity and magnitude on the HR diagram. The resulting curve is an isochrone, which can be compared against the observational color-magnitude diagram to determine how well they match. If they match well, the assumed age of the isochrone is close to the actual age of the cluster.
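A toy sketch of the idea, using rough main-sequence scaling relations L ∝ M3.5 and tMS ≈ 10 Gyr × M−2.5 (order-of-magnitude textbook approximations, not the detailed stellar models that real isochrone fitting uses):

```python
import numpy as np

def toy_isochrone(age_gyr, masses=np.linspace(0.4, 8.0, 200)):
    """Crude isochrone: keep only stars still on the main sequence at the
    given age, using t_MS ~ 10 Gyr * M^-2.5 and L ~ M^3.5 (solar units)."""
    t_ms = 10.0 * masses ** -2.5
    alive = masses[t_ms > age_gyr]
    return alive, alive ** 3.5

for age in [0.5, 2.0, 10.0]:
    m, L = toy_isochrone(age)
    print(f"age {age:4.1f} Gyr: turnoff mass ~ {m.max():.2f} Msun, "
          f"turnoff luminosity ~ {L.max():.1f} Lsun")
```

The main-sequence turnoff moving to lower mass and luminosity with age is the feature matched against a cluster's observed color-magnitude diagram.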
See also
Stellar birthline
References
Stellar evolution | Stellar isochrone | [
"Physics"
] | 347 | [
"Astrophysics",
"Stellar evolution"
] |
39,630,505 | https://en.wikipedia.org/wiki/C25H38O3 | The molecular formula C25H38O3 (molar mass: 386.57 g/mol) may refer to:
AM-2389
Dexanabinol (HU-211)
HHCP-O-acetate
HU-210
Testosterone isocaproate
Testosterone caproate
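The quoted molar mass is easy to confirm from standard atomic weights (values below rounded to three decimals):

```python
weights = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol
formula = {"C": 25, "H": 38, "O": 3}

molar_mass = sum(weights[el] * n for el, n in formula.items())
print(round(molar_mass, 2))  # 386.58, within rounding of the quoted 386.57 g/mol
```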
Molecular formulas | C25H38O3 | [
"Physics",
"Chemistry"
] | 64 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
39,632,612 | https://en.wikipedia.org/wiki/Excimer%20lamp | An excimer lamp (or excilamp) is a source of ultraviolet light based on spontaneous emission of excimer (exciplex) molecules.
Introduction
Excimer lamps are quasimonochromatic light sources operating over a wide range of wavelengths in the ultraviolet (UV) and vacuum ultraviolet (VUV) spectral regions. Operation of an excimer lamp is based on the formation of excited dimers (excimers), which, spontaneously transiting from the excited state to the ground state, emit UV photons. The spectral maximum of excimer lamp radiation is determined by the working excimer molecule, as tabulated below (a numerical consistency check follows the table):
{|class="wikitable" style="text-align:center"
|+ Wavelength and photon energy of excimer lamp radiation
! Excimer molecule
! Wavelength (nm)
! Photon energy (eV)
|-
| NeF*
| 108
| 11.48
|-
| Ar2*
| 126
| 9.84
|-
| Kr2*
| 146
| 8.49
|-
| F2*
| 158
| 7.85
|-
| ArBr*
| 165
| 7.52
|-
| Xe2*
| 172
| 7.21
|-
| ArCl*
| 175
| 7.08
|-
| KrI*
| 190
| 6.49
|-
| ArF*
| 193
| 6.42
|-
| KrBr*
| 207
| 5.99
|-
| KrCl*
| 222
| 5.58
|-
| KrF*
| 248
| 5.01
|-
| XeI*
| 253
| 4.91
|-
| Cl2*
| 259
| 4.79
|-
| XeBr*
| 282
| 4.41
|-
| Br2*
| 289
| 4.29
|-
| XeCl*
| 308
| 4.03
|-
| I2*
| 342
| 3.63
|-
| XeF*
| 351
| 3.53
|}
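The wavelength-energy pairs in the table follow from E[eV] ≈ 1239.84 / λ[nm]; a quick consistency check over a few rows:

```python
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

rows = {"Ar2*": 126, "Xe2*": 172, "KrCl*": 222, "XeCl*": 308, "XeF*": 351}

for molecule, wavelength_nm in rows.items():
    energy_ev = HC_EV_NM / wavelength_nm
    print(f"{molecule}: {wavelength_nm} nm -> {energy_ev:.2f} eV")
# reproduces the tabulated 9.84, 7.21, 5.58, 4.03 and 3.53 eV
```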
Excimers are diatomic molecules (dimers) or polyatomic molecules that have stable excited electronic states and an unbound or weakly bound (thermally unstable) ground state. Initially, only homonuclear diatomic molecules with a stable excited state but a repulsive ground state were called excimers (excited dimers). The term "excimer" was later extended to refer to any polyatomic molecule with a repulsive or weakly bound ground state. One can also come across the term "exciplex" (from "excited complex"). It is also an excimer molecule but not a homonuclear dimer. For instance, Xe2*, Kr2*, Ar2* are excimer molecules, while XeCl*, KrCl*, XeBr*, ArCl*, Xe2Cl* are referred to as exciplex molecules. Dimers of rare gases and rare-gas–halogen dimers are the most widespread and studied excimers. Rare-gas–halide trimers, metal excimers, metal–rare-gas excimers, metal–halide excimers, and rare-gas–oxide excimers are also known, but they are rarely used.
An excimer molecule can exist in an excited electronic state for a limited time, as a rule from a few to a few tens of nanoseconds. After that, an excimer molecule transits to the ground electronic state, releasing the energy of internal electronic excitation in the form of a photon. Owing to the specific electronic structure of an excimer molecule, the energy gap between the lowest bound excited electronic state and the ground state ranges from 3.5 to 10 eV, depending on the kind of excimer molecule, and provides light emission in the UV and VUV spectral region. A typical spectral characteristic of excimer lamp radiation consists mainly of one intense narrow emission band. About 70–80% of the whole radiation power of an excimer lamp is concentrated in this emission band. The full width at half maximum of the emission band depends on the kind of excimer molecule and the excitation conditions and ranges from 2 to 15 nm. In fact, excimer lamps are sources of quasimonochromatic light. Therefore, such sources are suitable for spectral-selective irradiation and can even replace lasers in some cases.
UV production
Radiation is produced owing to the spontaneous transition of an excimer molecule from an excited electronic state to the ground state. Excimer and exciplex molecules are not long-lived species. They rapidly decompose, typically within a few nanoseconds, releasing their excitation energy in the form of a UV photon:
emission by an excimer molecule:
emission by an exciplex molecule:
where Rg2* is an excimer molecule, RgX* is an exciplex molecule, Rg is an atom of rare gas, and X is an atom of halogen.
Excimer molecule formation
It is convenient to generate excimer molecules in a plasma. Electrons play an important role in a plasma and, in particular, in the formation of excimer molecules. To efficiently generate excimer molecules, the working medium (plasma) should contain sufficient concentration of electrons with energies that are high enough to produce the precursors of the excimer molecules, which are mainly excited and ionized rare gas atoms. Introduction of power into a gaseous mixture results in the formation of excited and ionized rare gas atoms as follows:
Electron excitation
Rg + e− → Rg* + e−,
Direct electron ionization
Rg + e− → Rg+ + 2e−,
Stepwise ionization
Rg* + e− → Rg+ + 2e−,
where Rg* is a rare gas atom in an excited electronic state, Rg+ is a rare gas ion, and e− is an electron.
When there are enough excited rare gas atoms accumulated in a plasma, the excimer molecules are formed by the following reaction:
Rg* + Rg + M → Rg2* + M,
where Rg2* is an excimer molecule, and M is a third particle carrying away the excess energy to stabilize an excimer molecule. As a rule, it is a rare gas atom of the working medium.
Analyzing this three-body reaction, one can see that the efficiency of the production of excimer molecules is proportional to the concentration of excited rare gas atoms and the square of the concentration of rare gas atoms in the ground state. From this point of view, the concentration of rare gas in the working medium should be as high as possible. A higher concentration of rare gas is achieved by increasing gas pressure. However, an increase in the concentration of rare gas also intensifies the collisional quenching of excimer molecules, resulting in their radiationless decay:
Rg2* + Rg → Rg* + 2Rg.
The collisional quenching of excimer molecules is negligible as long as the mean time between collisions is much longer than the lifetime of an excimer molecule in an excited electronic state. In practice, the optimal pressure of a working medium is found experimentally, and it amounts to approximately one atmosphere.
A mechanism underlying the formation of exciplex molecules (rare gas halides) is a bit more complicated than the mechanism of excimer molecule formation. The formation of exciplex molecules occurs in two main ways. The first way is due to a reaction of ion-ion recombination, i.e., recombination of a positive rare gas ion and a negative halogen ion:
Rg+ + X− + M → RgX* + M,
where RgX* is an exciplex molecule, and M is a collisional third partner, which is usually an atom or molecule of a gaseous mixture or buffer gas. The third particle takes the excess energy and stabilizes an exciplex molecule.
The formation of a negative halogen ion results from the interaction of a low-energy electron with a halogen molecule in a so-called process of the dissociative electron attachment:
X2 + e− → X + X−,
where X is a halogen atom.
The pressure of a gaseous mixture is of great importance for efficient production of exciplex molecules via the reaction of ion-ion recombination. The process of ion-ion recombination depends on three-body collisions, and the probability of such a collision increases with pressure. At low pressures of a gaseous mixture (several tens of torr), the reaction of ion-ion recombination is inefficient, while it is quite productive at pressures above 100 Torr.
The second way of the formation of exciplex molecules is a harpoon reaction. In this case, a halogen molecule or halogen-containing compound captures a weakly bound electron of an excited rare gas atom, and an exciplex molecule in an excited electronic state is formed:
Rg* + X2 → RgX* + X.
Since the harpoon reaction is a process of a two-body collision, it can proceed productively at a pressure significantly lower than that required for a three-body reaction. Thus, the harpoon reaction makes possible the efficient operation of an excimer lamp at low pressures of a gaseous mixture. The collisional quenching of exciplex molecules at low pressures of a gaseous mixture is much lower than at pressures required for productive proceeding the reaction of ion-ion recombination. Due to this, a low-pressure excimer lamp ensures the maximum efficiency in converting the pumping energy to UV radiation.
It should be mentioned that both the harpoon reaction and reaction of ion-ion recombination proceed simultaneously. The dominance of the first or second reaction is mainly determined by the pressure of a gaseous mixture. The harpoon reaction predominates at low pressures (below 50 Torr), while the reaction of ion-ion recombination prevails at higher pressures (above 100 Torr).
The kinetics of reactions proceeding in a plasma is diverse and is not limited to the processes considered above. The efficiency of producing exciplex molecules depends on the composition of a gaseous mixture and conditions of its excitation. The type of halogen donor plays an important role. The most effective and widely used halogen-carriers are homonuclear diatomic halogen molecules. More complex halogen compounds such as hydrogen halides, metal halides, and interhalogens are also used as a halogen-carrier but to a lesser extent.
A noteworthy halogen-carrier is alkali halide. A feature of alkali halides is a similarity of their chemical bond with that of exciplex molecules in excited electronic states. Exciplex molecules in excited electronic states are characterized by the ionic bond as well as alkali halides in the ground state. It opens up alternative mechanisms for the formation of exciplex molecules, namely substitution reactions:
Rg* + AX → RgX* + A,
Rg+ + AX → RgX* + A+,
where AX is an alkali halide molecule, A is an alkali metal atom, and A+ is an alkali metal ion.
These mechanisms of the formation of exciplex molecules are fundamentally different from the reaction of ion-ion recombination and harpoon reaction. An exciplex molecule is formed simply by replacing an atom/ion of alkali metal from an alkali halide molecule by an excited atom/ion of rare gas.
An advantage of using alkali halides is that both the substitution reactions can simultaneously proceed at low pressures with comparable productivity. Moreover, both excited atoms and ions of rare gas are effectively used in the production of exciplex molecules in contrast to excimer lamps using other halogen-carriers. It is of importance because the ionization and excitation of rare gas consume most of the introduced energy. Since the reaction of ion-ion recombination and harpoon reaction dominate depending on the pressure of a gaseous mixture, the generation of rare gas ions is unprofitable at low pressures, while the excitation of rare gas is unreasonable at high pressures. A drawback of using alkali halides is high temperatures required for providing the necessary concentration of alkali halide molecules in a gaseous mixture. Despite this, the use of alkali halides as a halogen-carrier is especially promising in the development of exciplex lasers operating at low pressures.
Methods of excitation
One of the widely used ways to excite emission of excimer molecules is an electric discharge. There are a lot of discharge types used for pumping excimer lamps. Some examples are glow discharge, pulsed discharge, capacitive discharge, longitudinal and transverse discharges, volume discharge, spark discharge, and microhollow discharge.
Dielectric barrier discharge (DBD), a type of capacitive discharge, is the most common type used in commercial lamps. A benefit of DBD excimer lamps is that the electrodes are not in direct contact with the active medium (plasma). Absence of interaction between the electrodes and the discharge eliminates electrode corrosion as well as contamination of the active medium by sputtered electrode material, which considerably increases the lifetime of DBD excimer lamps in comparison with others. Moreover, a dielectric barrier discharge ensures effective excitation of a gas mixture in a wide range of working pressures, from a few torr to more than one atmosphere. Excimer lamps can be made in any desired shape of the radiating surface, satisfying the requirements of a specific task.
Benefits of excimer lamps
The main advantages of excimer lamps over other sources of UV and VUV radiation are as follows:
high average specific power of UV radiation (up to 1 Watt per cubic centimeter of active medium);
high energy of an emitted photon (from 3.5 to 11.5 eV);
quasimonochromatic radiation with the spectral full-width at half maximum from 2 to 15 nm;
high power spectral density of UV radiation;
choice of the wavelength of the spectral maximum of UV radiation for specific purposes (see table);
availability of multi-wave UV radiation owing to simultaneous excitation of several kinds of working excimer molecules;
absence of visible and IR radiation;
instant achievement of the operating mode;
low heating of radiating surface;
absence of mercury.
Applications
Light sources emitting in the UV spectral region are widely used in techniques involving photo-chemical processes, e.g., curing of inks, adhesives, varnishes and coatings, photolithography, UV induced growth of dielectrics, UV induced surface modification, and cleaning or material deposition. Incoherent sources of UV radiation have some advantages over laser sources because of their lower cost, a huge area of irradiation, and ease of use, especially when large-scale industrial processes are envisaged.
Mercury lamps (λ = 253.7 nm) are widely used UV sources, but their production, use, and the disposal of old lamps pose a threat to human health and the environment. Compared with commonly used mercury lamps, excimer lamps have a number of advantages. A specific feature of an excimer molecule is the absence of a strong bond in the ground electronic state. Thanks to this, high-intensity UV radiation can be extracted from a plasma without significant self-absorption. This makes it possible to convert the energy deposited in the active medium into UV radiation efficiently.
Excimer lamps are referred to as cold sources of UV radiation, since the radiating surface of an excimer lamp remains at relatively low temperature, in contrast with traditional UV lamps such as mercury lamps. Because the medium does not need to be heated, excimer lamps reach their peak output almost immediately after they are turned on.
Rare gas and rare gas-halide excimer lamps generally radiate in the ultraviolet (UV) and vacuum-ultraviolet (VUV) spectral regions (see table). Their unique narrow-band emission characteristics, high quantum efficiency, and high-energy photons make them suitable for applications such as absorption spectroscopy, UV curing, UV coating, disinfection, ozone generation, destruction of gaseous organic waste, photo-etching and photo-deposition, among many others.
Light sources emitting photons in the energy range of 3.5–10 eV find applications in many fields due to the ability of high-energy photons to cleave most chemical bonds and kill microbes destroying nucleic acids and disrupting their DNA. Examples of excimer lamp applications include purification and disinfection of drinking water, pool water, air, sewage purification, decontamination of industrial waste, photochemical synthesis and degradation of organic compounds in flue gases and water, photopolymerization of organic coatings and paints, and photo-enhanced chemical vapor deposition. In all cases UV photons excite species or cleave chemical bonds, resulting in the formation of radicals or other chemical reagents, which initiate a required reaction.
An excimer lamp has selective action. UV radiation of a given wavelength can selectively excite species or generate required radicals. Such lamps can be useful for photophysical and photochemical processing such as UV curing of paints, varnishes, and adhesives, cleansing and modifying surface properties, polymerization of lacquers and paints, and photo-degradation of a variety of pollutants. Photo-etching of polymers is possible using different wavelengths: 172 nm by xenon excimer, 222 nm by krypton chloride, and 308 nm by xenon chloride. Excimer UV sources can be used for microstructuring large-area polymer surfaces. XeCl-excimer lamps (308 nm) are especially suitable for tanning.
Fluorescence spectroscopy is one of the most common methods for detecting biomolecules. Biomolecules can be labeled with a fluoroprobe, which is then excited by a short pulse of UV light, leading to re-emission in the visible spectral region. By detecting this re-emitted light, one can judge the density of labeled molecules. Lanthanide complexes are commonly used as fluoroprobes. Due to their long lifetime, they play an important role in Förster resonance energy transfer (FRET) analysis.
At present, excimer lamps are coming into use in ecology, photochemistry, photobiology, medicine, criminalistics, petrochemistry, physics, microelectronics, different engineering tasks, wide-ranging technologies, science, various branches of industry including the food industry, and many others.
Environmental contamination
Mercury lamps are the most common source of UV radiation due to their high efficiency. However, the use of mercury in these lamps poses disposal and environmental problems. In contrast, excimer lamps based on rare gases are non-hazardous, and excimer lamps containing halogens are more environmentally benign than mercury ones.
References
External links
"UV and VUV excilamps"
Types of lamp
Ultraviolet radiation | Excimer lamp | [
"Physics",
"Chemistry"
] | 3,977 | [
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Ultraviolet radiation"
] |
42,400,646 | https://en.wikipedia.org/wiki/Cell%20biophysics | Cell biophysics (or cellular biophysics) is a sub-field of biophysics that focuses on physical principles underlying cell function. Sub-areas of current interest include statistical models of intracellular signaling dynamics, intracellular transport, cell mechanics (including membrane and cytoskeletal mechanics), molecular motors, biological electricity and genetic network theory. The field has benefited greatly from recent advances in live-cell molecular imaging techniques that allow spatial and temporal measurement of macromolecules and macromolecular function. Specialized imaging methods like FRET, FRAP, photoactivation and single molecule imaging have proven useful for mapping macromolecular transport, dynamic conformational changes in proteins and macromolecular interactions. Super-resolution microscopy allows imaging of cell structures below the optical resolution of light. Combining novel experimental tools with mathematical models grounded in the physical sciences has enabled significant recent breakthroughs in the field. Multiple centers across the world are advancing the research area
References
Biophysics
Cell biology | Cell biophysics | [
"Physics",
"Biology"
] | 200 | [
"Cell biology",
"Applied and interdisciplinary physics",
"Biophysics"
] |
42,407,465 | https://en.wikipedia.org/wiki/Sakaguchi%20test | The Sakaguchi test is a chemical test used to detect presence of arginine in proteins. It is named after the Japanese food scientist and organic chemist, Shoyo Sakaguchi (1900–1995) who described the test in 1925. The Sakaguchi reagent used in the test consists of 1-Naphthol and a drop of sodium hypobromite. The guanidino (–C group in arginine reacts with the Sakaguchi reagent to form a red-coloured complex.
References
Protein methods
Chemical tests | Sakaguchi test | [
"Chemistry",
"Biology"
] | 115 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry",
"Chemical tests",
"Analytical chemistry stubs"
] |
42,408,205 | https://en.wikipedia.org/wiki/Hopkins%E2%80%93Cole%20reaction | The Hopkins-Cole reaction, also known as the glyoxylic acid reaction, is a chemical test used for detecting the presence of tryptophan in proteins. A protein solution is mixed with Hopkins Cole reagent, which consists of glyoxylic acid. Concentrated sulfuric acid is slowly added to form two layers. A purple ring appears between the two layers if the test is positive for tryptophan. Nitrites, chlorates, nitrates and excess chlorides prevent the reaction from occurring.
The reaction was first reported by Frederick Gowland Hopkins and Sydney W. Cole in 1901, as part of their work on the first isolation of tryptophan itself.
References
Protein methods
Chemical tests | Hopkins–Cole reaction | [
"Chemistry",
"Biology"
] | 150 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry",
"Chemical tests",
"Analytical chemistry stubs"
] |
42,410,104 | https://en.wikipedia.org/wiki/Acree%E2%80%93Rosenheim%20reaction | The Acree–Rosenheim reaction is a chemical test used for detecting the presence of tryptophan in proteins. A protein mixture is mixed with formaldehyde. Concentrated sulfuric acid is added to form two layers. A purple ring appears between the two layers if the test is positive for tryptophan.
The test is named after Solomon Farley Acree (1875–1957), an American biochemist at Johns Hopkins University, and Sigmund Otto Rosenheim (1871–1955), an Anglo-German medical chemist at the University of Manchester.
History
The method was originally investigated by Otto Rosenheim while examining and improving on a method of the German-British chemist Otto Hehner (1853–1924) used to detect formaldehyde in milk by adding sulphuric acid, which produced a blue ring. Acree noticed the reaction's similarity to the Adamkiewicz reaction and that the reaction was noticeably stronger in casein. Based on this 1906 research, Acree investigated the reaction in 1907 in the general context of formaldehyde reactions with all forms of protein and was the first to find the importance of the tryptophan group in the reaction, which can be extracted from casein in milk.
Reaction
The reaction of tryptophan with formaldehyde.
References
Protein methods
Chemical tests | Acree–Rosenheim reaction | [
"Chemistry",
"Biology"
] | 274 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry",
"Chemical tests",
"Analytical chemistry stubs"
] |
42,410,403 | https://en.wikipedia.org/wiki/Pauly%20reaction | The Pauly reaction is a chemical test used for detecting the presence of tyrosine or histidine in proteins. It is named after German chemist Hermann Pauly, who first described the reaction. When proteins containing either tyrosine or histidine are reacted with diazotized sulfanilic acid under alkaline conditions, a red color is formed by a coupling reaction.
References
Protein methods
Biochemistry detection reactions | Pauly reaction | [
"Chemistry",
"Biology"
] | 86 | [
"Biochemistry methods",
"Protein methods",
"Biochemistry detection reactions",
"Protein biochemistry",
"Biochemical reactions",
"Microbiology techniques",
"Analytical chemistry stubs"
] |
42,411,343 | https://en.wikipedia.org/wiki/Y%C3%BCksel%20Tohumculuk | Yuksel Seed Agriculture Industry and Trade Inc. Yuksel Seed was founded in 1990 by Mehmet Yuksel, an Agricultural Engineer (M.SC). It is a family firm based in Antalya, in which his siblings are also partners. It has foreign investments and companies in important vegetable producing countries such as the Netherlands, Spain, Poland, Chile, Mexico, Brazil, Morocco, China and Pakistan. Yuksel Seed, which exports seeds to more than 78 countries, has a very strong R&D and production infrastructure in vegetable and potato seeds. Yuksel Seed, a globally recognized brand in seed growing, is Turkey's global seed brand. Yüksel Tohum, which has twelve research, trial and production stations, six of which are abroad and six in Turkey, has 3800 decares of land and 1800 decares of modern greenhouses. When evaluated in many respects, Yüksel Tohum has become Turkey's largest seed brand.
References
External links
Yuksel Seeds Official Webpage
Agriculture companies of Turkey
Plant breeding
Companies based in Antalya
Turkish brands
Companies established in 1996
Biotechnology companies of Turkey | Yüksel Tohumculuk | [
"Chemistry"
] | 233 | [
"Plant breeding",
"Molecular biology"
] |
52,534,044 | https://en.wikipedia.org/wiki/15%CE%B2-Hydroxycyproterone%20acetate | 15β-Hydroxycyproterone acetate (15β-OH-CPA) is a steroidal antiandrogen and the major metabolite of cyproterone acetate (CPA). It is formed from CPA in the liver by hydroxylation via the cytochrome P450 enzyme CYP3A4. During therapy with CPA, 15β-OH-CPA circulates at concentrations that are approximately twice those of CPA. 15β-OH-CPA has similar or even greater antiandrogen activity compared to CPA. However, it has only about one-tenth of the activity of CPA as a progestogen. 15β-OH-CPA also shows some glucocorticoid activity, similarly to CPA and unesterified cyproterone.
See also
List of steroidal antiandrogens
References
Acetate esters
Antiandrogen esters
Cyclopentanols
Organochlorides
Conjugated dienes
Cyclopropanes
Cyproterone acetate
Enones
Glucocorticoids
Human drug metabolites
Pregnanes
Progestogens
Steroid esters
Steroidal antiandrogens | 15β-Hydroxycyproterone acetate | [
"Chemistry"
] | 255 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
52,540,404 | https://en.wikipedia.org/wiki/Cycloheptatrienemolybdenum%20tricarbonyl | Cycloheptatrienemolybdenum tricarbonyl is the organomolybdenum compound with the formula (C7H8)Mo(CO)3. It is a red-orange solid that is soluble in nonpolar organic solvents. The compound has no practical value but is a prototypical complex of cycloheptatriene.
Synthesis, structure, and reactions
The compound is prepared by thermal reaction of the triene with molybdenum hexacarbonyl:
C7H8 + Mo(CO)6 → (C7H8)Mo(CO)3 + 3 CO
The compound is a piano stool complex, consisting of Mo(CO)3 bound to six carbon centers of the triene. The methylene group projects from the plane of the six coordinated carbon atoms.
The compound reacts with trityl salts to give the cycloheptatrienyl complex:
(C7H8)Mo(CO)3 + (C6H5)3C+ → [(C7H7)Mo(CO)3]+ + (C6H5)3CH
References
Organomolybdenum compounds
Carbonyl complexes
Cycloheptatrienyl complexes
Half sandwich compounds
Molybdenum(0) compounds | Cycloheptatrienemolybdenum tricarbonyl | [
"Chemistry"
] | 269 | [
"Organometallic chemistry",
"Half sandwich compounds"
] |
55,440,192 | https://en.wikipedia.org/wiki/Jacques%20Dubochet | Jacques Dubochet (born 8 June 1942) is a retired Swiss biophysicist. He is a former researcher at the European Molecular Biology Laboratory in Heidelberg, Germany, and an honorary professor of biophysics at the University of Lausanne in Switzerland.
In 2017, he received the Nobel Prize in Chemistry together with Joachim Frank and Richard Henderson "for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution". He received the Royal Photographic Society Progress Medal, alongside his colleagues Professor Joachim Frank and Dr Richard Henderson, in 2018 for 'an important advance in the scientific or technological development of photography or imaging in the widest sense'.
Career
Dubochet started to study physics at the École polytechnique de l'Université de Lausanne (now École polytechnique fédérale de Lausanne) in 1962 and obtained his degree in physical engineering in 1967. He obtained a Certificate of Molecular Biology at University of Geneva in 1969 and then began to study electron microscopy of DNA. In 1973, he completed his thesis in biophysics at University of Geneva and University of Basel.
From 1978 to 1987, Dubochet was group leader at the European Molecular Biology Laboratory in Heidelberg, then part of West Germany. From 1987 to 2007, he was professor at the University of Lausanne. In 2007, at 65 years old, he retired and became an honorary professor at the University of Lausanne.
During his career, Dubochet developed technologies in cryo-electron microscopy, cryo-electron tomography and cryo-electron microscopy of vitreous sections. These technologies are used to image individual biological structures such as protein complexes or virus particles. At Lausanne he took part in initiatives to make scientists more aware of social issues.
In 2014, Dubochet received EMBL's Lennart Philipson Award. Describing his career in 2015, Professor Gareth Griffiths, his colleague at EMBL explained: "Jacques had a vision. He found a way of freezing thin films of water so fast that crystals had no time to form [that could damage samples] [...] over time the technique has become increasingly important to life science research, and it is clear today it is Nobel Prize-worthy."
When asked by his university how he would like his Nobel Prize to be recognised by the institution he asked for a parking space for his bicycle which was duly given. He had cycled to his lab almost every day for 30 years.
At the end of November 2021, the Dubochet Center for Imaging (DCI), which bears his name, was launched by the Swiss Federal Institute of Technology in Lausanne, the University of Lausanne and the University of Geneva. Just a few weeks later, the DCI was able to make a significant contribution to deciphering the Omicron variant of SARS-CoV-2.
Personal life
Dubochet is married with two children. He has dyslexia.
In the 1970s, on his second meeting with his future wife, they went to protest against the Kaiseraugst nuclear power plant construction project.
Dubochet is a member of the Social Democratic Party of Switzerland, and a member of the municipal parliament of Morges, where he holds a seat on the supervisory committee. He is also part of the climate movement as a member of the Grandparents for Future and emphasized the urgency of saving our societies.
Bibliography
Jacques Dubochet, Parcours, Éditions Rosso, 2018, 216 pages ().
Notes and references
External links
Official page
Blog
including the Nobel Lecture on 8 December 2017 Early cryo-electron microscopy
1942 births
20th-century Swiss biologists
20th-century Swiss physicists
21st-century Swiss biologists
21st-century Swiss physicists
École Polytechnique Fédérale de Lausanne alumni
Living people
Nobel laureates in Chemistry
Scientists with dyslexia
People from Aigle
Swiss biophysicists
Swiss Nobel laureates
University of Basel alumni
University of Geneva alumni
Academic staff of the University of Lausanne
Social Democratic Party of Switzerland politicians
Crystallographers | Jacques Dubochet | [
"Chemistry",
"Materials_science"
] | 811 | [
"Crystallographers",
"Crystallography"
] |
55,443,671 | https://en.wikipedia.org/wiki/Monochromatization | Monochromatization in the context of accelerator physics is a theoretical principle used to increase center-of-mass energy resolution in high-luminosity particle collisions. The decrease of the collision energy spread can be accomplished without reducing the inherent energy spread of either of the two colliding beams, introducing opposite correlations between spatial position and energy at the interaction point (IP). In beam-optical terms, this can be accomplished through a non-zero dispersion function for both beams of opposite sign at the IP. The dispersion is determined by the respective lattice.
History
Monochromatization is a technique which has long been proposed for reducing the centre-of-mass energy spread at e−e+ colliders, but it has never been used in any operational collider. The technique was first proposed in 1975 by A. Renieri to improve the energy resolution of the Italian collider ADONE.
Implementation of a monochromatization scheme has been explored for several past colliders such as
ADONE (National Institute of Nuclear Physics)
SPEAR (SLAC National Accelerator Laboratory)
LEP (CERN)
but such a scheme has so far never been applied or tested in any operating collider. Nevertheless, studies for the FCC-ee are under development.
References
Mass spectrometry
Experimental physics | Monochromatization | [
"Physics",
"Chemistry"
] | 276 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Experimental physics",
"Matter"
] |
55,449,897 | https://en.wikipedia.org/wiki/Quantum%20Field%20Theory%20in%20a%20Nutshell | Quantum Field Theory in a Nutshell is a textbook by Anthony Zee covering quantum field theory. The book has been adopted by many universities, including Harvard University, Princeton University, the University of California, Berkeley, the California Institute of Technology, Columbia University, Stanford University, and Brown University, among others.
Response
Stephen Barr said about the book, "Like the famous Feynman Lectures on Physics, this book has the flavor of a good blackboard lecture". Michael Peskin's review in Classical and Quantum Gravity said, "This is quantum field theory taught at the knee of an eccentric uncle; one who loves the grandeur of his subject, has a keen eye for a slick argument, and is eager to share his repertoire of anecdotes about Feynman, Fermi, and all of his heroes [...] This [book] can help [students] love the subject and race to its frontier". David Tong called it a "charming book, where emphasis is placed on physical understanding and the author isn’t afraid to hide the ugly truth when necessary. It contains many gems". Zvi Bern wrote, "Zee has an infectious enthusiasm and a remarkable talent for slicing through technical mumbo jumbo to arrive at the heart of a problem".
References
Physics textbooks
Quantum field theory | Quantum Field Theory in a Nutshell | [
"Physics"
] | 266 | [
"Quantum field theory",
"Quantum mechanics",
"Works about quantum mechanics"
] |
55,450,123 | https://en.wikipedia.org/wiki/Einstein%20Gravity%20in%20a%20Nutshell | Einstein Gravity in a Nutshell is a textbook by Anthony Zee.
Response
Michael Berg said in a review in the Mathematical Association of America, "I must admit that, as its nutshell predecessor, Einstein Gravity in a Nutshell is very appealing to me, and I am certainly won over by Zee’s chatty but on-the-money style". Luboš Motl said about the book on his blog The Reference Frame: "Anthony is more playful and less formal but there are aspects in which he gets further than any other introductory textbook of GR. The book is full of notes, a long index, and simply clever exercises. The illustrations are pretty and professional [...] I recommend you once again to try the book". Pedro G. Ferreira, professor at the University of Oxford called it "a remarkably complete and thorough textbook on general relativity, written in a refreshing and engaging style. Zee leads us through all the major intellectual steps that make what is surely one of the most profound and beautiful theories of all time. The book is enjoyable and informative in equal measure. Quite an achievement."
References
See also
Carroll, Sean M. Spacetime and Geometry: An Introduction to General Relativity. Addison Wesley, 2004.
Wheeler, John; Misner, Charles W; Thorne, Kip. Gravitation. W.H. Freeman and Company, 1973.
Wald, Robert M. General Relativity. University of Chicago Press, 1984.
Physics textbooks
General relativity | Einstein Gravity in a Nutshell | [
"Physics"
] | 302 | [
"General relativity",
"Theory of relativity"
] |
32,590,824 | https://en.wikipedia.org/wiki/Continuous%20geometry | In mathematics, continuous geometry is an analogue of complex projective geometry introduced by John von Neumann, where instead of the dimension of a subspace being in a discrete set 0, 1, ..., n, it can be an element of the unit interval [0, 1]. Von Neumann was motivated by his discovery of von Neumann algebras with a dimension function taking a continuous range of dimensions, and the first example of a continuous geometry other than projective space was the projections of the hyperfinite type II₁ factor.
Definition
Menger and Birkhoff gave axioms for projective geometry in terms of the lattice of linear subspaces of projective space. Von Neumann's axioms for continuous geometry are a weakened form of these axioms.
A continuous geometry is a lattice L with the following properties
L is modular.
L is complete.
The lattice operations ∧, ∨ satisfy a certain continuity property:
(⋁α∈A aα) ∧ b = ⋁α∈A (aα ∧ b), where A is a directed set and if α < β then aα ≤ aβ, and the same condition with ∧ and ∨ reversed.
Every element in L has a complement (not necessarily unique). A complement of an element a is an element b with a ∧ b = 0 and a ∨ b = 1, where 0 and 1 are the minimal and maximal elements of L.
L is irreducible: this means that the only elements with unique complements are 0 and 1.
Examples
Finite-dimensional complex projective space, or rather its set of linear subspaces, is a continuous geometry, with dimensions taking values in the discrete set 0, 1/n, 2/n, ..., 1.
The projections of a finite type II von Neumann algebra form a continuous geometry with dimensions taking values in the unit interval [0, 1].
Kaplansky showed that any orthocomplemented complete modular lattice is a continuous geometry.
If V is a vector space over a field (or division ring) F, then there is a natural map from the lattice PG(V) of subspaces of V to the lattice of subspaces of V ⊕ V that multiplies dimensions by 2. So we can take a direct limit of PG(F) ⊂ PG(F²) ⊂ PG(F⁴) ⊂ ⋯.
This has a dimension function taking as values all dyadic rationals between 0 and 1. Its completion is a continuous geometry containing elements of every dimension in [0, 1]. This geometry was constructed by von Neumann, and is called the continuous geometry over F.
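As a sketch of where the dyadic dimension values come from (standard notation, introduced here for illustration): identify a subspace W ⊆ F^(2^k) at stage k of the limit with W ⊕ W at stage k + 1. In LaTeX notation,

D\left(W \subseteq F^{2^k}\right) = \frac{\dim W}{2^k},
\qquad
W \mapsto W \oplus W :\; \frac{\dim W}{2^k} \mapsto \frac{2\dim W}{2^{k+1}} = \frac{\dim W}{2^k},

so the dimension function is compatible with the inclusions and is well defined on the direct limit, and every dyadic rational m/2^k in [0, 1] occurs at a finite stage; completion then fills in the remaining dimensions.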
Dimension
This section summarizes some of the results of von Neumann's Continuous geometry. These results are similar to, and were motivated by, von Neumann's work on projections in von Neumann algebras.
Two elements a and b of L are called perspective, written a ∼ b, if they have a common complement. This is an equivalence relation on L; the proof that it is transitive is quite hard.
The equivalence classes A, B, ... of L have a total order on them defined by A ≤ B if there is some a in A and b in B with a ≤ b. (This need not hold for all a in A and b in B.)
The dimension function D from L to the unit interval is defined as follows.
If equivalence classes A and B contain elements a and b with a ∧ b = 0 then their sum A + B is defined to be the equivalence class of a ∨ b. Otherwise the sum is not defined. For a positive integer n, the product nA is defined to be the sum of n copies of A, if this sum is defined.
For equivalence classes A and B with A not {0} the integer [B : A] is defined to be the unique integer n ≥ 0 such that B = nA + C with C < A.
For equivalence classes A and B with A not {0} the real number (B : A) is defined to be the limit of [B : C] / [A : C] as C runs through a minimal sequence: this means that either C contains a minimal nonzero element, or an infinite sequence of nonzero elements each of which is at most half the preceding one.
D(a) is defined to be ({a} : {1}), where {a} and {1} are the equivalence classes containing a and 1.
The image of D can be the whole unit interval, or the set of numbers 0, 1/n, 2/n, ..., 1 for some positive integer n. Two elements of L have the same image under D if and only if they are perspective, so it gives an injection from the equivalence classes to a subset of the unit interval. The dimension function D has the properties:
If a ≤ b then D(a) ≤ D(b)
D(a ∨ b) + D(a ∧ b) = D(a) + D(b)
D(a) = 0 if and only if a = 0, and D(a) = 1 if and only if a = 1
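For a concrete check of these properties, consider the motivating finite-dimensional example (a standard fact, stated here for illustration rather than taken from the text): in the lattice of linear subspaces of F^n, two subspaces are perspective exactly when they have the same linear dimension, and

D(a) = \frac{\dim a}{n} \in \left\{0, \tfrac{1}{n}, \ldots, 1\right\},

so the identity D(a ∨ b) + D(a ∧ b) = D(a) + D(b) reduces to the familiar formula \dim(U + V) + \dim(U \cap V) = \dim U + \dim V.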
Coordinatization theorem
In projective geometry, the Veblen–Young theorem states that a projective geometry of dimension at least 3 is isomorphic to the projective geometry of a vector space over a division ring. This can be restated as saying that the subspaces in the projective geometry correspond to the principal right ideals of a matrix algebra over a division ring.
Von Neumann generalized this to continuous geometries, and more generally to complemented modular lattices, as follows. His theorem states that if a complemented modular lattice L has order at least 4, then the elements of L correspond to the principal right ideals of a von Neumann regular ring. More precisely if the lattice has order n then the von Neumann regular ring can be taken to be an n by n matrix ring Mn(R) over another von Neumann regular ring R. Here a complemented modular lattice has order n if it has a homogeneous basis of n elements, where a basis is n elements a1, ..., an such that ai ∧ aj = 0 if i ≠ j and a1 ∨ ⋯ ∨ an = 1, and a basis is called homogeneous if any two elements are perspective. The order of a lattice need not be unique; for example, any lattice has order 1. The condition that the lattice has order at least 4 corresponds to the condition that the dimension is at least 3 in the Veblen–Young theorem, as a projective space has dimension at least 3 if and only if it has a set of at least 4 independent points.
Conversely, the principal right ideals of a von Neumann regular ring form a complemented modular lattice.
Suppose that R is a von Neumann regular ring and L its lattice of principal right ideals, so that L is a complemented modular lattice. Von Neumann showed that L is a continuous geometry if and only if R is an irreducible complete rank ring.
References
Projective geometry
Von Neumann algebras
Lattice theory | Continuous geometry | [
"Mathematics"
] | 1,202 | [
"Fields of abstract algebra",
"Order theory",
"Lattice theory"
] |
32,593,444 | https://en.wikipedia.org/wiki/Tikhonov%27s%20theorem%20%28dynamical%20systems%29 | In applied mathematics, Tikhonov's theorem on dynamical systems is a result on stability of solutions of systems of differential equations. It has applications to chemical kinetics. The theorem is named after Andrey Nikolayevich Tikhonov.
Statement
Consider this system of differential equations:

dx/dt = f(x, z, t),
μ dz/dt = g(x, z, t).

Taking the limit as μ → 0, this becomes the "degenerate system":

dx/dt = f(x, z, t),
z = φ(x, t),

where the second equation is the solution of the algebraic equation

g(x, z, t) = 0.

Note that there may be more than one such function φ.

Tikhonov's theorem states that as μ → 0 the solution of the system of two differential equations above approaches the solution of the degenerate system if z = φ(x, t) is a stable root of the "adjoined system"

dz/dt = g(x, z, t),

in which x and t are regarded as fixed parameters.
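A minimal numerical illustration of the theorem, with a toy choice of f and g and all parameter values invented for demonstration (Python with SciPy; the Radau solver is used because the full system is stiff for small μ):

from scipy.integrate import solve_ivp

mu = 1e-3  # small parameter multiplying dz/dt

def full(t, y):
    # dx/dt = f(x, z, t) = -x + z ;  mu * dz/dt = g(x, z, t) = x**2 - z
    x, z = y
    return [-x + z, (x**2 - z) / mu]

def degenerate(t, y):
    # z replaced by the root phi(x) = x**2 of g = 0; the root is stable
    # because dg/dz = -1 < 0 in the adjoined system
    x = y[0]
    return [-x + x**2]

sol_full = solve_ivp(full, (0.0, 5.0), [0.5, 0.0], method="Radau", rtol=1e-8)
sol_deg = solve_ivp(degenerate, (0.0, 5.0), [0.5], rtol=1e-8)

# After a short initial transient in z, the slow variable x of the full
# system tracks the degenerate solution to within O(mu).
print(sol_full.y[0, -1], sol_deg.y[0, -1])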
References
Differential equations
Perturbation theory
Theorems in dynamical systems | Tikhonov's theorem (dynamical systems) | [
"Physics",
"Mathematics"
] | 151 | [
"Theorems in dynamical systems",
"Mathematical theorems",
"Mathematical objects",
"Quantum mechanics",
"Equations",
"Differential equations",
"Mathematical problems",
"Perturbation theory",
"Dynamical systems"
] |
32,594,594 | https://en.wikipedia.org/wiki/Superconducting%20nanowire%20single-photon%20detector | The superconducting nanowire single-photon detector (SNSPD or SSPD) is a type of optical and near-infrared single-photon detector based on a current-biased superconducting nanowire. It was first developed by scientists at Moscow State Pedagogical University and at the University of Rochester in 2001. The first fully operational prototype was demonstrated in 2005 by the National Institute of Standards and Technology (Boulder), and BBN Technologies as part of the DARPA Quantum Network.
As of 2023, a superconducting nanowire single-photon detector is the fastest single-photon detector (SPD) for photon counting.
It is a key enabling technology for quantum optics and optical quantum technologies. SNSPDs are available with very high detection efficiency, very low dark count rate and very low timing jitter, compared to other types of single-photon detectors. SNSPDs are covered by International Electrotechnical Commission (IEC) international standards. As of 2023, commercial SNSPD devices are available in multichannel systems in a price range of 100,000 euros.
It was recently discovered that superconducting wires as wide as 1.5 μm can detect single infra-red photons. This is important because optical lithography rather than electron lithography can be used in their construction. This reduces the cost for applications that require large photodetector areas. One application is in dark matter detection experiments, where the target is a scintillating GaAs crystal. GaAs suitably doped with silicon and boron is a luminous cryogenic scintillator that has no apparent afterglow and is available commercially in the form of large, high-quality crystals.
Principle of operation
The SNSPD consists of a thin (≈ 5 nm) and narrow (≈ 100 nm) superconducting nanowire. The length is typically hundreds of micrometers, and the nanowire is patterned in a compact meander geometry to create a square or circular pixel with high detection efficiency. The nanowire is cooled well below its superconducting critical temperature and biased with a DC current that is close to but less than the superconducting critical current of the nanowire. A photon incident on the nanowire breaks Cooper pairs and reduces the local critical current below that of the bias current. This results in the formation of a localized non-superconducting region, or hotspot, with finite electrical resistance. This resistance is typically larger than the 50 ohm input impedance of the readout amplifier, and hence most of the bias current is shunted to the amplifier. This produces a measurable voltage pulse that is approximately equal to the bias current multiplied by 50 ohms. With most of the bias current flowing through the amplifier, the non-superconducting region cools and returns to the superconducting state. The time for the current to return to the nanowire is typically set by the inductive time constant of the nanowire, equal to the kinetic inductance of the nanowire divided by the impedance of the readout circuit. Proper self-resetting of the device requires that this inductive time constant be slower than the intrinsic cooling time of the nanowire hotspot.
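The two readout relations just described (pulse height roughly the bias current times the 50 ohm load, and recovery governed by the kinetic inductance divided by the readout impedance) can be put into numbers with a short sketch; the parameter values below are illustrative assumptions, not data for any particular device:

# back-of-envelope SNSPD readout numbers (Python)
L_k = 400e-9    # nanowire kinetic inductance in henries (assumed)
R_load = 50.0   # readout amplifier input impedance in ohms
I_bias = 15e-6  # bias current in amperes (assumed)

V_pulse = I_bias * R_load  # pulse height ~ bias current x 50 ohms
tau = L_k / R_load         # inductive time constant for current return

print(f"pulse height ~ {V_pulse * 1e3:.2f} mV")        # ~0.75 mV
print(f"recovery time constant ~ {tau * 1e9:.1f} ns")  # ~8.0 ns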
While the SNSPD does not match the intrinsic energy or photon-number resolution of the superconducting transition edge sensor, the SNSPD is significantly faster than conventional transition edge sensors and operates at higher temperatures. A degree of photon-number resolution can be achieved in SNSPD arrays, through time-binning or advanced readout schemes. Most SNSPDs are made of sputtered niobium nitride (NbN), which offers a relatively high superconducting critical temperature (≈ 10 K) which enables SNSPD operation in the temperature range 1 K to 4 K (compatible with liquid helium or modern closed-cycle cryocoolers). The intrinsic thermal time constants of NbN are short, giving very fast cooling time after photon absorption (<100 picoseconds).
The absorption in the superconducting nanowire can be boosted by a variety of strategies: integration with an optical cavity, integration with a photonic waveguide or addition of nanoantenna structures. SNSPD cavity devices in NbN, NbTiN, WSi & MoSi have demonstrated fibre-coupled device detection efficiencies greater than 98% at 1550 nm wavelength with count rates in the tens of MHz.
The detection efficiencies are optimized for a specific wavelength range in each detector. They vary widely, however, due to highly localized regions of the nanowires where the effective cross-sectional area for superconducting current is reduced.
SNSPD devices have also demonstrated exceptionally low jitter – the uncertainty in the photon arrival time – as low as 3 picoseconds at visible wavelengths. Timing jitter increases as photon energy drops and has been verified out to 3.5 micrometres wavelength. Timing jitter is an extremely important property for time-correlated single-photon counting (TCSPC) applications. Furthermore, SNSPDs have extremely low rates of dark counts, i.e. the occurrence of voltage pulses in the absence of a detected photon. In addition, the deadtime (time interval following a detection event during which the detector is not sensitive) is on the order of a few nanoseconds, this short deadtime translates into very high saturation count rates and enables antibunching measurements with a single detector.
For the detection of longer wavelength photons, however, the detection efficiency of standard SNSPDs decreases significantly. Recent efforts to improve the detection efficiency at near-infrared and mid-infrared wavelengths include studies of narrower (20 nm and 30 nm wide) NbN nanowires as well as extensive studies of alternative superconducting materials with lower superconducting critical temperatures than NbN (tungsten silicide, niobium silicide, molybdenum silicide and tantalum nitride). Single photon sensitivity up to 10 micrometer wavelength has recently been demonstrated in a tungsten silicide SNSPD. Alternative thin film deposition techniques such as atomic layer deposition are of interest for extending the spectral range and scalability of SNSPDs to large areas. High temperature superconductors have been investigated for SNSPDs with some encouraging recent reports. SNSPDs have been created from magnesium diboride with some single photon sensitivity in the visible and near infrared.
There is considerable interest and effort in scaling up SNSPDs to large multipixel arrays and cameras. A kilopixel SNSPD array has recently been reported. A key challenge is readout, which can be addressed via multiplexing or digital readout using superconducting single flux quantum logic.
Applications
Many of the initial application demonstrations of SNSPDs have been in the area of quantum information, such as quantum key distribution and optical quantum computing. Other current and emerging applications include imaging of infrared photoemission for defect analysis in CMOS circuitry, single photon emitter characterization, LIDAR, on-chip quantum optics, optical neuromorphic computing, fibre optic temperature sensing, optical time domain reflectometry, readout for ion trap qubits, quantum plasmonics, single electron detection, single α and β particle detection, singlet oxygen luminescence detection, deep space optical communication, dark matter searches and exoplanet detection. A number of companies worldwide are successfully commercializing complete single-photon detection systems based on superconducting nanowires, including Single Quantum, Photon Spot, Scontel, Quantum Opus, ID Quantique, PhoTec and Pixel Photonics. Wider adoption of SNSPD technology is closely linked to advances in cryocoolers for 4 K and below, and SNSPDs have recently been demonstrated in miniaturized systems.
References
Particle detectors
Photodetectors
Radiometry
Sensors
Superconducting detectors
Quantum optics
Superconductivity
Optoelectronics
Photonics
Optical metrology
Engineering
Single-photon detectors | Superconducting nanowire single-photon detector | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 1,676 | [
"Telecommunications engineering",
"Physical quantities",
"Quantum optics",
"Superconductivity",
"Quantum mechanics",
"Measuring instruments",
"Particle detectors",
"Materials science",
"Superconducting detectors",
"Condensed matter physics",
"Sensors",
"Electrical resistance and conductance",
... |
53,909,617 | https://en.wikipedia.org/wiki/Zhan%20catalyst | A Zhan catalyst is a type of ruthenium-based organometallic complex used in olefin metathesis. This class of chemicals is named after the chemist who first synthesized them, Zheng-Yun J. Zhan.
These catalysts are ruthenium complexes with functionally substituted alkoxybenzylidene carbene ligands, which can be chemically bonded to the surface of resins, PEG chains, and polymers. Like the structurally similar Hoveyda-Grubbs catalyst, they contain an isopropoxystyrene moiety, but include an extra electron-withdrawing sulfonamide group attached to the carbon para to the phenol oxygen. Of the three catalysts, Zhan Catalyst-1B and -1C both contain a dimethylsulfonamide moiety attached to the aryl ring, while Zhan Catalyst-II is connected to a resin via a sulfonamide linker.
History
The Zhan catalysts were inspired by previous work in the olefin metathesis field. Robert H. Grubbs first reported the first and second generations of Ru catalysts, beginning in 1992, with good metathesis activity. However, the catalysts containing the tricyclohexylphosphine ligand were unstable to air and water, and their catalytic activity was not good enough for some multiply substituted olefin substrates.
In 1999, Amir H. Hoveyda showed that alkoxybenzylidene ligand based Ru catalysts offered higher activity and better stability than their Grubbs counterparts without these ligands. Later, Grela (2002) and Blechert (2003) further improved catalyst activity by incorporating substitution to Hoveyda’s alkoxybenzylidene ligands. Zhan’s catalysts were first reported in 2007, and include electron-withdrawing groups like dimethylsulfonamide on the aryl ring. Zhan's second generation catalysts are also tethered to a resin or PEG-linked support via the sulfonamide group on the isopropoxystyrene.
As with other Grubbs-type catalysts with modified chelating benzylidenes, after one catalytic turnover the chelate is no longer associated with the propagating catalyst, meaning that the initiation rate, the rate of o-alkoxystyrene rechelation, and the rates of various catalyst decomposition events are the factors that differ between the Zhan catalysts and the parent Hoveyda–Grubbs catalysts. A mechanistic study by Plenio and coworkers in 2012 suggested that the Zhan compounds, like other Hoveyda-type catalysts, initiate by competing dissociative and interchange mechanisms, with the relative activation energies being a function of catalyst structure, olefin identity, and reaction conditions. However, nobody had been able to rigorously establish through experimentation how the various changes to the structure affected the catalytic activity of the complex. Engle, Luo, Houk, Grubbs, and coworkers developed a model that could rationalize initiation rates of ruthenium olefin metathesis catalysts with chelated benzylidenes, using a combination of organometallic synthesis, reaction kinetics, NMR spectroscopy, X-ray crystallography, and DFT calculations.
Preparation
In order to make the catalysts, the pre-complex is treated with CuCl and the isopropoxystyrene ligand.
The isopropoxystyrene ligand is prepared using an ortho-vinylation of the phenol with ethyne, using conditions first proposed by Masahiko Yamaguchi in 1998. Here, SnCl4 and Bu3N were added to ethyne to generate stannylacetylene, which is the active vinylating species in this C–C bond formation. After coupling, the phenol can be alkylated using i-PrBr and a base.
Recycling
The Zhan catalysts can be recovered and recycled by simple precipitation or filtration. Zhan Catalyst-1B and -1C are soluble in dichloromethane, dichloroethane, chloroform, ether, and other solvents, but insoluble in methanol, ethanol, and other alcohols. Zhan Catalyst-II is linked to a resin- and PEG-linked support, offering a great advantage in recyclable utility, and leaving little or no trace of metal contamination within the product of olefin metathesis reactions. These catalysts can then be reused.
References
Organoruthenium compounds
Catalysts
Sulfonamides
B
Ruthenium(II) compounds | Zhan catalyst | [
"Chemistry"
] | 965 | [
"Catalysis",
"Catalysts",
"Chemical kinetics"
] |
50,078,940 | https://en.wikipedia.org/wiki/Cyclorotor | A cyclorotor, cycloidal rotor, cycloidal propeller or cyclogiro is a fluid propulsion device that converts shaft power into the acceleration of a fluid using a rotating axis perpendicular to the direction of fluid motion. It uses several blades with a spanwise axis parallel to the axis of rotation and perpendicular to the direction of fluid motion. These blades are cyclically pitched twice per revolution to produce force (thrust or lift) in any direction normal to the axis of rotation. Cyclorotors are used for propulsion, lift, and control on air and water vehicles. An aircraft using cyclorotors as the primary source of lift, propulsion, and control is known as a cyclogyro or cyclocopter. A unique aspect is that it can change the magnitude and direction of thrust without the need to tilt any aircraft structures. The patented application, used on ships with mechanical or hydraulic actuation mechanisms, is named after the German company Voith Turbo.
Operating principle
Cyclorotors produce thrust by the combined action of the blades' orbital motion about a central axis and an oscillation of the blades that changes their angle of attack over time. The joint action of the advancement produced by the orbital motion and the pitch-angle variation generates higher thrust at low speed than other propeller types. In hover, the blades are actuated to a positive pitch (outward from the centre of the rotor) on the upper half of their revolution and a negative pitch (inward towards the axis of rotation) over the lower half, inducing a net upward aerodynamic force and an opposite fluid downwash. By varying the phase of this pitch motion, the force can be shifted to any perpendicular angle or even downward. Before blade stall, increasing the amplitude of the pitching kinematics will magnify thrust.
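The pitch schedule described above (positive over the upper half-revolution, negative over the lower, with the phase setting the thrust direction) can be sketched numerically; the sinusoidal pitch law, amplitude, and phase below are illustrative assumptions, not the kinematics of any particular rotor:

import numpy as np

psi_max = np.radians(25.0)  # pitch amplitude (assumed)
phi = np.radians(0.0)       # phase of the pitch cycle; shifting it rotates
                            # the direction of the net force

theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)  # blade azimuth
pitch = psi_max * np.sin(theta - phi)  # positive on one half-revolution,
                                       # negative on the other

for th, p in zip(theta, pitch):
    print(f"azimuth {np.degrees(th):5.1f} deg -> pitch {np.degrees(p):6.1f} deg")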
History
The origins of the cycloidal propeller are Russian and relate to aeronautics. Sverchkov's "Samoljot" (St. Petersburg, 1909), or "wheel orthopter", was the first vehicle expressly designed to use this type of propulsion. Its scheme was close to that of a cyclogyro, but it is difficult to classify precisely. It had three flat surfaces and a rudder; the rear edge of one of the surfaces could be bent, replacing the action of an elevator. Lift and thrust were to be created by paddle wheels consisting of 12 blades, arranged in pairs at a 120° angle. The concave blades changed their angle of incidence by means of eccentrics and springs. A 10 hp engine was mounted in the bottom of the craft, with transmission by belt. Empty weight was about 200 kg. "Samoljot" was constructed by the military engineer E.P. Sverchkov with grants from the Main Engineering Agency in St. Petersburg in 1909 and was demonstrated at the Newest Inventions Exhibition, where it won a medal. However, it failed its preliminary tests and never flew.
In 1914, the Russian inventor and scientist A.N. Lodygin approached the Russian government with a project for a cyclogyro-like aircraft; its scheme was similar to Sverchkov's "Samoljot". The project was not carried out.
In 1933, experiments in Germany by Adolf Rohrbach resulted in a paddle-wheel wing arrangement. Oscillating winglets went from positive to negative angles of attack during each revolution to create lift, and their eccentric mounting would, in theory, produce nearly any combination of horizontal and vertical forces. The DVL evaluated Rohrbach's design, but the foreign aviation journals of the time cast doubt on the soundness of the design which meant that funding for the project could not be raised, even with a latter proposal as a Luftwaffe transport aircraft. There appears to be no evidence that this design was ever built, let alone flown. Based on Rohrbach's paddle-wheel research, however, Platt in the US designed by 1933 his own independent Cyclogyro. His paddle-wheel wing arrangement was awarded a US patent (which was only one of many similar patents on file), and underwent extensive wind-tunnel testing at MIT in 1927. Despite this, there is no evidence Platt's aircraft was ever built.
The first operative cycloidal propulsion was developed at Voith. Its origins date to the decision of the Voith company to focus on the business of transmission gear assemblies for turbines. The Voith propeller was based on the fluid-dynamics know-how gained from previous turbine projects. It was invented by Ernst Schneider and enhanced by Voith, and was launched under the name Voith-Schneider Propeller (VSP) for commercial vessels. This new marine drive could significantly improve the manoeuvrability of a ship, as demonstrated in successful sea trials on the test boat Torqueo in 1937. The first Voith Schneider Propellers were put into operation in the narrow canals of Venice, Italy. During the 1937 World Fair in Paris, Voith was awarded the grand prize three times for its exhibition of Voith Schneider Propellers and Voith turbo-transmissions. A year later, two of Paris' fire-fighting boats started operating with the new VSP system.
Design advantages and challenges
Rapid thrust vectoring
Cyclorotors provide a high degree of control. Traditional propellers, rotors, and jet engines produce thrust only along their axis of rotation and require rotation of the entire device to alter the thrust direction. This rotation requires large forces and comparatively long time scales since the propeller inertia is considerable, and the rotor gyroscopic forces resist rotation. For many practical applications (helicopters, airplanes, ships) this requires rotating the entire vessel. In contrast, cyclorotors need only to vary the blade pitch motions. Since there is little inertia associated with blade pitch change, thrust vectoring in the plane perpendicular to the axis of rotation is rapid.
High advance ratio thrust and symmetric lift
Cyclorotors can produce lift and thrust at high advance ratios, which, in theory, would enable a cyclogyro aircraft to fly at subsonic speeds well exceeding those of single rotor helicopters.
Single rotor helicopters are limited in forward speed by a combination of retreating blade stall and sonic blade tip constraints. As helicopters fly forward, the tip of the advancing blade experiences a wind velocity that is the sum of the helicopter forward speed and the rotor rotational speed. This value cannot exceed the speed of sound if the rotor is to be efficient and quiet. Slowing the rotor rotational speed avoids this problem, but presents another. By the usual composition of velocities, the flow velocity experienced by the retreating blade is the vector sum of the blade's rotational velocity and the freestream velocity, so at a sufficiently high advance ratio the airspeed over the retreating blade is low. The blade may then reach the stall condition, in which case the stalling blade must increase its pitch angle to keep some lift capability. This risk puts constraints on the design of the system: an accurate choice of the wing profile is necessary, as is careful dimensioning of the rotor radius for the specified speed range.
Slow-speed cyclorotors bypass this problem through a horizontal axis of rotation and operation at a comparatively low blade tip speed. For higher speeds, which may become necessary for industrial applications, it seems necessary to adopt more sophisticated strategies and solutions. One solution is independent actuation of the blades, which has recently been patented and successfully tested for naval use with a hydraulic actuation system. With a horizontal axis of rotation the upper blades always advance, so they always produce positive lift for the full rotor. These characteristics could help overcome two issues of helicopters: their low energy efficiency and the advance ratio limitation.
Unsteady aerodynamics
The advancement and the oscillation of the blades are the two dynamic actions produced by a cyclorotor, so its wing-blades operate differently from a traditional aircraft wing or a traditional helicopter blade. Each blade of a cyclorotor oscillates about a pivot point that itself traces a circle. The combination of the advancing motion of the blade's centre of rotation and the pendulum-like oscillation of the blade, which continuously varies its pitch, generates a complex set of aerodynamic phenomena:
the delay of the blade stall;
an increase of the maximum blade lift coefficient at low Reynolds numbers.
The two effects are evidently correlated with a general increase of the thrust produced.
Compared to a helicopter rotor or any other propeller, the same blade section in a cyclorotor produces much more thrust at the same Reynolds number. This effect can be explained by considering the behavior of a traditional propeller.
At low Reynolds numbers there is little turbulence and laminar flow conditions can be reached. For a traditional wing profile, those conditions minimize the speed difference between the upper and lower faces of the wing, so both lift and stall speed are reduced; a consequence is a reduction of the angle of attack at which stall is reached.
In this regime, conventional propellers and rotors must use larger blade area and rotate faster to achieve the same propulsive forces, and they lose more energy to blade drag, making a cyclorotor considerably more energy efficient under these conditions.
Actual cyclorotors bypass this problem by quickly increasing and then decreasing blade angle of attack, which temporarily delays stall and achieves a high lift coefficient. This unsteady lift makes cyclorotors more efficient at small scales, low velocities, and high altitudes than traditional propellers.
Many living beings, such as birds and some insects, are nevertheless still much more efficient, because they can change not only the pitch but also the shape of their wings, or can alter the properties of the boundary layer, as sharkskin does.
Some research tries to acquire the same level of efficiency of the natural examples of wings or surfaces. One direction is to introduce morphing wing concepts. Another relates to the introduction of boundary layer control mechanisms, such as dielectric barrier discharge.
Noise
During experimental evaluation, cyclorotors produced little aerodynamic noise. This is likely due to the lower blade tip speeds, which produce lower intensity turbulence following the blades.
Hovering thrust efficiency
In small-scale tests, cyclorotors achieved a higher power loading than comparable scale traditional rotors at the same disk loading. This is attributed to utilizing unsteady lift and consistent blade aerodynamic conditions. The rotational component of velocity on propellers increases from root to tip and requires blade chord, twist, airfoil, etc., to be varied along the blade. Since the cyclorotor blade span is parallel to the axis of rotation, each spanwise blade section operates at similar velocities and the entire blade can be optimized.
Structural considerations
Cyclorotor blades require support structure for their positioning parallel to the rotor axis of rotation. This structure, sometimes referred to as "spokes," adds to the parasite drag and weight of the rotor. Cyclorotor blades are also centrifugally loaded in bending (as opposed to the axial loading on propellers), which requires blades with an extremely high strength to weight ratio or intermediate blade support spokes. Early 20th century cyclorotors featured short blade spans, or additional support structure to circumvent this problem.
Blade pitch considerations
Cyclorotors require continuously actuated blade pitch. The relative flow angle experienced by the blades as they rotate about the rotor varies substantially with advance ratio and rotor thrust. To operate most efficiently, a blade pitch mechanism should adjust for these diverse flow angles. High rotational velocities make it difficult to implement an actuator-based mechanism, which calls for a fixed or variable-shape track for pitch control, mounted parallel to the blade trajectory, on which blade followers such as rollers or air pads are placed; the shape of the pitch-control track reliably determines the blade's pitch along the orbit regardless of the blade's RPM. While the pitching motions used in hover are not optimized for forward flight, in experimental evaluation they were found to provide efficient flight up to an advance ratio near one.
Applications
Wind turbines
Wind turbines are a potential application of cyclorotors. In this application they are called variable-pitch vertical-axis wind turbines, with large benefits over conventional VAWTs; turbines of this kind are claimed to overcome most of the traditional limitations of Darrieus VAWTs.
Ship propulsion and control
The most widespread application of cyclorotors is for ship propulsion and control. In ships the cyclorotor is mounted with the axis of rotation vertical so that thrust can quickly be vectored in any direction parallel to the plane of the water surface. In 1922, Frederick Kirsten fitted a pair of cyclorotors to a 32 ft boat in Washington, which eliminated the need for a rudder and provided extreme manoeuvrability. While the idea floundered in the United States after the Kirsten-Boeing Propeller Company lost a US Navy research grant, the Voith-Schneider propeller company successfully commercially employed the propeller. This Voith-Schneider propeller was fitted to more than 100 ships prior to the outbreak of the Second World War. Today, the same company sells the same propeller for highly manoeuvrable watercraft. It is applied on offshore drilling ships, tugboats, and ferries.
Aircraft
Cyclogyros
A cyclogyro is a vertical takeoff and landing aircraft using a cyclorotor as a rotor wing for lift and often also for propulsion and control. Advances in cyclorotor aerodynamics made the first untethered model cyclogyro flight possible in 2011 at the Northwestern Polytechnic Institute in China. Since then, universities and companies have successfully flown small-scale cyclogyros in several configurations.
The performance of traditional rotors is severely deteriorated at low Reynolds Numbers by low angle-of-attack blade stall. Current hover-capable MAVs can stay aloft for only minutes. Cyclorotor MAVs (very small scale cyclogyros) could utilize unsteady lift to extend endurance. The smallest cyclogyro flown to date weighs only 29 grams and was developed by the advanced vertical flight laboratory at Texas A&M university.
Commercial cyclogyro UAVs are being developed by D-Daelus, Pitch Aeronautics, and CycloTech.
Airship propulsion and control
A large exposed area makes airships susceptible to gusts and difficult to take off, land, or moor in windy conditions. Propelling airships with cyclorotors could enable flight in more severe atmospheric conditions by compensating for gusts with rapid thrust vectoring. Following this idea, the US Navy seriously considered fitting six primitive Kirsten-Boeing cyclorotors to the airship USS Shenandoah. The Shenandoah crashed while transiting a squall line on 3 September 1925, before any possible installation and testing. No large scale tests have been attempted since, but a cyclorotor airship demonstrated improved performance over a traditional airship configuration in a test.
See also
References
External links
https://www.cyclotech.at/
Aerodynamics
Propulsion
Propellers | Cyclorotor | [
"Chemistry",
"Engineering"
] | 3,164 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
50,081,218 | https://en.wikipedia.org/wiki/Jetboard%20%28Hydroflight%20Sports%29 | In hydroflight sports, a jetboard is a device that uses water propulsion as its means of flying above the surface of any body of water. In jetboarding, the athlete is standing in wakeboard-style boots/bindings which are attached to a board or independent base plates with jets extending downward from under the feet. The aim in jetboarding is to perform tricks such as dolphin dives, spins and backflips (even multiples), and combinations of two or more tricks.
Competitive jetboarding began in 2012 with the first Flyboard World Cup, a freestyle trick jetboard competition in which athletes are critiqued by a panel of judges. The world's first open competitive event for hydroflight jetboards, Session One, was held at the LTS Training school in Pompano Beach, Florida, in 2016.
References
External links
Jetboard France
Powerski
Dolphin Board
Flyboard
X-Jets Jetblade
Defy Jetdeck
Flydive X-Board (archived)
Hydroflight
Water sports equipment
Personal water craft
Ultralight aircraft
Aircraft configurations | Jetboard (Hydroflight Sports) | [
"Engineering"
] | 211 | [
"Aircraft configurations",
"Aerospace engineering"
] |
50,081,323 | https://en.wikipedia.org/wiki/MOF-5 | MOF-5 or IRMOF-1 is a cubic metal–organic framework compound with the formula Zn4O(BDC)3, where BDC2− = 1,4-benzenedicarboxylate. It was first synthesized by graduate students and postdoctoral scholars in the lab of Omar M. Yaghi. MOF-5 is notable for exhibiting one of the highest surface area to volume ratios among metal–organic frameworks, at 2200 m2/cm3. Additionally, it was the first metal–organic framework studied for hydrogen gas storage.
References
Zinc complexes
Metal-organic frameworks | MOF-5 | [
"Chemistry",
"Materials_science",
"Engineering"
] | 134 | [
"Porous polymers",
"Materials science stubs",
"Metal-organic frameworks",
"Materials science"
] |
50,083,594 | https://en.wikipedia.org/wiki/ST2-PT | ST2-PT (Single Transition-to-single Transition Polarization Transfer) is a method of sensitivity enhancement in NMR spectroscopy, developed by K.V. Pervushin, G. Wider, and K. Wüthrich in 1998.
This method affords a sensitivity enhancement for kinetically stable amide 15N–1H groups in proteins.
References
Nuclear magnetic resonance
Nuclear magnetic resonance experiments | ST2-PT | [
"Physics",
"Chemistry"
] | 82 | [
"Nuclear magnetic resonance",
"Nuclear magnetic resonance experiments",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance stubs",
"Nuclear physics"
] |
50,090,755 | https://en.wikipedia.org/wiki/Bluetooth%20Low%20Energy%20beacon | Bluetooth beacons are hardware transmitters — a class of Bluetooth Low Energy (LE) devices that broadcast their identifier to nearby portable electronic devices. The technology enables smartphones, tablets and other devices to perform actions when in close proximity to a beacon.
Bluetooth beacons use Bluetooth Low Energy proximity sensing to transmit a universally unique identifier picked up by a compatible app or operating system. The identifier and several bytes sent with it can be used to determine the device's physical location, track customers, or trigger a location-based action on the device such as a check-in on social media or a push notification.
One application is distributing messages at a specific point of interest, for example a store, a bus stop, a room or a more specific location like a piece of furniture or a vending machine. This is similar to previously used geopush technology based on GPS, but with a much reduced impact on battery life and much extended precision.
Another application is an indoor positioning system, which helps smartphones determine their approximate location or context. With the help of a Bluetooth beacon, a smartphone's software can approximately find its relative location to a Bluetooth beacon in a store. Brick and mortar retail stores use the beacons for mobile commerce, offering customers special deals through mobile marketing, and can enable mobile payments through point of sale systems.
Bluetooth beacons differ from some other location-based technologies as the broadcasting device (beacon) is only a 1-way transmitter to the receiving smartphone or receiving device, and necessitates a specific app installed on the device to interact with the beacons. Thus only the installed app, and not the Bluetooth beacon transmitter, can track users.
Bluetooth beacon transmitters come in a variety of form factors, including small coin cell devices, USB sticks, and generic Bluetooth 4.0 capable USB dongles.
History and development
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Dr. Nils Rydbeck, CTO at Ericsson Mobile in Lund, and Dr. Johan Ullman. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, SE 8902098–6, issued 1989-06-12 and SE 9202239, issued 1992-07-24. Since its creation the Bluetooth standard has gone through many generations, each adding different features. Bluetooth 1.2 allowed for faster speeds up to ≈700 kbit/s. Bluetooth 2.0 improved on this with speeds up to 3 Mbit/s. Bluetooth 2.1 improved device pairing speed and security. Bluetooth 3.0 again improved transfer speed, up to 24 Mbit/s. In 2010 Bluetooth 4.0 (Low Energy) was released with its main focus being reduced power consumption. Before Bluetooth 4.0 the majority of Bluetooth connections were two-way: both devices listen and talk to each other. Although this two-way communication is still possible with Bluetooth 4.0, one-way communication is also possible, allowing a Bluetooth device to transmit information but not listen for it. These one-way "beacons" do not require a paired connection like previous Bluetooth devices, so they have new useful applications.
Design
Battery powered
Bluetooth beacons operate using the Bluetooth 4.0 Low Energy standard, so battery powered devices are possible. Battery life of devices varies depending on the manufacturer. The Bluetooth LE protocol is significantly more power efficient than Bluetooth Classic. Several chipset makers, including Texas Instruments and Nordic Semiconductor, now supply chipsets optimized for iBeacon use. Power consumption depends on the iBeacon configuration parameters of advertising interval and transmit power. Battery life can range from 1 to 48 months. Apple's recommended setting of a 100 ms advertising interval with a coin cell battery provides for 1–3 months of life, which increases to 2–3 years as the advertising interval is increased to 900 ms.
Battery consumption of the phones is a factor that must be taken into account when deploying beacon-enabled apps. A recent report has shown that older phones tend to draw more battery power in the vicinity of iBeacons, while newer phones can be more efficient in the same environment. In addition to the time the phone spends scanning, the number of scans and the number of beacons in the vicinity are also significant factors in battery drain. An energy-efficient iBeacon application needs to consider these aspects in order to strike a good balance between app responsiveness and battery consumption.
USB powered
Bluetooth beacons can also come in the form of USB dongles. These small USB beacons can be powered by a standard USB port which makes them ideal for long term permanent installations.
Uses
Advertising
Bluetooth beacons can be used to send a packet of information that contains a Universally Unique Identifier (UUID). This UUID is used to trigger events specific to that beacon. In the case of Apple's iBeacon, the UUID will be recognized by an app on the user's device that will trigger an event. This event is fully customizable by the app developer, but in the case of advertising the event might be a push notification with an ad. However, with a UID-based system the user's device must connect to an online server which is capable of understanding the beacon's UUID. Once the UUID is sent to the server, the appropriate message action is sent to the user's device.
Other methods of advertising are also possible with beacons. URIBeacon and Google's Eddystone allow for a URI transmission mode that, unlike iBeacon's UID, doesn't require an outside server for recognition. URI beacons transmit a URI, which could be a link to a webpage, and the user sees that URI directly on their phone.
Notification and interaction
Beacons can be associated with the artworks in a museum to encourage further interaction. For example, a notification can be sent to a user's mobile device when the user is in proximity to a particular artwork. The notification alerts the user to the nearby artwork, and if the user indicates further interest, a specific app can be installed to interact with the encountered artwork.
In general, a native app is needed for a mobile device to interact with the beacon if the beacon uses the iBeacon protocol; whereas if Eddystone is employed, the user can interact with the artwork through a Physical Web URL broadcast by the Eddystone beacon.
Indoor navigation
Indoor positioning with beacons falls into three categories. Implementations with many beacons per room, implementations with one beacon per room, and implementations with a few beacons per building. Indoor navigation with Bluetooth is still in its infancy but attempts have been made to find a working solution.
Many beacons per room
With multiple beacons per room, trilateration can be used to estimate a user's position to within about 2 meters. Bluetooth beacons are capable of transmitting their Received Signal Strength Indicator (RSSI) value in addition to other data. This RSSI value is calibrated by the manufacturer of the beacon to be the signal strength of the beacon at a known distance, typically one meter. Using the known output signal strength of the beacon and the signal strength observed by the receiving device, an approximation can be made of the distance between the beacon and the device. However, this approximation is not very reliable, so for more accurate position tracking other methods are preferred. Since its release in 2010 many studies have been conducted on using Bluetooth beacons for tracking. A few methods have been tested to find the best way of combining the RSSI values for tracking. Neural networks have been proposed as a good way of reducing the error in estimation. A stigmergic approach has also been tested; this method uses an intensity map to estimate a user's location.
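A minimal sketch of the ranging and trilateration just described; the log-distance path-loss model is a standard assumption, and the calibration constant, path-loss exponent, beacon positions, and RSSI readings are invented example values:

import numpy as np

def rssi_to_distance(rssi, rssi_at_1m=-59.0, n=2.0):
    # estimate distance in meters from an RSSI reading (dBm) using the
    # log-distance path-loss model with exponent n
    return 10.0 ** ((rssi_at_1m - rssi) / (10.0 * n))

beacons = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])  # known positions (m)
d = np.array([rssi_to_distance(r) for r in (-65.0, -70.0, -72.0)])

# Linearize |p - b_i|^2 = d_i^2 against the first beacon and solve A p = c
# by least squares.
A = 2.0 * (beacons[1:] - beacons[0])
c = (d[0] ** 2 - d[1:] ** 2
     + np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2))
p, *_ = np.linalg.lstsq(A, c, rcond=None)
print("estimated position (m):", p)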
Bluetooth LE specification 5.1 added further, more precise methods for position determination using multiple beacons.
One beacon per room
With only one beacon per room, a user can use their known room position in conjunction with a virtual map of all the rooms to navigate a building. A building with many separate rooms may need a different beacon configuration for navigation. With one beacon in each room a user can use an app to know the room they are in, and a simple shortest path algorithm can be used to give them the best route to the room they are looking for (see the sketch after this paragraph). This configuration requires a digital map of the building, but attempts have been made to make this map creation easier.
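A sketch of the room-graph routing just described; the floor plan is a made-up example, and breadth-first search stands in for the unspecified "simple shortest path algorithm":

from collections import deque

rooms = {  # adjacency list: room -> directly connected rooms (hypothetical)
    "lobby": ["hall"],
    "hall": ["lobby", "office_a", "office_b"],
    "office_a": ["hall"],
    "office_b": ["hall", "lab"],
    "lab": ["office_b"],
}

def shortest_route(start, goal):
    # breadth-first search; returns the list of rooms to walk through
    queue, parent = deque([start]), {start: None}
    while queue:
        room = queue.popleft()
        if room == goal:
            path = []
            while room is not None:
                path.append(room)
                room = parent[room]
            return path[::-1]
        for nxt in rooms[room]:
            if nxt not in parent:
                parent[nxt] = room
                queue.append(nxt)
    return None  # no route found

print(shortest_route("lobby", "lab"))  # ['lobby', 'hall', 'office_b', 'lab']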
Few beacons per building
Beacons can be used in conjunction with pedestrian dead reckoning techniques to add checkpoints to a large open space. PDR uses a known last location in conjunction with direction and speed information provided by the user to estimate a person's location. This technique can be used to estimate a person's location as they walk through a building. Using Bluetooth beacons as checkpoints the user's location can be recalculated to reduce error. In this way a few Bluetooth beacons can be used to cover a large area like a mall.
Healthcare
Using the device tracking capabilities of Bluetooth beacons, in-home patient monitoring is possible: a person's movements and activities can be tracked in their home. Bluetooth beacons are a good alternative to in-house cameras due to their increased level of privacy. Additionally, Bluetooth beacons can be used in hospitals or other workplaces to ensure workers meet certain standards. For example, a beacon may be placed at a hand sanitizer dispenser in a hospital; the beacons can help ensure employees are using the station regularly.
Tracker
One use of beacons is as a "key finder" where a beacon is attached to, for example, a keyring and a smartphone app can be used to track the last time the device came in range.
Another similar use is to track pets, objects (e.g. baggage) or people. The precision and range of BLE do not match GPS, but beacons are significantly less expensive. Several commercial and free solutions exist, which are based on proximity detection, not precise positioning. For example, Nivea launched a "kid-tracker" campaign in Brazil in 2014.
Beacon protocols
iBeacon
In mid-2013, Apple introduced iBeacons, and experts wrote about how it is designed to help the retail industry by simplifying payments and enabling on-site offers. On December 6, 2013, Apple activated iBeacons across its 254 US retail stores. McDonald's has used the devices to give special offers to consumers in its fast-food stores. As of May 2014, hardware iBeacons could be purchased for as little as $5 per device to more than $30 per device. Each of these iBeacons has varying default settings for transmit power and iBeacon advertisement frequency: some hardware iBeacons advertise at as low as 1 Hz while others can be as fast as 10 Hz.
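A sketch of decoding the iBeacon fields from the manufacturer-specific advertising data. The layout used (Apple company ID 0x004C, subtype 0x02, length 0x15, then a 16-byte proximity UUID, big-endian major and minor, and a signed TX-power byte calibrated at 1 m) is the widely documented one; the sample payload bytes are invented for illustration:

import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    company, subtype, length = struct.unpack_from("<HBB", mfg_data, 0)
    if company != 0x004C or subtype != 0x02 or length != 0x15:
        return None  # not an iBeacon frame
    beacon_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack_from(">HH", mfg_data, 20)
    tx_power = struct.unpack_from("b", mfg_data, 24)[0]  # dBm at 1 m, signed
    return beacon_uuid, major, minor, tx_power

sample = bytes.fromhex(
    "4c000215"                          # Apple, iBeacon subtype, length 21
    "f7826da64fa24e988024bc5b71e0893e"  # proximity UUID (example value)
    "0001" "0002" "c5")                 # major=1, minor=2, tx_power=-59 dBm
print(parse_ibeacon(sample))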
AltBeacon
AltBeacon is an open source alternative to iBeacon created by Radius Networks.
URIBeacon
URIBeacons are different from iBeacons and AltBeacons because rather than broadcasting an identifier, they send an URL which can be understood immediately.
Eddystone
Eddystone is Google's standard for Bluetooth beacons. It supports three types of packets: Eddystone-UID, Eddystone-URL, and Eddystone-TLM. Eddystone-UID functions in a very similar way to Apple's iBeacon; however, it supports additional telemetry data with Eddystone-TLM. The telemetry information is sent along with the UID data. The beacon information available includes battery voltage, beacon temperature, number of packets sent since last startup, and beacon uptime. Using the Eddystone protocol, Google built the now-discontinued Google Nearby, which allowed Android users to receive beacon notifications without an app.
Comparable technologies
Although the near-field communication (NFC) environment is very different and has many non-overlapping applications, it is still compared with iBeacons.
NFC range is up to 20 cm (7.87 inches) but the optimal range is < 4 cm (1.57 inches). iBeacons have a significantly larger range.
NFC can be either passive or active. When using passive mode, the power is sent from the reader device. Although Passif (bought by Apple Inc.) has worked on reducing the energy consumption, a battery pack is still needed inside iBeacon tags at this time.
Most Android smart devices ship with both Bluetooth 4.0 LE and NFC support. On September 19, 2014, Apple also released the iPhone 6 and iPhone 6 Plus, which support the NFC standard, but only for payments.
See also
Eddystone
Facebook Bluetooth Beacon
Electric beacon
Pseudolite
Nearables
iBeacon
References
Bluetooth
Radio-frequency identification
Indoor positioning system
Radio geopositioning | Bluetooth Low Energy beacon | [
"Technology",
"Engineering"
] | 2,652 | [
"Radio electronics",
"Radio geopositioning",
"Wireless locating",
"Wireless networking",
"Indoor positioning system",
"Radio-frequency identification",
"Bluetooth"
] |
50,091,659 | https://en.wikipedia.org/wiki/Modeling%20of%20polymer%20crystals | Polymer crystals have different properties than simple atomic crystals. They possess high density and long-range order, but they are not isotropic: they show anisotropy and a limited conformation space. However, just as atomic crystals have lattices, polymer crystals also exhibit a periodic structure called a lattice, which describes the repetition of the unit cells in space. The simulation of polymer crystals is complex and draws not on one field alone but on both solid-state and fluid-state physics. Polymer crystals have unit cells that consist of tens of atoms, while the molecules themselves comprise 10⁴ to 10⁶ atoms.
Computational methods
There are two classes of method for the study of polymer crystals: 1) optimization methods and 2) sampling methods. Optimization methods have some advantages over sampling methods, such as the localization of crystals in phase space. Sampling methods generally cannot localize the crystals, and thus make no localization assumptions. Optimization methods include molecular mechanics and lattice dynamics, and sampling methods include the Monte Carlo method and molecular dynamics. A brief discussion of the methods follows:
Optimization method: In this method, the polymer crystal is optimized under the idealizing assumption that it is free of disorder. The relevant part of the energy surface can then be approximated to arbitrary accuracy by a Taylor series expansion in small displacements about the local minimum energy structure. Because the crystal is localized in this way, wave vectors and oscillation frequencies can be introduced, and the elastic stiffness moduli with which the modeling is done can be computed (a harmonic expansion of this kind is sketched after this list).
Sampling method: There is no localization of crystals, and the sampling method also removes the restriction to an approximate lattice. There are disadvantages to this approach: a) The Monte Carlo and molecular dynamics methods must use very small polymer crystals for the simulation; with current-generation computers they can simulate systems only on the order of 10³ to 10⁴ atoms. Periodic boundary conditions are therefore imposed, and atoms outside the simulation box have to be in phase with the atoms inside the box. b) Due to the heavy computational burden, simple interatomic models are more prevalent in the Monte Carlo and molecular dynamics methods.
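A compact sketch of the harmonic expansion invoked by the optimization route above, in standard lattice-dynamics notation introduced here rather than taken from the text (displacements u about the minimum-energy structure, force-constant matrix Φ, dynamical matrix D(k) for wave vector k, masses m and m′):

E(\mathbf{u}) \approx E_0 + \tfrac{1}{2}\,\mathbf{u}^{\mathsf{T}} \Phi\, \mathbf{u},
\qquad
D(\mathbf{k}) = \sum_{\ell} \frac{\Phi(\ell)}{\sqrt{m\, m'}}\, e^{i \mathbf{k} \cdot \mathbf{r}_\ell},
\qquad
\det\left(D(\mathbf{k}) - \omega^2 I\right) = 0,

so the oscillation frequencies ω(k) follow from the eigenvalues of the dynamical matrix, and second derivatives of the energy with respect to homogeneous strains give the elastic stiffness moduli used in the modeling.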
There is a variety of methods for studying polymer crystals by molecular simulation. It is especially important in polymer crystals to be cognizant of the limitations imposed by either the assumptions on which a method is based or the robustness of the simulation method.
See also
Polymer engineering
Crystal system
Crystal structure
Liquid crystal
Crystallography
Conductive polymer
Crystallization of polymers
Path integrals in polymer science
Multiscale modeling
References
Polymers | Modeling of polymer crystals | [
"Chemistry",
"Materials_science"
] | 550 | [
"Polymers",
"Polymer chemistry"
] |
57,264,039 | https://en.wikipedia.org/wiki/Einstein%27s%20thought%20experiments | A hallmark of Albert Einstein's career was his use of visualized thought experiments (Gedankenexperimente) as a fundamental tool for understanding physical issues and for elucidating his concepts to others. Einstein's thought experiments took diverse forms. In his youth, he mentally chased beams of light. For special relativity, he employed moving trains and flashes of lightning to explain his most penetrating insights. For general relativity, he considered a person falling off a roof, accelerating elevators, blind beetles crawling on curved surfaces and the like. In his debates with Niels Bohr on the nature of reality, he proposed imaginary devices that attempted to show, at least in concept, how the Heisenberg uncertainty principle might be evaded. In a profound contribution to the literature on quantum mechanics, Einstein considered two particles briefly interacting and then flying apart so that their states are correlated, anticipating the phenomenon known as quantum entanglement.
Introduction
A thought experiment is a logical argument or mental model cast within the context of an imaginary (hypothetical or even counterfactual) scenario. A scientific thought experiment, in particular, may examine the implications of a theory, law, or set of principles with the aid of fictive and/or natural particulars (demons sorting molecules, cats whose lives hinge upon a radioactive disintegration, men in enclosed elevators) in an idealized environment (massless trapdoors, absence of friction). They describe experiments that, except for some specific and necessary idealizations, could conceivably be performed in the real world.
As opposed to physical experiments, thought experiments do not report new empirical data. They can only provide conclusions based on deductive or inductive reasoning from their starting assumptions. Thought experiments invoke particulars that are irrelevant to the generality of their conclusions. It is the invocation of these particulars that gives thought experiments their experiment-like appearance. A thought experiment can always be reconstructed as a straightforward argument, without the irrelevant particulars. John D. Norton, a well-known philosopher of science, has noted that "a good thought experiment is a good argument; a bad thought experiment is a bad argument."
When effectively used, the irrelevant particulars that convert a straightforward argument into a thought experiment can act as "intuition pumps" that stimulate readers' ability to apply their intuitions to their understanding of a scenario. Thought experiments have a long history. Perhaps the best known in the history of modern science is Galileo's demonstration that falling objects must fall at the same rate regardless of their masses. This has sometimes been taken to be an actual physical demonstration, involving his climbing up the Leaning Tower of Pisa and dropping two heavy weights off it. In fact, it was a logical demonstration described by Galileo in Discorsi e dimostrazioni matematiche (1638).
Einstein had a highly visual understanding of physics. His work in the patent office "stimulated [him] to see the physical ramifications of theoretical concepts." These aspects of his thinking style inspired him to fill his papers with vivid practical detail making them quite different from, say, the papers of Lorentz or Maxwell. This included his use of thought experiments.
Special relativity
Pursuing a beam of light
Late in life, Einstein recalled the thought experiment of his boyhood: at the age of sixteen, he imagined chasing after a beam of light and wondered what he would observe.
Einstein's recollections of his youthful musings are widely cited because of the hints they provide of his later great discovery. However, Norton has noted that Einstein's reminiscences were probably colored by a half-century of hindsight. Norton lists several problems with Einstein's recounting, both historical and scientific:
1. At 16 years old and a student at the Gymnasium in Aarau, Einstein would have had the thought experiment in late 1895 to early 1896. But various sources note that Einstein did not learn Maxwell's theory until 1898, in university.
2. A 19th century aether theorist would have had no difficulties with the thought experiment. Einstein's statement, "...there seems to be no such thing...on the basis of experience," would not have counted as an objection, but would have represented a mere statement of fact, since no one had ever traveled at such speeds.
3. An aether theorist would have regarded "...nor according to Maxwell's equations" as simply representing a misunderstanding on Einstein's part. Unfettered by any notion that the speed of light represents a cosmic limit, the aether theorist would simply have set velocity equal to c, noted that yes indeed, the light would appear to be frozen, and then thought no more of it.
Rather than the thought experiment being at all incompatible with aether theories (which it is not), the youthful Einstein appears to have reacted to the scenario out of an intuitive sense of wrongness. He felt that the laws of optics should obey the principle of relativity. As he grew older, his early thought experiment acquired deeper levels of significance: Einstein felt that Maxwell's equations should be the same for all observers in inertial motion. From Maxwell's equations, one can deduce a single speed of light, and there is nothing in this computation that depends on an observer's speed. Einstein sensed a conflict between Newtonian mechanics and the constant speed of light determined by Maxwell's equations.
Regardless of the historical and scientific issues described above, Einstein's early thought experiment was part of the repertoire of test cases that he used to check on the viability of physical theories. Norton suggests that the real importance of the thought experiment was that it provided a powerful objection to emission theories of light, which Einstein had worked on for several years prior to 1905.
Magnet and conductor
In the very first paragraph of Einstein's seminal 1905 work introducing special relativity, he describes an asymmetry in the then-current theoretical treatment of a magnet and a conductor in relative motion, an asymmetry that does not appear in the observed phenomena.
This opening paragraph recounts well-known experimental results obtained by Michael Faraday in 1831. The experiments describe what appeared to be two different phenomena: the motional EMF generated when a wire moves through a magnetic field (see Lorentz force), and the transformer EMF generated by a changing magnetic field (due to the Maxwell–Faraday equation). James Clerk Maxwell himself drew attention to this fact in his 1861 paper On Physical Lines of Force. In the latter half of Part II of that paper, Maxwell gave a separate physical explanation for each of the two phenomena.
Although Einstein calls the asymmetry "well-known", there is no evidence that any of Einstein's contemporaries considered the distinction between motional EMF and transformer EMF to be in any way odd or pointing to a lack of understanding of the underlying physics. Maxwell, for instance, had repeatedly discussed Faraday's laws of induction, stressing that the magnitude and direction of the induced current was a function only of the relative motion of the magnet and the conductor, without being bothered by the clear distinction between conductor-in-motion and magnet-in-motion in the underlying theoretical treatment.
Yet Einstein's reflection on this experiment represented the decisive moment in his long and tortuous path to special relativity. Although the equations describing the two scenarios are entirely different, there is no measurement that can distinguish whether the magnet is moving, the conductor is moving, or both.
In a 1920 review on the Fundamental Ideas and Methods of the Theory of Relativity (unpublished), Einstein related how disturbing he found this asymmetry.
Einstein needed to extend the relativity of motion that he perceived between magnet and conductor in the above thought experiment to a full theory. For years, however, he did not know how this might be done. The exact path that Einstein took to resolve this issue is unknown. We do know, however, that Einstein spent several years pursuing an emission theory of light, encountering difficulties that eventually led him to give up the attempt.
That decision ultimately led to his development of special relativity as a theory founded on two postulates. Einstein's original expression of these postulates was:
"The laws governing the changes of the state of any physical system do not depend on which one of two coordinate systems in uniform translational motion relative to each other these changes of the state are referred to.
Each ray of light moves in the coordinate system "at rest" with the definite velocity V independent of whether this ray of light is emitted by a body at rest or a body in motion."
In their modern form:
1. The laws of physics take the same form in all inertial frames.
2. In any given inertial frame, the velocity of light c is the same whether the light be emitted by a body at rest or by a body in uniform motion. [Emphasis added by editor]
Einstein's wording of the first postulate was one with which nearly all theorists of his day could agree. His second postulate expresses a new idea about the character of light. Modern textbooks combine the two postulates. One popular textbook expresses the second postulate as, "The speed of light in free space has the same value c in all directions and in all inertial reference frames."
Trains, embankments, and lightning flashes
The topic of how Einstein arrived at special relativity has been a fascinating one to many scholars: A lowly, twenty-six-year-old patent officer (third class), largely self-taught in physics and completely divorced from mainstream research, nevertheless in the year 1905 produced four extraordinary works (Annus Mirabilis papers), only one of which (his paper on Brownian motion) appeared related to anything that he had ever published before.
Einstein's paper, On the Electrodynamics of Moving Bodies, is a polished work that bears few traces of its gestation. Documentary evidence concerning the development of the ideas that went into it consists of, quite literally, only two sentences in a handful of preserved early letters, and various later historical remarks by Einstein himself, some of them known only second-hand and at times contradictory.
In regard to the relativity of simultaneity, Einstein's 1905 paper develops the concept vividly by carefully considering the basics of how time may be disseminated through the exchange of signals between clocks. In his popular work, Relativity: The Special and General Theory, Einstein translates the formal presentation of his paper into a thought experiment using a train, a railway embankment, and lightning flashes. The essence of the thought experiment is as follows:
Observer M stands on an embankment, while observer M′ rides on a rapidly traveling train. At the precise moment that M and M′ coincide in their positions, lightning strikes points A and B equidistant from M and M′.

Light from these two flashes reaches M at the same time, from which M concludes that the bolts were synchronous.

The combination of Einstein's first and second postulates implies that, despite the rapid motion of the train relative to the embankment, M′ measures exactly the same speed of light as does M. Since M′ was equidistant from A and B when lightning struck, the fact that M′ receives light from B before light from A means that to M′, the bolts were not synchronous. Instead, the bolt at B struck first.
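The size of the disagreement follows directly from the Lorentz transformation. The following sketch uses illustrative values (train speed, strike separation) chosen purely for the demonstration:

```python
# Two lightning strikes, simultaneous in the embankment frame, viewed
# from the train frame via the Lorentz transformation t' = g*(t - v*x/c^2).
import math

c = 299_792_458.0        # speed of light, m/s
v = 0.6 * c              # train speed (illustrative)
L = 1_000.0              # separation of strike points A and B, m

g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
t_A = g * (0.0 - v * (-L / 2) / c ** 2)   # strike A at x = -L/2, t = 0
t_B = g * (0.0 - v * (+L / 2) / c ** 2)   # strike B at x = +L/2, t = 0
print(f"t'_A = {t_A:+.3e} s, t'_B = {t_B:+.3e} s")
# t'_B < t'_A: in the train frame the bolt at B strikes first.
```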
A routine supposition among historians of science is that, in accordance with the analysis given in his 1905 special relativity paper and in his popular writings, Einstein discovered the relativity of simultaneity by thinking about how clocks could be synchronized by light signals. The Einstein synchronization convention was originally developed by telegraphers in the mid-19th century. The dissemination of precise time was an increasingly important topic during this period. Trains needed accurate time to schedule use of track, cartographers needed accurate time to determine longitude, while astronomers and surveyors dared to consider the worldwide dissemination of time to accuracies of thousandths of a second. Following this line of argument, Einstein's position in the patent office, where he specialized in evaluating electromagnetic and electromechanical patents, would have exposed him to the latest developments in time technology, which would have guided him in his thoughts towards understanding the relativity of simultaneity.
However, all of the above is supposition. In later recollections, when Einstein was asked about what inspired him to develop special relativity, he would mention his riding a light beam and his magnet and conductor thought experiments. He would also mention the importance of the Fizeau experiment and the observation of stellar aberration. "They were enough", he said. He never mentioned thought experiments about clocks and their synchronization.
The routine analyses of the Fizeau experiment and of stellar aberration, that treat light as Newtonian corpuscles, do not require relativity. But problems arise if one considers light as waves traveling through an aether, which are resolved by applying the relativity of simultaneity. It is entirely possible, therefore, that Einstein arrived at special relativity through a different path than that commonly assumed, through Einstein's examination of Fizeau's experiment and stellar aberration.
We therefore do not know just how important clock synchronization and the train and embankment thought experiment were to Einstein's development of the concept of the relativity of simultaneity. We do know, however, that the train and embankment thought experiment was the preferred means whereby he chose to teach this concept to the general public.
Relativistic center-of-mass theorem
Einstein proposed the equivalence of mass and energy in his final Annus Mirabilis paper. Over the next several decades, the understanding of energy and its relationship with momentum was further developed by Einstein and other physicists including Max Planck, Gilbert N. Lewis, Richard C. Tolman, Max von Laue (who in 1911 gave a comprehensive proof of E = mc² from the stress–energy tensor), and Paul Dirac (whose investigations of negative solutions in his 1928 formulation of the energy–momentum relation led to the 1930 prediction of the existence of antimatter).
Einstein's relativistic center-of-mass theorem of 1906 is a case in point. In 1900, Henri Poincaré had noted a paradox in modern physics as it was then understood: When he applied well-known results of Maxwell's equations to the equality of action and reaction, he could describe a cyclic process which would result in creation of a reactionless drive, i.e. a device which could displace its center of mass without the exhaust of a propellant, in violation of the conservation of momentum. Poincaré resolved this paradox by imagining electromagnetic energy to be a fluid having a given density, which is created and destroyed with a given momentum as energy is absorbed and emitted. The motions of this fluid would oppose displacement of the center of mass in such fashion as to preserve the conservation of momentum.
Einstein demonstrated that Poincaré's artifice was superfluous. Rather, he argued that mass-energy equivalence was a necessary and sufficient condition to resolve the paradox. In his demonstration, Einstein provided a derivation of mass-energy equivalence that was distinct from his original derivation. Einstein began by recasting Poincaré's abstract mathematical argument into the form of a thought experiment:
Einstein considered (a) an initially stationary, closed, hollow cylinder free-floating in space, of mass M and length L, (b) with some sort of arrangement for sending a quantity of radiative energy E (a burst of photons) from the left to the right. The radiation has momentum E/c. Since the total momentum of the system is zero, the cylinder recoils with a speed v = E/(Mc). (c) The radiation hits the other end of the cylinder in time t ≈ L/c (assuming v ≪ c), bringing the cylinder to a stop after it has moved through a distance Δx = vt = EL/(Mc²).
(d) The energy deposited on the right wall of the cylinder is transferred to a massless shuttle mechanism (e) which transports the energy to the left wall (f) and then returns to re-create the starting configuration of the system, except with the cylinder displaced to the left. The cycle may then be repeated.
The reactionless drive described here violates the laws of mechanics, according to which the center of mass of a body at rest cannot be displaced in the absence of external forces. Einstein argued that the shuttle cannot be massless while transferring energy from the right to the left. If energy E possesses the inertia E/c², the contradiction disappears.
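The bookkeeping can be checked numerically. The following sketch uses arbitrary demonstration values and the idealized rigid-cylinder treatment (whose limitations are discussed below); it verifies that assigning the radiation a mass m = E/c² leaves the center of mass unmoved:

```python
# Center-of-mass check for Einstein's cylinder thought experiment.
c = 299_792_458.0
M, L, E = 1.0, 1.0, 9e13        # kg, m, J -- arbitrary demo values

m = E / c ** 2                  # mass equivalent carried by the radiation
v = E / (M * c)                 # recoil speed of the cylinder
t = L / c                       # flight time of the light burst
dx = v * t                      # cylinder displacement, E*L/(M*c**2)

# Cylinder (mass M) moves -dx while the radiation mass m moves +L.
shift = (M * (-dx) + m * L) / (M + m)
print(f"dx = {dx:.3e} m, center-of-mass shift = {shift:.1e} m (~0 up to rounding)")
```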
Modern analysis suggests that neither Einstein's original 1905 derivation of mass-energy equivalence nor the alternate derivation implied by his 1906 center-of-mass theorem are definitively correct. For instance, the center-of-mass thought experiment regards the cylinder as a completely rigid body. In reality, the impulse provided to the cylinder by the burst of light in step (b) cannot travel faster than light, so that when the burst of photons reaches the right wall in step (c), the wall has not yet begun to move. Ohanian has credited von Laue (1911) as having provided the first truly definitive derivation of E = mc².
Impossibility of faster-than-light signaling
In 1907, Einstein noted that from the composition law for velocities, one could deduce that there cannot exist an effect that allows faster-than-light signaling.
Einstein imagined a strip of material that allows propagation of signals at the faster-than-light speed W > c (as viewed from the material strip). Imagine two observers, A and B, standing on the x-axis and separated by the distance L. They stand next to the material strip, which is not at rest, but rather is moving in the negative x-direction with speed v. A uses the strip to send a signal to B. From the velocity composition formula, the signal propagates from A to B with speed (W − v)/(1 − Wv/c²). The time T required for the signal to propagate from A to B is given by T = L(1 − Wv/c²)/(W − v).

The strip can move at any speed v < c. Given the starting assumption W > c, one can always choose v such that T < 0.
In other words, given the existence of a means of transmitting signals faster-than-light, scenarios can be envisioned whereby the recipient of a signal will receive the signal before the transmitter has transmitted it.
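A quick numerical check makes the sign flip concrete; units with c = 1 and the particular values of W, L, and v below are chosen for illustration only:

```python
# Einstein's 1907 argument, evaluated numerically (units with c = 1).
c = 1.0
W = 2.0 * c      # hypothetical faster-than-light signal speed, W > c
L = 1.0          # separation between observers A and B
v = 0.8 * c      # speed of the strip, v < c

T = L * (1 - W * v / c ** 2) / (W - v)
print(f"T = {T:+.3f}")   # negative: arrival precedes emission in this frame
```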
About this thought experiment, Einstein remarked that although such causality-violating signaling involved no purely logical contradiction, it conflicted so completely with the character of all experience that the assumption W > c seemed sufficiently disproven.
General relativity
Falling painters and accelerating elevators
In his unpublished 1920 review, Einstein related the genesis of his thoughts on the equivalence principle: while sitting in the patent office in Bern, it suddenly occurred to him that a person falling freely would not feel his own weight.
The realization "startled" Einstein, and inspired him to begin an eight-year quest that led to what is considered to be his greatest work, the theory of general relativity. Over the years, the story of the falling man has become an iconic one, much embellished by other writers. In most retellings of Einstein's story, the falling man is identified as a painter. In some accounts, Einstein was inspired after he witnessed a painter falling from the roof of a building adjacent to the patent office where he worked. This version of the story leaves unanswered the question of why Einstein might consider his observation of such an unfortunate accident to represent the happiest thought in his life.
Einstein later refined his thought experiment to consider a man inside a large enclosed chest or elevator falling freely in space. While in free fall, the man would consider himself weightless, and any loose objects that he emptied from his pockets would float alongside him. Then Einstein imagined a rope attached to the roof of the chamber. A powerful "being" of some sort begins pulling on the rope with constant force. The chamber begins to move "upwards" with a uniformly accelerated motion. Within the chamber, all of the man's perceptions are consistent with his being in a uniform gravitational field. Einstein asked, "Ought we to smile at the man and say that he errs in his conclusion?" Einstein answered no. Rather, the thought experiment provided "good grounds for extending the principle of relativity to include bodies of reference which are accelerated with respect to each other, and as a result we have gained a powerful argument for a generalised postulate of relativity."
Through this thought experiment, Einstein addressed an issue that was so well known, scientists rarely worried about it or considered it puzzling: Objects have "gravitational mass," which determines the force with which they are attracted to other objects. Objects also have "inertial mass," which determines the relationship between the force applied to an object and how much it accelerates. Newton had pointed out that, even though they are defined differently, gravitational mass and inertial mass always seem to be equal. But until Einstein, no one had conceived a good explanation as to why this should be so. From the correspondence revealed by his thought experiment, Einstein concluded that "it is impossible to discover by experiment whether a given system of coordinates is accelerated, or whether...the observed effects are due to a gravitational field." This correspondence between gravitational mass and inertial mass is the equivalence principle.
An extension to his accelerating observer thought experiment allowed Einstein to deduce that "rays of light are propagated curvilinearly in gravitational fields."
Early applications of the equivalence principle
Einstein's formulation of special relativity was in terms of kinematics (the study of moving bodies without reference to forces). Late in 1907, his former mathematics professor, Hermann Minkowski, presented an alternative, geometric interpretation of special relativity in a lecture to the Göttingen Mathematical Society, introducing the concept of spacetime. Einstein was initially dismissive of Minkowski's geometric interpretation, regarding it as überflüssige Gelehrsamkeit (superfluous learnedness).
As with special relativity, Einstein's early results in developing what was ultimately to become general relativity were accomplished using kinematic analysis rather than geometric techniques of analysis.
In his 1907 Jahrbuch paper, Einstein first addressed the question of whether the propagation of light is influenced by gravitation, and whether there is any effect of a gravitational field on clocks. In 1911, Einstein returned to this subject, in part because he had realized that certain predictions of his nascent theory were amenable to experimental test.
By the time of his 1911 paper, Einstein and other scientists had offered several alternative demonstrations that the inertial mass of a body increases with its energy content: if the energy increase of the body is E, then the increase in its inertial mass is E/c².
Einstein asked whether there is an increase of gravitational mass corresponding to the increase in inertial mass, and if there is such an increase, is the increase in gravitational mass precisely the same as its increase in inertial mass? Using the equivalence principle, Einstein concluded that this must be so.
To show that the equivalence principle necessarily implies the gravitation of energy, Einstein considered a light source S₂ separated along the z-axis by a distance h above a receiver S₁ in a homogeneous gravitational field having a force per unit mass of γ. A certain amount of electromagnetic energy E₂ is emitted by S₂ towards S₁. According to the equivalence principle, this system is equivalent to a gravitation-free system which moves with uniform acceleration γ in the direction of the positive z-axis, with S₂ separated by a constant distance h from S₁.
In the accelerated system, light emitted from S₂ takes (to a first approximation) a time h/c to arrive at S₁. But in this time, the velocity of S₁ will have increased by v = γh/c from its velocity when the light was emitted. The energy arriving at S₁ will therefore not be the energy E₂ but the greater energy E₁ given by E₁ ≈ E₂(1 + v/c) = E₂(1 + γh/c²).
According to the equivalence principle, the same relation holds for the non-accelerated system in a gravitational field, where we replace γh by the gravitational potential difference Φ between S₂ and S₁, so that E₁ = E₂ + (E₂/c²)Φ.
The energy E₁ arriving at S₁ is greater than the energy E₂ emitted by S₂ by the potential energy of the mass E₂/c² in the gravitational field. Hence E/c² corresponds to the gravitational mass, as well as the inertial mass, of a quantity of energy E.
To further clarify that the energy of gravitational mass must equal the energy of inertial mass, Einstein proposed the following cyclic process: (a) A light source S₂ is situated a distance h above a receiver S₁ in a uniform gravitational field. A movable mass M can shuttle between S₂ and S₁. (b) A pulse of electromagnetic energy E is sent from S₂ to S₁. The energy is absorbed by S₁. (c) Mass M is lowered from S₂ to S₁, releasing an amount of work equal to Mγh. (d) The energy absorbed by S₁ is transferred to M. This increases the gravitational mass of M to a new value M′. (e) The mass is lifted back to S₂, requiring the input of work M′γh. (f) The energy carried by the mass is then transferred to S₂, completing the cycle.
Conservation of energy demands that the difference in work between raising the mass and lowering the mass, (M′ − M)γh, must equal the energy Eγh/c² gained by the pulse in transmission, or one could potentially construct a perpetual motion machine. Therefore, M′ − M = E/c².
In other words, the increase in gravitational mass predicted by the above arguments is precisely equal to the increase in inertial mass predicted by special relativity.
Einstein then considered sending a continuous electromagnetic beam of frequency ν₂ (as measured at S₂) from S₂ to S₁ in a homogeneous gravitational field. The frequency of the light as measured at S₁ will be a larger value ν₁ given by ν₁ ≈ ν₂(1 + γh/c²).
Einstein noted that the above equation seemed to imply something absurd: Given that the transmission of light from S₂ to S₁ is continuous, how could the number of periods emitted per second from S₂ be different from that received at S₁? It is impossible for wave crests to appear on the way down from S₂ to S₁. The simple answer is that this question presupposes an absolute nature of time, when in fact there is nothing that compels us to assume that clocks situated at different gravitational potentials must be conceived of as going at the same rate. The principle of equivalence implies gravitational time dilation.
It is important to realize that Einstein's arguments predicting gravitational time dilation are valid for any theory of gravity that respects the principle of equivalence. This includes Newtonian gravitation. Experiments such as the Pound–Rebka experiment, which have firmly established gravitational time dilation, therefore do not serve to distinguish general relativity from Newtonian gravitation.
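The predicted effect is tiny but measurable. As a rough numerical illustration (the 22.5 m tower height is the approximate figure for the Pound–Rebka experiment):

```python
# Order-of-magnitude estimate of the gravitational frequency shift
# for a Pound-Rebka-style tower experiment.
g = 9.81               # gravitational acceleration, m/s^2
h = 22.5               # approximate tower height, m
c = 299_792_458.0      # speed of light, m/s

print(f"fractional shift g*h/c^2 = {g * h / c ** 2:.2e}")   # ~2.5e-15
```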
In the remainder of Einstein's 1911 paper, he discussed the bending of light rays in a gravitational field, but given the incomplete nature of Einstein's theory as it existed at the time, the value that he predicted was half the value that would later be predicted by the full theory of general relativity.
Non-Euclidean geometry and the rotating disk
By 1912, Einstein had reached an impasse in his kinematic development of general relativity, realizing that he needed to go beyond the mathematics that he knew and was familiar with.
Stachel has identified Einstein's analysis of the rigid relativistic rotating disk as being key to this realization. The rigid rotating disk had been a topic of lively discussion since Max Born and Paul Ehrenfest, in 1909, both presented analyses of rigid bodies in special relativity. An observer on the edge of a rotating disk experiences an apparent ("fictitious" or "pseudo") force called "centrifugal force". By 1912, Einstein had become convinced of a close relationship between gravitation and pseudo-forces such as centrifugal force:
In the accompanying illustration, A represents a circular disk of 10 units diameter at rest in an inertial reference frame. The circumference of the disk is π times the diameter, and the illustration shows 31.4 rulers laid out along the circumference. B represents a circular disk of 10 units diameter that is spinning rapidly. According to a non-rotating observer, each of the rulers along the circumference is length-contracted along its line of motion. More rulers are required to cover the circumference, while the number of rulers required to span the diameter is unchanged. Note that we have not stated that we set A spinning to get B. In special relativity, it is not possible to set spinning a disk that is "rigid" in Born's sense of the term. Since spinning up disk A would cause the material to contract in the circumferential direction but not in the radial direction, a rigid disk would become fragmented from the induced stresses.
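The ruler counts follow from the Lorentz factor of the rim. In the sketch below the rim speed is an arbitrary illustrative choice:

```python
# Rulers needed on the rotating disk's circumference versus its diameter.
import math

d = 10.0                 # disk diameter, in units of one ruler length
v_over_c = 0.5           # rim speed as a fraction of c (illustrative)

gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)
at_rest = math.pi * d                # ~31.4 rulers when not spinning
spinning = gamma * at_rest           # contracted rulers: more are needed
print(f"circumference: {at_rest:.1f} -> {spinning:.1f} rulers; "
      f"diameter: still {d:.0f}")
```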
In later years, Einstein repeatedly stated that consideration of the rapidly rotating disk was of "decisive importance" to him because it showed that a gravitational field causes non-Euclidean arrangements of measuring rods.
Einstein realized that he did not have the mathematical skills to describe the non-Euclidean view of space and time that he envisioned, so he turned to his mathematician friend, Marcel Grossmann, for help. After researching in the library, Grossmann found a review article by Ricci and Levi-Civita on absolute differential calculus (tensor calculus). Grossmann tutored Einstein on the subject, and in 1913 and 1914, they published two joint papers describing an initial version of a generalized theory of gravitation. Over the next several years, Einstein used these mathematical tools to generalize Minkowski's geometric approach to relativity so as to encompass curved spacetime.
Quantum mechanics
Background: Einstein and the quantum
Many myths have grown up about Einstein's relationship with quantum mechanics. Freshman physics students are aware that Einstein explained the photoelectric effect and introduced the concept of the photon. But students who have grown up with the photon may not be aware of how revolutionary the concept was for his time. The best-known factoids about Einstein's relationship with quantum mechanics are his statement, "God does not play dice with the universe" and the indisputable fact that he just did not like the theory in its final form. This has led to the general impression that, despite his initial contributions, Einstein was out of touch with quantum research and played at best a secondary role in its development. Concerning Einstein's estrangement from the general direction of physics research after 1925, his well-known scientific biographer, Abraham Pais, offered a famously dismissive assessment of Einstein's later role.
In hindsight, we know that Pais was incorrect in his assessment.
Einstein was arguably the greatest single contributor to the "old" quantum theory.
In his 1905 paper on light quanta, Einstein created the quantum theory of light. His proposal that light exists as tiny packets (photons) was so revolutionary, that even such major pioneers of quantum theory as Planck and Bohr refused to believe that it could be true. Bohr, in particular, was a passionate disbeliever in light quanta, and repeatedly argued against them until 1925, when he yielded in the face of overwhelming evidence for their existence.
In his 1906 theory of specific heats, Einstein was the first to realize that quantized energy levels explained the specific heat of solids. In this manner, he found a rational justification for the third law of thermodynamics (i.e. the entropy of any system approaches zero as the temperature approaches absolute zero): at very cold temperatures, atoms in a solid do not have enough thermal energy to reach even the first excited quantum level, and so cannot vibrate.
Einstein proposed the wave–particle duality of light. In 1909, using a rigorous fluctuation argument based on a thought experiment and drawing on his previous work on Brownian motion, he predicted the emergence of a "fusion theory" that would combine the two views. Basically, he demonstrated that the Brownian motion experienced by a mirror in thermal equilibrium with black-body radiation would be the sum of two terms, one due to the wave properties of radiation, the other due to its particulate properties.
Although Planck is justly hailed as the father of quantum mechanics, his derivation of the law of black-body radiation rested on fragile ground, since it required ad hoc assumptions of an unreasonable character. Furthermore, Planck's derivation represented an analysis of classical harmonic oscillators merged with quantum assumptions in an improvised fashion. In his 1916 theory of radiation, Einstein was the first to create a purely quantum explanation. This paper, well known for broaching the possibility of stimulated emission (the basis of the laser), changed the nature of the evolving quantum theory by introducing the fundamental role of random chance.
In 1924, Einstein received a short manuscript by an unknown Indian professor, Satyendra Nath Bose, outlining a new method of deriving the law of blackbody radiation. Einstein was intrigued by Bose's peculiar method of counting the number of distinct ways of putting photons into the available states, a method of counting that Bose apparently did not realize was unusual. Einstein, however, understood that Bose's counting method implied that photons are, in a deep sense, indistinguishable. He translated the paper into German and had it published. Einstein then followed Bose's paper with an extension to Bose's work which predicted Bose–Einstein condensation, one of the fundamental research topics of condensed matter physics.
While trying to develop a mathematical theory of light which would fully encompass its wavelike and particle-like aspects, Einstein developed the concept of "ghost fields". A guiding wave obeying Maxwell's classical laws would propagate following the normal laws of optics, but would not transmit any energy. This guiding wave, however, would govern the appearance of quanta of energy on a statistical basis, so that the appearance of these quanta would be proportional to the intensity of the interference radiation. These ideas became widely known in the physics community, and through Born's work in 1926, later became a key concept in the modern quantum theory of radiation and matter.
Therefore, Einstein before 1925 originated most of the key concepts of quantum theory: light quanta, wave–particle duality, the fundamental randomness of physical processes, the concept of indistinguishability, and the probability density interpretation of the wave equation. In addition, Einstein can arguably be considered the father of solid state physics and condensed matter physics. He provided a correct derivation of the blackbody radiation law and sparked the notion of the laser.
What of after 1925? In 1935, working with two younger colleagues, Einstein issued a final challenge to quantum mechanics, attempting to show that it could not represent a final solution. Despite the questions raised by this paper, it made little or no difference to how physicists employed quantum mechanics in their work. Of this paper, Pais was again to write dismissively.
In contrast to Pais' negative assessment, this paper, outlining the EPR paradox, has become one of the most widely cited articles in the entire physics literature. It is considered the centerpiece of the development of quantum information theory, which has been termed the "third quantum revolution."
Wave–particle duality
All of Einstein's major contributions to the old quantum theory were arrived at via statistical argument. This includes his 1905 paper arguing that light has particle properties, his 1906 work on specific heats, his 1909 introduction of the concept of wave–particle duality, his 1916 work presenting an improved derivation of the blackbody radiation formula, and his 1924 work that introduced the concept of indistinguishability.
Einstein's 1909 arguments for the wave–particle duality of light were based on a thought experiment. Einstein imagined a mirror in a cavity containing particles of an ideal gas and filled with black-body radiation, with the entire system in thermal equilibrium. The mirror is constrained in its motions to a direction perpendicular to its surface.
The mirror jiggles from Brownian motion due to collisions with the gas molecules. Since the mirror is in a radiation field, the moving mirror transfers some of its kinetic energy to the radiation field as a result of the difference in the radiation pressure between its forwards and reverse surfaces. This implies that there must be fluctuations in the black-body radiation field, and hence fluctuations in the black-body radiation pressure. Reversing the argument shows that there must be a route for the return of energy from the fluctuating black-body radiation field back to the gas molecules.
Given the known shape of the radiation field given by Planck's law, Einstein could calculate the mean square energy fluctuation of the black-body radiation. He found the mean square energy fluctuation ⟨ε²⟩ in a small volume v of a cavity filled with thermal radiation, in the frequency interval between ν and ν + dν, to be a function of frequency and temperature: ⟨ε²⟩ = hν⟨E⟩ + (c³/(8πν² v dν))⟨E⟩²,

where ⟨E⟩ would be the average energy of the volume in contact with the thermal bath. The above expression has two terms, the second corresponding to the classical Rayleigh–Jeans law (i.e. a wavelike term), and the first corresponding to the Wien distribution law (which, from Einstein's 1905 analysis, would result from point-like quanta with energy hν). From this, Einstein concluded that radiation had simultaneous wave and particle aspects.
Bubble paradox
From 1905 to 1923, Einstein was virtually the only physicist who took light-quanta seriously. Throughout most of this period, the physics community treated the light-quanta hypothesis with "skepticism bordering on derision" and maintained this attitude even after Einstein's photoelectric law was validated. The citation for Einstein's 1922 Nobel Prize very deliberately avoided all mention of light-quanta, instead stating that it was being awarded for "his services to theoretical physics and especially for his discovery of the law of the photoelectric effect". This dismissive stance contrasts sharply with the enthusiastic manner in which Einstein's other major contributions were accepted, including his work on Brownian motion, special relativity, general relativity, and his numerous other contributions to the "old" quantum theory.
Various explanations have been given for this neglect on the part of the physics community. First and foremost was wave theory's long and indisputable success in explaining purely optical phenomena. Second was the fact that his 1905 paper, which pointed out that certain phenomena would be more readily explained under the assumption that light is particulate, presented the hypothesis only as a "heuristic viewpoint". The paper offered no compelling, comprehensive alternative to existing electromagnetic theory. Third was the fact that his 1905 paper introducing light quanta and his two 1909 papers that argued for a wave–particle fusion theory approached their subjects via statistical arguments that his contemporaries "might accept as theoretical exercise—crazy, perhaps, but harmless".
Most of Einstein's contemporaries adopted the position that light is ultimately a wave, but appears particulate in certain circumstances only because atoms absorb wave energy in discrete units.
Among the thought experiments that Einstein presented in his 1909 lecture on the nature and constitution of radiation was one that he used to point out the implausibility of the above argument. He used it to argue that atoms emit light as discrete particles rather than as continuous waves: (a) An electron in a cathode ray beam strikes an atom in a target. The intensity of the beam is set so low that we can consider one electron at a time as impinging on the target. (b) The atom emits a spherically radiating electromagnetic wave. (c) This wave excites an atom in a secondary target, causing it to release an electron of energy comparable to that of the original electron. The energy of the secondary electron depends only on the energy of the original electron and not at all on the distance between the primary and secondary targets. All the energy spread around the circumference of the radiating electromagnetic wave would appear to be instantaneously focused on the target atom, an action that Einstein considered implausible. Far more plausible would be to say that the first atom emitted a particle in the direction of the second atom.
Although Einstein originally presented this thought experiment as an argument for light having a particulate nature, it has been noted that this thought experiment, which has been termed the "bubble paradox", foreshadows the famous 1935 EPR paper. In his 1927 Solvay debate with Bohr, Einstein employed this thought experiment to illustrate that according to the Copenhagen interpretation of quantum mechanics that Bohr championed, the quantum wavefunction of a particle would abruptly collapse like a "popped bubble" no matter how widely dispersed the wavefunction. The transmission of energy from opposite sides of the bubble to a single point would occur faster than light, violating the principle of locality.
In the end, it was experiment, not any theoretical argument, that finally enabled the concept of the light quantum to prevail. In 1923, Arthur Compton was studying the scattering of high energy X-rays from a graphite target. Unexpectedly, he found that the scattered X-rays were shifted in wavelength, corresponding to inelastic scattering of the X-rays by the electrons in the target. His observations were totally inconsistent with wave behavior, but instead could only be explained if the X-rays acted as particles. This observation of the Compton effect rapidly brought about a change in attitude, and by 1926, the concept of the "photon" was generally accepted by the physics community.
Einstein's light box
Einstein did not like the direction in which quantum mechanics had turned after 1925. Although excited by Heisenberg's matrix mechanics, Schroedinger's wave mechanics, and Born's clarification of the meaning of the Schroedinger wave equation (i.e. that the absolute square of the wave function is to be interpreted as a probability density), his instincts told him that something was missing. In a letter to Born, he voiced his famous objection that God "does not play dice".
The Solvay Debates between Bohr and Einstein began in dining-room discussions at the Fifth Solvay International Conference on Electrons and Photons in 1927. Einstein's issue with the new quantum mechanics was not just that, with the probability interpretation, it rendered invalid the notion of rigorous causality. After all, as noted above, Einstein himself had introduced random processes in his 1916 theory of radiation. Rather, by defining and delimiting the maximum amount of information obtainable in a given experimental arrangement, the Heisenberg uncertainty principle denied the existence of any knowable reality in terms of a complete specification of the momenta and description of individual particles, an objective reality that would exist whether or not we could ever observe it.
Over dinner, during after-dinner discussions, and at breakfast, Einstein debated with Bohr and his followers on the question whether quantum mechanics in its present form could be called complete. Einstein illustrated his points with increasingly clever thought experiments intended to prove that position and momentum could in principle be simultaneously known to arbitrary precision. For example, one of his thought experiments involved sending a beam of electrons through a shuttered screen, recording the positions of the electrons as they struck a photographic screen. Bohr and his allies would always be able to counter Einstein's proposal, usually by the end of the same day.
On the final day of the conference, Einstein revealed that the uncertainty principle was not the only aspect of the new quantum mechanics that bothered him. Quantum mechanics, at least in the Copenhagen interpretation, appeared to allow action at a distance, the ability for two separated objects to communicate at speeds greater than light. By 1928, the consensus was that Einstein had lost the debate, and even his closest allies during the Fifth Solvay Conference, for example Louis de Broglie, conceded that quantum mechanics appeared to be complete.
At the Sixth Solvay International Conference on Magnetism (1930), Einstein came armed with a new thought experiment. This involved a box with a shutter that operated so quickly, it would allow only one photon to escape at a time. The box would first be weighed exactly. Then, at a precise moment, the shutter would open, allowing a photon to escape. The box would then be re-weighed. The well-known relationship between mass and energy would allow the energy of the particle to be precisely determined. With this gadget, Einstein believed that he had demonstrated a means to obtain, simultaneously, a precise determination of the energy of the photon as well as its exact time of departure from the system.
Bohr was shaken by this thought experiment. Unable to think of a refutation, he went from one conference participant to another, trying to convince them that Einstein's thought experiment could not be true, that if it were true, it would literally mean the end of physics. After a sleepless night, he finally worked out a response which, ironically, depended on Einstein's general relativity. Consider the illustration of Einstein's light box:
1. After emitting a photon, the loss of weight causes the box to rise in the gravitational field.
2. The observer returns the box to its original height by adding weights until the pointer points to its initial position. It takes a certain amount of time for the observer to perform this procedure. How long it takes depends on the strength of the spring and on how well-damped the system is. If undamped, the box will bounce up and down forever. If over-damped, the box will return to its original position sluggishly (See Damped spring-mass system).
3. The longer that the observer allows the damped spring-mass system to settle, the closer the pointer will reach its equilibrium position. At some point, the observer will conclude that his setting of the pointer to its initial position is within an allowable tolerance. There will be some residual error in returning the pointer to its initial position. Correspondingly, there will be some residual error in the weight measurement.
4. Adding the weights imparts a momentum p to the box which can be measured with an accuracy Δp delimited by ΔpΔq ≈ h. It is clear that Δp < gTΔm, where g is the gravitational acceleration. Plugging in yields gTΔmΔq > h.
5. General relativity informs us that while the box has been at a height different than its original height, it has been ticking at a rate different than its original rate. The red shift formula informs us that there will be an uncertainty ΔT = (g/c²)TΔq in the determination of the emission time of the photon.
6. Hence, ΔEΔT = c²ΔmΔT > h. The accuracy with which the energy of the photon is measured restricts the precision with which its moment of emission can be measured, in accordance with the Heisenberg uncertainty principle.
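Assembled into one chain (with g the local gravitational acceleration; this is a paraphrase of steps 3–6 above, not a quotation of Bohr's own notation):

```latex
% Weighing stage: position uncertainty \Delta q limits \Delta p, which
% over the weighing time T limits the weight measurement \Delta m.
\Delta p\,\Delta q \approx h,\qquad \Delta p < g\,T\,\Delta m
\;\Longrightarrow\; g\,T\,\Delta m\,\Delta q > h
% Gravitational red shift during the weighing, plus E = mc^2:
\Delta T = \frac{g\,T\,\Delta q}{c^{2}},\qquad \Delta E = c^{2}\,\Delta m
\;\Longrightarrow\; \Delta E\,\Delta T > h
```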
After seeing his last attempt to find a loophole around the uncertainty principle refuted, Einstein quit trying to search for inconsistencies in quantum mechanics. Instead, he shifted his focus to the other aspects of quantum mechanics with which he was uncomfortable, focusing on his critique of action at a distance. His next paper on quantum mechanics foreshadowed his later paper on the EPR paradox.
Einstein was gracious in his defeat. The following September, Einstein nominated Heisenberg and Schroedinger for the Nobel Prize, stating, "I am convinced that this theory undoubtedly contains a part of the ultimate truth."
EPR paradox
Einstein's fundamental dispute with quantum mechanics was not about whether God rolled dice, whether the uncertainty principle allowed simultaneous measurement of position and momentum, or even whether quantum mechanics was complete. It was about reality. Does a physical reality exist independent of our ability to observe it? To Bohr and his followers, such questions were meaningless. All that we can know are the results of measurements and observations. It makes no sense to speculate about an ultimate reality that exists beyond our perceptions.
Einstein's beliefs had evolved over the years from those that he had held when he was young, when, as a logical positivist heavily influenced by his reading of David Hume and Ernst Mach, he had rejected such unobservable concepts as absolute time and space. Einstein believed:
1. A reality exists independent of our ability to observe it.
2. Objects are located at distinct points in spacetime and have their own independent, real existence. In other words, he believed in separability and locality.
3. Although at a superficial level, quantum events may appear random, at some ultimate level, strict causality underlies all processes in nature.
Einstein considered that realism and localism were fundamental underpinnings of physics. After leaving Nazi Germany and settling in Princeton at the Institute for Advanced Study, Einstein began writing up a thought experiment that he had been mulling over since attending a lecture by Léon Rosenfeld in 1933. Since the paper was to be in English, Einstein enlisted the help of the 46-year-old Boris Podolsky, a fellow who had moved to the institute from Caltech; he also enlisted the help of the 26-year-old Nathan Rosen, also at the institute, who did much of the math. The result of their collaboration was the four page EPR paper, which in its title asked the question Can Quantum-Mechanical Description of Physical Reality be Considered Complete?
After seeing the paper in print, Einstein found himself unhappy with the result. His clear conceptual visualization had been buried under layers of mathematical formalism.
Einstein's thought experiment involved two particles that have collided or which have been created in such a way that they have properties which are correlated. The total wave function for the pair links the positions of the particles as well as their linear momenta. The figure depicts the spreading of the wave function from the collision point. However, observation of the position of the first particle allows us to determine precisely the position of the second particle no matter how far the pair have separated. Likewise, measuring the momentum of the first particle allows us to determine precisely the momentum of the second particle. "In accordance with our criterion for reality, in the first case we must consider the quantity P as being an element of reality, in the second case the quantity Q is an element of reality."
Einstein concluded that the second particle, which we have never directly observed, must have at any moment a position that is real and a momentum that is real. Quantum mechanics does not account for these features of reality. Therefore, quantum mechanics is not complete. It is known, from the uncertainty principle, that position and momentum cannot be measured at the same time. But even though their values can only be determined in distinct contexts of measurement, can they both be definite at the same time? Einstein concluded that the answer must be yes.
The only alternative, claimed Einstein, would be to assert that measuring the first particle instantaneously affected the reality of the position and momentum of the second particle. "No reasonable definition of reality could be expected to permit this."
Bohr was stunned when he read Einstein's paper and spent more than six weeks framing his response, which he gave exactly the same title as the EPR paper. The EPR paper forced Bohr to make a major revision in his understanding of complementarity in the Copenhagen interpretation of quantum mechanics.
Prior to EPR, Bohr had maintained that disturbance caused by the act of observation was the physical explanation for quantum uncertainty. In the EPR thought experiment, however, Bohr had to admit that "there is no question of a mechanical disturbance of the system under investigation." On the other hand, he noted that the two particles were one system described by one quantum function. Furthermore, the EPR paper did nothing to dispel the uncertainty principle.
Later commentators have questioned the strength and coherence of Bohr's response. As a practical matter, however, physicists for the most part did not pay much attention to the debate between Bohr and Einstein, since the opposing views did not affect one's ability to apply quantum mechanics to practical problems, but only affected one's interpretation of the quantum formalism. If they thought about the problem at all, most working physicists tended to follow Bohr's leadership.
In 1964, John Stewart Bell made the groundbreaking discovery that Einstein's local realist world view made experimentally verifiable predictions that would be in conflict with those of quantum mechanics. Bell's discovery shifted the Einstein–Bohr debate from philosophy to the realm of experimental physics. Bell's theorem showed that, for any local realist formalism, there exist limits on the predicted correlations between pairs of particles in an experimental realization of the EPR thought experiment. In 1972, the first experimental tests were carried out that demonstrated violation of these limits. Successive experiments improved the accuracy of observation and closed loopholes. To date, it is virtually certain that local realist theories have been falsified.
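The content of Bell's limit can be illustrated with the CHSH combination of correlations. In the sketch below, E(a, b) = −cos(a − b) is the quantum prediction for spin measurements on a singlet pair; the angles are the standard optimal choice, and the snippet simply evaluates the formula:

```python
# CHSH quantity: local realism requires S <= 2; quantum mechanics for
# the singlet state reaches 2*sqrt(2) (the Tsirelson bound).
import math

def E(a, b):
    return -math.cos(a - b)          # singlet-state correlation

a1, a2 = 0.0, math.pi / 2            # first observer's settings
b1, b2 = math.pi / 4, 3 * math.pi / 4    # second observer's settings

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(f"S = {S:.4f} > 2")            # 2.8284..., violating the bound
```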
The EPR paper has recently been recognized as prescient, since it identified the phenomenon of quantum entanglement, which has inspired approaches to quantum mechanics different from the Copenhagen interpretation, and has been at the forefront of major technological advances in quantum computing, quantum encryption, and quantum information theory.
Notes
Primary sources
References
External links
NOVA: Inside Einstein's Mind (2015) — Retrace the thought experiments that inspired his theory on the nature of reality.
Special relativity
General relativity
History of physics
Thought experiments in quantum mechanics
Albert Einstein | Einstein's thought experiments | [
"Physics"
] | 10,557 | [
"Quantum mechanics",
"General relativity",
"Special relativity",
"Thought experiments in quantum mechanics",
"Theory of relativity"
] |
57,266,820 | https://en.wikipedia.org/wiki/Autonomous%20peripheral%20operation | In computing, autonomous peripheral operation is a hardware feature found in some microcontroller architectures to off-load certain tasks into embedded autonomous peripherals in order to minimize latencies and improve throughput in hard real-time applications as well as to save energy in ultra-low-power designs.
Overview
Forms of autonomous peripherals in microcontrollers were first introduced in the 1990s. Allowing embedded peripherals to work independently of the CPU and even interact with each other in certain pre-configurable ways off-loads event-driven communication into the peripherals to help improve the real-time performance due to lower latency and allows for potentially higher data throughput due to the added parallelism. Since 2009, the scheme has been improved in newer implementations to continue functioning in sleep modes as well, thereby allowing the CPU (and other unaffected peripheral blocks) to remain dormant for longer periods of time in order to save energy. This is partially driven by the emerging IoT market.
Conceptually, autonomous peripheral operation can be seen as a generalization of and mixture between direct memory access (DMA) and hardware interrupts. Peripherals that issue event signals are called event generators or producers whereas target peripherals are called event users or consumers. In some implementations, peripherals can be configured to pre-process the incoming data and perform various peripheral-specific functions like comparing, windowing, filtering or averaging in hardware without having to pass the data through the CPU for processing.
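The producer/consumer routing can be pictured with a toy software model. The sketch below is purely illustrative and does not use any real microcontroller API; the peripheral names and the channel abstraction are hypothetical:

```python
# Toy model of an event system: a channel routes a producer's event
# directly to consumer peripherals, with no CPU dispatch loop involved.
class EventChannel:
    def __init__(self):
        self.consumers = []

    def connect(self, action):
        self.consumers.append(action)        # wire a consumer peripheral

    def fire(self):                          # called by the producer
        for action in self.consumers:
            action()                         # consumers react directly

adc_done = EventChannel()                    # hypothetical producer: ADC
adc_done.connect(lambda: print("DMA: copy sample"))    # consumer: DMA
adc_done.connect(lambda: print("timer: restart"))      # consumer: timer
adc_done.fire()
```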
Implementations
Known implementations include:
Peripheral Event Controller (PEC) in Siemens/Infineon C166 and C167 16-bit microcontrollers since 1990
Intelligent autonomous peripherals (such as CCU6) in Infineon XC800 series of 8051-compatible 8-bit microcontrollers since 2005
Event System (EVSYS) in Atmel AVR XMEGA 8-bit microcontrollers since 2008
Peripheral Event System (PES) with SleepWalking in Atmel (now Microchip Technology) AVR32 AT32UC3L 32-bit microcontrollers since 2009
Peripheral Reflex System (PRS) in Energy Micro (now Silicon Labs) Gecko EFM32 32-bit ARM-based microcontrollers since 2009
IXYS/Zilog ZNEO Z16FMC 16-bit microcontrollers since 2011
Event Link Controller (ELC) in Renesas microcontrollers since 2011
Programmable Peripheral Interconnect (PPI) in Nordic nRF 32-bit ARM-based microcontrollers since about 2011
Autonomous peripherals in Infineon XMC 32-bit microcontrollers since 2012
Data Transfer Manager (DTM) in Silicon Labs Precision32 SiM3L1 32-bit ARM Cortex-M3 microcontrollers since 2012
Peripheral Event System (PES) with SleepWalking in Atmel (now Microchip Technology) SAM4L 32-bit ARM Cortex-M4 microcontrollers since 2012
Power-Smart Peripherals in Freescale (now NXP) Kinetis L 32-bit ARM Cortex-M0+ microcontrollers since 2012
Event System (EVSYS) with SleepWalking in Atmel (now Microchip Technology) SAMD, SAML and SAMC 32-bit ARM Cortex-M0+ microcontrollers since 2013
Core Independent Peripherals (CIP) in Microchip PIC16F and PIC18F as well as Microchip AVR ATtiny 8-bit microcontrollers since 2015
Peripherals Interconnect Matrix in STMicroelectronics' STM32 32-bit ARM-based microcontrollers since 2015
Low-Power Background Autonomous Mode (LPBAM) in STMicroelectronics' STM32U5 32-bit ARM-based microcontrollers since 2021
See also
Channel I/O
Peripheral DMA controller (PDC)
Clock gating, autonomous peripheral clock gating
Power gating
CPU power dissipation
Low-power electronics
Event-driven architecture
Event-driven programming
Always On, Always Connected (AOAC)
Energy-Efficient Ethernet (EEE)
TCP offload engine (TOE)
References
Central processing unit
Electric power
Electronics and the environment
Electronics optimization | Autonomous peripheral operation | [
"Physics",
"Engineering"
] | 890 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
58,746,462 | https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Nuclear%20Science | IEEE Transactions on Nuclear Science is a peer-reviewed scientific journal published monthly by the IEEE. Sponsored by the IEEE Nuclear and Plasma Sciences Society, the journal covers the theory, technology, and application areas related to nuclear science and engineering. Its editor-in-chief is Zane Bell (Oak Ridge National Laboratory).
The journal was founded in 1954 under the name Transactions of the Institute of Radio Engineers Professional Group on Nuclear Science and was retitled to IRE Transactions on Nuclear Science the following year. Its title was changed to its current name in 1963.
According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.8.
References
External links
Nuclear Science, IEEE Transactions on
Nuclear physics journals
Academic journals established in 1954
English-language journals
Monthly journals | IEEE Transactions on Nuclear Science | [
"Physics"
] | 153 | [
"Nuclear and atomic physics stubs",
"Nuclear physics journals",
"Nuclear physics"
] |
58,756,838 | https://en.wikipedia.org/wiki/Wigner%20surmise | In mathematical physics, the Wigner surmise is a statement about the probability distribution of the spaces between points in the spectra of nuclei of heavy atoms, which have many degrees of freedom, or quantum systems with few degrees of freedom but chaotic classical dynamics. It was proposed by Eugene Wigner on probabilistic grounds. The surmise was a result of Wigner's introduction of random matrices into the field of nuclear physics. The surmise consists of two postulates:
In a simple sequence (spin and parity are the same), the probability density function for a spacing is given by

$p_w(s) = \frac{\pi s}{2}\, e^{-\pi s^2/4}.$

Here $s = S/D$, where S is a particular spacing and D is the mean distance between neighboring intervals.
In a mixed sequence (spin and parity are different), the probability density function can be obtained by randomly superimposing simple sequences.
The above result is exact for $2 \times 2$ real symmetric matrices $M$, with elements that are independent standard gaussian random variables, with joint distribution proportional to $e^{-\frac{1}{2}\operatorname{tr}(M^2)}$.

In practice, it is a good approximation for the actual distribution for real symmetric matrices of any dimension. The corresponding result for complex hermitian matrices (which is also exact in the $2 \times 2$ case and a good approximation in general), with distribution proportional to $e^{-\operatorname{tr}(M^2)}$, is given by

$p(s) = \frac{32 s^2}{\pi^2}\, e^{-4 s^2/\pi}.$
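As a quick consistency check (standard calculus, not part of the surmise itself), the density $p_w(s)$ is normalized and has unit mean, as it must be since $s$ is measured in units of the mean spacing $D$:

$\int_0^\infty \frac{\pi s}{2}\, e^{-\pi s^2/4}\, ds = \left[-e^{-\pi s^2/4}\right]_0^\infty = 1, \qquad \int_0^\infty s \cdot \frac{\pi s}{2}\, e^{-\pi s^2/4}\, ds = \frac{\pi}{2}\cdot\frac{\sqrt{\pi}}{4\,(\pi/4)^{3/2}} = 1,$

using $\int_0^\infty s^2 e^{-a s^2}\, ds = \sqrt{\pi}/(4 a^{3/2})$ with $a = \pi/4$.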
History
During the conference on Neutron Physics by Time-of-Flight, held at Gatlinburg, Tennessee, November 1 and 2, 1956, Wigner delivered a presentation on the theoretical arrangement of neighboring neutron resonances (with matching spin and parity) in heavy nuclei. In the presentation he put forward the guess for the level-spacing distribution that has since become known as the Wigner surmise.
See also
Wigner semicircle distribution
References
Mathematical physics
Nuclear physics | Wigner surmise | [
"Physics",
"Mathematics"
] | 323 | [
"Random matrices",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Matrices (mathematics)",
"Nuclear and atomic physics stubs",
"Nuclear physics",
"Statistical mechanics",
"Mathematical physics"
] |
34,217,148 | https://en.wikipedia.org/wiki/Vectors%20in%20gene%20therapy | Gene therapy utilizes the delivery of DNA into cells, which can be accomplished by several methods, summarized below. The two major classes of methods are those that use recombinant viruses (sometimes called biological nanoparticles or viral vectors) and those that use naked DNA or DNA complexes (non-viral methods).
Viruses
All viruses bind to their hosts and introduce their genetic material into the host cell as part of their replication cycle. This genetic material contains basic 'instructions' for how to produce more copies of the virus, hijacking the body's normal production machinery to serve the needs of the virus. The host cell will carry out these instructions and produce additional copies of the virus, leading to more and more cells becoming infected. Some types of viruses insert only their genome into the host's cytoplasm, without the virus particle itself entering the cell. Others penetrate the cell membrane disguised as protein molecules and enter the cell.
There are two main types of virus infection: lytic and lysogenic. Shortly after inserting its DNA, viruses of the lytic cycle quickly produce more viruses, burst from the cell and infect more cells. Lysogenic viruses integrate their DNA into the DNA of the host cell and may live in the body for many years before responding to a trigger. The virus reproduces as the cell does and does not inflict bodily harm until it is triggered. The trigger releases the DNA from that of the host and employs it to create new viruses.
Retroviruses
The genetic material in retroviruses is in the form of RNA molecules, while the genetic material of their hosts is in the form of DNA. When a retrovirus infects a host cell, it will introduce its RNA together with some enzymes, namely reverse transcriptase and integrase, into the cell. This RNA molecule from the retrovirus must produce a DNA copy from its RNA molecule before it can be integrated into the genetic material of the host cell. The process of producing a DNA copy from an RNA molecule is termed reverse transcription. It is carried out by one of the enzymes carried in the virus, called reverse transcriptase. After this DNA copy is produced and is free in the nucleus of the host cell, it must be incorporated into the genome of the host cell. That is, it must be inserted into the large DNA molecules in the cell (the chromosomes). This process is done by another enzyme carried in the virus called integrase.
Now that the genetic material of the virus has been inserted, it can be said that the host cell has been modified to contain new genes. If this host cell divides later, its descendants will all contain the new genes. Sometimes the genes of the retrovirus do not express their information immediately.
One of the problems of gene therapy using retroviruses is that the integrase enzyme can insert the genetic material of the virus into any arbitrary position in the genome of the host; it randomly inserts the genetic material into a chromosome. If genetic material happens to be inserted in the middle of one of the original genes of the host cell, this gene will be disrupted (insertional mutagenesis). If the gene happens to be one regulating cell division, uncontrolled cell division (i.e., cancer) can occur. This problem has recently begun to be addressed by utilizing zinc finger nucleases or by including certain sequences such as the beta-globin locus control region to direct the site of integration to specific chromosomal sites.
Gene therapy trials using retroviral vectors to treat X-linked severe combined immunodeficiency (X-SCID) represent the most successful application of gene therapy to date. More than twenty patients have been treated in France and Britain, with a high rate of immune system reconstitution observed. Similar trials were restricted or halted in the US when leukemia was reported in patients treated in the French X-SCID gene therapy trial. To date, four children in the French trial and one in the British trial have developed leukemia as a result of insertional mutagenesis by the retroviral vector. All but one of these children responded well to conventional anti-leukemia treatment. Gene therapy trials to treat SCID due to deficiency of the Adenosine Deaminase (ADA) enzyme (one form of SCID) continue with relative success in the US, Britain, Ireland, Italy and Japan.
Adenoviruses
Adenoviruses are viruses that carry their genetic material in the form of double-stranded DNA. They cause respiratory, intestinal, and eye infections in humans (especially the common cold). When these viruses infect a host cell, they introduce their DNA molecule into the host. The genetic material of adenoviruses is not incorporated into the host cell's genetic material, so its presence is transient. The DNA molecule is left free in the nucleus of the host cell, and the instructions in this extra DNA molecule are transcribed just like any other gene. The only difference is that these extra genes are not replicated when the cell is about to undergo cell division, so the descendants of that cell will not have the extra gene.
As a result, treatment with the adenovirus will require re-administration in a growing cell population although the absence of integration into the host cell's genome should prevent the type of cancer seen in the SCID trials. This vector system has been promoted for treating cancer and indeed the first gene therapy product to be licensed to treat cancer, Gendicine, is an adenovirus. Gendicine, an adenoviral p53-based gene therapy was approved by the Chinese food and drug regulators in 2003 for treatment of head and neck cancer. Advexin, a similar gene therapy approach from Introgen, was turned down by the US Food and Drug Administration (FDA) in 2008.
Concerns about the safety of adenovirus vectors were raised after the 1999 death of Jesse Gelsinger while participating in a gene therapy trial. Since then, work using adenovirus vectors has focused on genetically limited versions of the virus.
Cytomegalovirus
Cytomegalovirus (CMV) is part of the β-herpesvirus subfamily that includes roseoloviruses. CMV coevolved with an assortment of mammalian hosts, including human CMV (HCMV), murine CMV (MCMV) and rhesus CMV (RhCMV). CMVs are characterized by large DNA genomes and typically asymptomatic infection in healthy hosts.
The first investigation into cytomegalovirus (CMV) as a gene therapy vector was published in 2000. CMV's tropism for hematopoietic progenitor cells and its large genome (230 kbp) initially attracted researchers. CMV-based vaccine vectors have since been used to induce T Cell response. More recently, CMV containing telomerase and follistatin was intravenously and intranasally delivered in mouse studies with the intention of extending healthspan.
Envelope protein pseudotyping of viral vectors
The viral vectors described above have natural host cell populations that they infect most efficiently. Retroviruses have limited natural host cell ranges, and although adenovirus and adeno-associated virus are able to infect a relatively broader range of cells efficiently, some cell types are resistant to infection by these viruses as well. Attachment to and entry into a susceptible cell is mediated by the protein envelope on the surface of a virus. Retroviruses and adeno-associated viruses have a single protein coating their membrane, while adenoviruses are coated with both an envelope protein and fibers that extend away from the surface of the virus. The envelope proteins on each of these viruses bind to cell-surface molecules such as heparan sulfate, which localizes them upon the surface of the potential host, as well as to the specific protein receptor that either induces entry-promoting structural changes in the viral protein, or localizes the virus in endosomes wherein acidification of the lumen induces this refolding of the viral coat. In either case, entry into potential host cells requires a favorable interaction between a protein on the surface of the virus and a protein on the surface of the cell.
For the purposes of gene therapy, one might either want to limit or expand the range of cells susceptible to transduction by a gene therapy vector. To this end, many vectors have been developed in which the endogenous viral envelope proteins have been replaced by either envelope proteins from other viruses, or by chimeric proteins. Such chimeras would consist of those parts of the viral protein necessary for incorporation into the virion as well as sequences meant to interact with specific host cell proteins. Viruses in which the envelope proteins have been replaced as described are referred to as pseudotyped viruses. For example, the most popular retroviral vector for use in gene therapy trials has been the lentivirus Simian immunodeficiency virus coated with the envelope G-protein from vesicular stomatitis virus. This vector is referred to as VSV G-pseudotyped lentivirus, and infects an almost universal set of cells. This tropism is characteristic of the VSV G-protein with which this vector is coated. Many attempts have been made to limit the tropism of viral vectors to one or a few host cell populations. This advance would allow for the systemic administration of a relatively small amount of vector. The potential for off-target cell modification would be limited, and many concerns from the medical community would be alleviated. Most attempts to limit tropism have used chimeric envelope proteins bearing antibody fragments. These vectors show great promise for the development of "magic bullet" gene therapies.
Replication-competent vectors
A replication-competent vector called ONYX-015 is used in replicating tumor cells. It was found that in the absence of the E1B-55Kd viral protein, adenovirus caused very rapid apoptosis of infected, p53(+) cells, and this results in dramatically reduced virus progeny and no subsequent spread. Apoptosis was mainly the result of the ability of E1A to inactivate p300. In p53(-) cells, deletion of E1B-55Kd has no consequence in terms of apoptosis, and viral replication is similar to that of wild-type virus, resulting in massive killing of cells.
A replication-defective vector deletes some essential genes. These deleted genes are still necessary in the body so they are replaced with either a helper virus or a DNA molecule.
Cis and trans-acting elements
Replication-defective vectors always contain a "transfer construct". The transfer construct carries the gene to be transduced or "transgene". The transfer construct also carries the sequences which are necessary for the general functioning of the viral genome: packaging sequence, repeats for replication and, when needed, priming of reverse transcription. These are denominated cis-acting elements, because they need to be on the same piece of DNA as the viral genome and the gene of interest. Trans-acting elements are viral elements, which can be encoded on a different DNA molecule. For example, the viral structural proteins can be expressed from a different genetic element than the viral genome.
Herpes simplex virus
The herpes simplex virus is a human neurotropic virus. It is mostly examined for gene transfer to the nervous system. The wild-type HSV-1 virus is able to infect neurons and evade the host immune response, but may still become reactivated and produce a lytic cycle of viral replication. Therefore, it is typical to use mutant strains of HSV-1 that are deficient in their ability to replicate. Though the latent virus is not transcriptionally apparent, it does possess neuron-specific promoters that can continue to function normally. Antibodies to HSV-1 are common in humans; however, complications due to herpes infection are somewhat rare. Caution must be taken for rare cases of encephalitis, and this provides some rationale for using HSV-2 as a viral vector, as it generally has tropism for neuronal cells innervating the urogenital area of the body and could then spare the host severe pathology in the brain.
Non-viral methods
Non-viral methods present certain advantages over viral methods, with simple large scale production and low host immunogenicity being just two. Previously, low levels of transfection and expression of the gene held non-viral methods at a disadvantage; however, recent advances in vector technology have yielded molecules and techniques with transfection efficiencies similar to those of viruses.
Injection of naked DNA
This is the simplest method of non-viral transfection. Clinical trials of intramuscular injection of a naked DNA plasmid have been carried out with some success; however, the expression has been very low in comparison to other methods of transfection. In addition to trials with plasmids, there have been trials with naked PCR product, which have had similar or greater success. Cellular uptake of naked DNA is generally inefficient. Research efforts focusing on improving the efficiency of naked DNA uptake have yielded several novel methods, such as electroporation, sonoporation, and the use of a "gene gun", which shoots DNA-coated gold particles into the cell using high-pressure gas.
Physical methods to enhance delivery
Electroporation
Electroporation is a method that uses short pulses of high voltage to carry DNA across the cell membrane. This shock is thought to cause temporary formation of pores in the cell membrane, allowing DNA molecules to pass through. Electroporation is generally efficient and works across a broad range of cell types. However, a high rate of cell death following electroporation has limited its use, including clinical applications.
More recently a newer method of electroporation, termed electron-avalanche transfection, has been used in gene therapy experiments. By using a high-voltage plasma discharge, DNA was efficiently delivered following very short (microsecond) pulses. Compared to electroporation, the technique resulted in greatly increased efficiency and less cellular damage.
Gene gun
The use of particle bombardment, or the gene gun, is another physical method of DNA transfection. In this technique, DNA is coated onto gold particles and loaded into a device which generates a force to achieve penetration of the DNA into the cells, leaving the gold behind on a "stopping" disk.
Sonoporation
Sonoporation uses ultrasonic frequencies to deliver DNA into cells. The process of acoustic cavitation is thought to disrupt the cell membrane and allow DNA to move into cells.
Magnetofection
In a method termed magnetofection, DNA is complexed to magnetic particles, and a magnet is placed underneath the tissue culture dish to bring DNA complexes into contact with a cell monolayer.
Hydrodynamic delivery
Hydrodynamic delivery involves rapid injection of a high volume of a solution into vasculature (such as into the inferior vena cava, bile duct, or tail vein). The solution contains molecules that are to be inserted into cells, such as DNA plasmids or siRNA, and transfer of these molecules into cells is assisted by the elevated hydrostatic pressure caused by the high volume of injected solution.
Chemical methods to enhance delivery
Oligonucleotides
The use of synthetic oligonucleotides in gene therapy is to deactivate the genes involved in the disease process. There are several methods by which this is achieved. One strategy uses antisense oligonucleotides specific to the target gene to disrupt the transcription of the faulty gene. Another uses small molecules of RNA called siRNA to signal the cell to cleave specific unique sequences in the mRNA transcript of the faulty gene, disrupting translation of the faulty mRNA, and therefore expression of the gene. A further strategy uses double-stranded oligodeoxynucleotides as a decoy for the transcription factors that are required to activate the transcription of the target gene. The transcription factors bind to the decoys instead of the promoter of the faulty gene, which reduces the transcription of the target gene, lowering expression. Additionally, single-stranded DNA oligonucleotides have been used to direct a single base change within a mutant gene. The oligonucleotide is designed to anneal with complementarity to the target gene, with the exception of a central base, the target base, which serves as the template base for repair. This technique is referred to as oligonucleotide-mediated gene repair, targeted gene repair, or targeted nucleotide alteration.
Lipoplexes
To improve the delivery of the new DNA into the cell, the DNA must be protected from damage, and the complex carrying it must be positively charged to facilitate entry. Initially, anionic and neutral lipids were used for the construction of lipoplexes for synthetic vectors. Although they show little toxicity, are compatible with body fluids, and could be adapted to be tissue-specific, they are complicated and time-consuming to produce, so attention turned to the cationic versions.
Cationic lipids, due to their positive charge, were first used to condense negatively charged DNA molecules so as to facilitate the encapsulation of DNA into liposomes. Later it was found that the use of cationic lipids significantly enhanced the stability of lipoplexes. Also as a result of their charge, cationic liposomes interact with the cell membrane, and endocytosis is widely believed to be the major route by which cells take up lipoplexes. Endosomes are formed as a result of endocytosis; however, if genes cannot be released into the cytoplasm by breaking the membrane of the endosome, they will be sent to lysosomes, where all the DNA is destroyed before it can achieve its function. It was also found that although cationic lipids themselves could condense and encapsulate DNA into liposomes, the transfection efficiency was very low due to the lack of "endosomal escape". However, when helper lipids (usually electroneutral lipids, such as DOPE) were added to form lipoplexes, much higher transfection efficiency was observed. Later on, it was discovered that certain lipids have the ability to destabilize endosomal membranes so as to facilitate the escape of DNA from the endosome; these lipids are therefore called fusogenic lipids. Although cationic liposomes have been widely used as an alternative gene delivery vector, a dose-dependent toxicity of cationic lipids was also observed, which could limit their therapeutic use.
The most common use of lipoplexes has been in gene transfer into cancer cells, where the supplied genes have activated tumor suppressor control genes in the cell and decrease the activity of oncogenes. Recent studies have shown lipoplexes to be useful in transfecting respiratory epithelial cells.
Polymersomes
Polymersomes are synthetic versions of liposomes (vesicles with a lipid bilayer), made of amphiphilic block copolymers. They can encapsulate either hydrophilic or hydrophobic contents and can be used to deliver cargo such as DNA, proteins, or drugs to cells. Advantages of polymersomes over liposomes include greater stability, mechanical strength, blood circulation time, and storage capacity.
Polyplexes
Complexes of polymers with DNA are called polyplexes. Most polyplexes consist of cationic polymers and their fabrication is based on self-assembly by ionic interactions. One important difference between the methods of action of polyplexes and lipoplexes is that polyplexes cannot directly release their DNA load into the cytoplasm. As a result, co-transfection with endosome-lytic agents such as inactivated adenovirus was required to facilitate nanoparticle escape from the endocytic vesicle made during particle uptake. However, a better understanding of the mechanisms by which DNA can escape from endolysosomal pathway, i.e. proton sponge effect, has triggered new polymer synthesis strategies such as incorporation of protonable residues in polymer backbone and has revitalized research on polycation-based systems.
Due to their low toxicity, high loading capacity, and ease of fabrication, polycationic nanocarriers demonstrate great promise compared to their rivals, such as viral vectors, which show high immunogenicity and potential carcinogenicity, and lipid-based vectors, which cause dose-dependent toxicity. Polyethyleneimine and chitosan are among the polymeric carriers that have been extensively studied for the development of gene delivery therapeutics. Other polycationic carriers, such as poly(beta-amino esters) and polyphosphoramidate, are being added to the library of potential gene carriers. In addition to the variety of polymers and copolymers, the ease of controlling the size, shape, and surface chemistry of these polymeric nano-carriers gives them an edge in targeting capability and in taking advantage of the enhanced permeability and retention effect.
Dendrimers
A dendrimer is a highly branched macromolecule with a spherical shape. The surface of the particle may be functionalized in many ways and many of the properties of the resulting construct are determined by its surface.
In particular it is possible to construct a cationic dendrimer, i.e. one with a positive surface charge. When in the presence of genetic material such as DNA or RNA, charge complementarity leads to a temporary association of the nucleic acid with the cationic dendrimer. On reaching its destination the dendrimer-nucleic acid complex is then taken into the cell via endocytosis.
In recent years the benchmark for transfection agents has been cationic lipids. Limitations of these competing reagents have been reported to include: the lack of ability to transfect some cell types, the lack of robust active targeting capabilities, incompatibility with animal models, and toxicity. Dendrimers offer robust covalent construction and extreme control over molecule structure, and therefore size. Together these give compelling advantages compared to existing approaches.
Producing dendrimers has historically been a slow and expensive process consisting of numerous slow reactions, an obstacle that severely curtailed their commercial development. The Michigan-based company Dendritic Nanotechnologies discovered a method to produce dendrimers using kinetically driven chemistry, a process that not only reduced cost by a magnitude of three, but also cut reaction time from over a month to several days. These new "Priostar" dendrimers can be specifically constructed to carry a DNA or RNA payload that transfects cells at a high efficiency with little or no toxicity.
Inorganic nanoparticles
Inorganic nanoparticles, such as gold, silica, iron oxide (e.g., magnetofection) and calcium phosphates, have been shown to be capable of gene delivery. Some of the benefits of inorganic vectors lie in their storage stability, low manufacturing cost and, often, low immunogenicity and resistance to microbial attack. Nanosized materials less than 100 nm have been shown to efficiently trap DNA or RNA and allow its escape from the endosome without degradation. Inorganics have also been shown to exhibit improved in vitro transfection for attached cell lines due to their increased density and preferential location on the base of the culture dish. Quantum dots have also been used successfully, permitting the coupling of gene therapy with a stable fluorescence marker. Engineered organic nanoparticles are also under development and could be used for co-delivery of genes and therapeutic agents.
Cell-penetrating peptides
Cell-penetrating peptides (CPPs), also known as peptide transduction domains (PTDs), are short peptides (< 40 amino acids) that efficiently pass through cell membranes while being covalently or non-covalently bound to various molecules, thus facilitating these molecules' entry into cells. Cell entry occurs primarily by endocytosis but other entry mechanisms also exist. Examples of cargo molecules of CPPs include nucleic acids, liposomes, and drugs of low molecular weight.
CPP cargo can be directed into specific cell organelles by incorporating localization sequences into CPP sequences. For example, nuclear localization sequences are commonly used to guide CPP cargo into the nucleus. For guidance into mitochondria, a mitochondrial targeting sequence can be used; this method is used in protofection (a technique that allows for foreign mitochondrial DNA to be inserted into cells' mitochondria).
Hybrid methods
Due to every method of gene transfer having shortcomings, there have been some hybrid methods developed that combine two or more techniques. Virosomes are one example; they combine liposomes with an inactivated HIV or influenza virus. This has been shown to have more efficient gene transfer in respiratory epithelial cells than either viral or liposomal methods alone. Other methods involve mixing other viral vectors with cationic lipids or hybridising viruses.
See also
Genosome (lipoplex)
Techniques of genetic engineering
Transformation
Transfection
Transduction
References
Applied genetics
Bioethics
Biotechnology
Medical genetics
Molecular biology
Gene delivery | Vectors in gene therapy | [
"Chemistry",
"Technology",
"Biology"
] | 5,188 | [
"Bioethics",
"Genetics techniques",
"Gene delivery",
"Biotechnology",
"Molecular biology techniques",
"nan",
"Ethics of science and technology",
"Biochemistry",
"Molecular biology"
] |
40,947,668 | https://en.wikipedia.org/wiki/Price%27s%20model | Price's model (named after the physicist Derek J. de Solla Price) is a mathematical model for the growth of citation networks. It was the first model which generalized the Simon model to be used for networks, especially for growing networks. Price's model belongs to the broader class of network growing models (together with the Barabási–Albert model) whose primary target is to explain the origination of networks with strongly skewed degree distributions. The model picked up the ideas of the Simon model reflecting the concept of rich get richer, also known as the Matthew effect. Price took the example of a network of citations between scientific papers and expressed its properties. His idea was that the way an old vertex (existing paper) gets new edges (new citations) should be proportional to the number of existing edges (existing citations) the vertex already has. This was referred to as cumulative advantage, now also known as preferential attachment. Price's work is also significant in providing the first known example of a scale-free network (although this term was introduced later). His ideas were used to describe many real-world networks such as the Web.
The model
Basics
Consider a directed graph with n nodes. Let $p_k$ denote the fraction of nodes with in-degree k, so that $\sum_k p_k = 1$. Each new node has a given out-degree (namely, the papers it cites), which is fixed in the long run. This does not mean that the out-degrees cannot vary across nodes; we simply assume that the mean out-degree m is fixed over time. It is clear that the mean in-degree then also equals m, i.e. $\sum_k k\,p_k = m$; consequently, m is not restricted to integers. The most trivial form of preferential attachment means that a new node connects to an existing node proportionally to its in-degree. In other words, a new paper cites an existing paper in proportion to that paper's in-degree. The caveat of this idea is that a paper has no citations when it joins the network, so it would have zero probability of ever being cited (which is not necessarily how it happens). To overcome this, Price proposed that attachment should be proportional to $k + k_0$ with $k_0$ a constant. In general $k_0$ can be arbitrary, yet Price proposed $k_0 = 1$; in that way an initial citation is associated with the paper itself (so the proportionality factor is now k + 1 instead of k). The probability of a new edge connecting to a given node with degree k is then

$\frac{k+1}{\sum_i (k_i + 1)} = \frac{k+1}{n(m+1)}.$
Evolution of the network
The next question is the net change in the number of nodes with degree k when we add new nodes to the network. Naturally, this number decreases as some k-degree nodes receive new edges, becoming (k + 1)-degree nodes; on the other hand, it also increases as some (k − 1)-degree nodes receive new edges, becoming k-degree nodes. To express this net change formally, denote the fraction of k-degree nodes in a network of n vertices by $p_{k,n}$. With each new node, the expected number of nodes of in-degree k that gain a new edge is $m(k+1)p_{k,n}/(m+1)$, so the master equation for $k \geq 1$ reads

$(n+1)\,p_{k,n+1} = n\,p_{k,n} + \frac{m}{m+1}\left[k\,p_{k-1,n} - (k+1)\,p_{k,n}\right]$

and for $k = 0$

$(n+1)\,p_{0,n+1} = n\,p_{0,n} + 1 - \frac{m}{m+1}\,p_{0,n}.$

To obtain a stationary solution $p_k = \lim_{n\to\infty} p_{k,n}$, set the net change to zero, which gives the recurrence

$p_k = \frac{k}{k + 2 + 1/m}\, p_{k-1}, \qquad p_0 = \frac{1 + 1/m}{2 + 1/m}.$

After some manipulation, the expression above yields

$p_k = \left(1 + \frac{1}{m}\right) \mathrm{B}\!\left(k+1,\, 2 + \frac{1}{m}\right)$

with $\mathrm{B}(a,b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$ being the Beta function. As a consequence, since $\mathrm{B}(a,b) \sim a^{-b}$ for large a and fixed b, we get $p_k \sim k^{-(2+1/m)}$. This is identical to saying that $p_k$ follows a power-law distribution with exponent $\alpha = 2 + 1/m$. Typically, this places the exponent between 2 and 3, which is the case for many real-world networks. Price tested his model by comparing it to citation network data and concluded that the resulting m is feasible to produce a sufficiently good power-law distribution.
Generalization
It is straightforward to generalize the above results to the case $k_0 \neq 1$. Basic calculations show that

$p_k = \frac{\mathrm{B}(k + k_0,\, 2 + k_0/m)}{\mathrm{B}(k_0,\, 1 + k_0/m)},$

which once more yields a power-law distribution of $p_k$, with exponent $\alpha = 2 + k_0/m$ for large k and fixed $k_0$.
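The stationary solution can be checked numerically. The following C sketch (an illustrative implementation, not code from Price's paper) grows a network using a standard trick: attachment proportional to k + 1 is realized by copying the target of a uniformly random existing edge with probability m/(m+1), and picking a uniformly random existing node otherwise.

    /* price_model.c -- illustrative simulation of Price's citation model. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 100000  /* number of nodes (papers) to grow             */
    #define M 3       /* out-degree: citations made by each new paper */

    int main(void) {
        static int target[N * M]; /* flat list of all edge targets       */
        static int indeg[N];      /* in-degree (citation count) per node */
        int nedges = 0;
        srand(42);
        for (int n = 1; n < N; n++) {      /* node 0 is the initial paper */
            for (int e = 0; e < M; e++) {
                int t;
                /* With probability M/(M+1), attach preferentially by
                 * copying the target of a uniformly random existing edge
                 * (weight proportional to in-degree); otherwise attach to
                 * a uniformly random existing node. Together this gives
                 * attachment proportional to (in-degree + 1). */
                if (nedges > 0 && rand() % (M + 1) < M)
                    t = target[rand() % nedges];
                else
                    t = rand() % n;
                target[nedges++] = t;
                indeg[t]++;
            }
        }
        /* Empirical in-degree distribution; the tail should follow
         * p_k ~ k^-(2 + 1/M), here an exponent of about 2.33. */
        int hist[64] = {0};
        for (int i = 0; i < N; i++)
            if (indeg[i] < 64) hist[indeg[i]]++;
        for (int k = 0; k < 64; k++)
            printf("%2d  %.6f\n", k, (double)hist[k] / N);
        return 0;
    }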
Properties
The key difference from the more recent Barabási–Albert model is that the Price model produces a graph with directed edges while the Barabási–Albert model is the same model but with undirected edges. The direction is central to the citation network application which motivated Price. This means that the Price model produces a directed acyclic graph and these networks have distinctive properties.
For example, in a directed acyclic graph both longest paths and shortest paths are well defined. In the Price model the length of the longest path from the n-th node added to the network to the first node in the network scales logarithmically, as $\sim \ln(n)$.
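Because every edge points from a newer node to an older one, the node insertion order is already a reverse topological order, so the longest path can be found in a single dynamic-programming pass. The sketch below reuses the target[] layout and the N constant of the simulation above and is illustrative only:

    /* Longest path in a citation DAG grown by the Price model. Node n's
     * edges occupy positions (n-1)*m .. (n-1)*m + m - 1 of target[],
     * matching the layout of the simulation sketch above. */
    int longest_path(const int *target, int nnodes, int m)
    {
        static int L[N];   /* L[i] = longest path starting at node i */
        int best = 0;
        L[0] = 0;
        for (int n = 1; n < nnodes; n++) {
            L[n] = 0;
            for (int e = 0; e < m; e++) {
                int t = target[(n - 1) * m + e];  /* n's e-th citation */
                if (L[t] + 1 > L[n])
                    L[n] = L[t] + 1;
            }
            if (L[n] > best)
                best = L[n];
        }
        return best;   /* expected to grow roughly like log(nnodes) */
    }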
Notes
Price was able to derive these results analytically, but this was as far as he could get without computational resources. Fortunately, much of the work dedicated to preferential attachment and network growth has been enabled by recent technological progress.
References
Sources
Mathematical modeling
Networks | Price's model | [
"Mathematics"
] | 939 | [
"Applied mathematics",
"Mathematical modeling"
] |
40,949,171 | https://en.wikipedia.org/wiki/Valley%20of%20Peace%20initiative | The Valley of Peace initiative is an effort to promote economic cooperation between Israel, Jordan, and Palestine based around efforts and joint projects in the Arava/Arabah Valley, along which runs the southern portion of the Israel - Jordan border. It received the personal attention and support of Shimon Peres, President of Israel. The initiative involved ongoing joint efforts by regional leaders to launch joint new industrial and economic projects, which will create new local businesses and job growth, and promote ongoing cooperation.
This effort also fits with other new trends and efforts within Israeli and Palestinian society to promote reconciliation based on joint economic effort and dialogue between both groups.
One major component of this plan is the construction and operation of Qualifying Industrial Zones. These are industrial facilities within Jordan and Egypt which can serve as centers of collaborative effort.
Overview and current status
The idea for this project began in 2005, when Israel, Jordan and the Palestinian Authority asked the World Bank to analyze the feasibility of this idea.
The formal proposal for the Valley of Peace initiative began with a joint proposal in 2008 to build a canal between the Red and Dead Seas, desalinating the water, producing hydroelectric power and yielding profits, clean water, jobs and potentially unprecedented regional cooperation. The study concluded in 2013, and an agreement was signed in 2013 by Israel, Jordan, and the Palestinian Authority to move ahead with the plan.
In February 2015, Israel and Jordan signed an agreement to exchange water and jointly convey Red Sea brine to the shrinking Dead Sea. The agreement was reported to be worth about $800 million. It was the result of a memorandum of understanding signed among Israeli, Jordanian and Palestinian officials on December 9, 2013, in Washington. Under this agreement, Jordan and Israel will share the potable water produced by a future desalination plant in Aqaba, while a pipeline will supply saltwater to the Dead Sea.
In December 2015, Israel and Jordan formally released the technical plans to move ahead with this project.
A new desalination plant to be built near the Jordanian tourist resort of Aqaba would convert salt water from the Red Sea into fresh water for use in southern Israel and southern Jordan; each region would get eight billion to 13 billion gallons a year. The process would also produce a comparable volume of brine as a waste product; the brine would be piped more than 100 miles to help replenish the Dead Sea, already known for its high salt content. This would reinforce the status of the Dead Sea as an important economic resource for both nations, in multiple areas including tourism, industry and business.
In July 2017, Israel and the Palestinian Authority announced a new deal to provide drinking water for millions of Palestinians. This was part of the larger deal between Israel, Jordan and Palestine to build a 220-kilometer (137-mile) pipeline to convey water from the Red Sea to the Dead Sea. One benefit of this canal would be to replenish the dwindling Dead Sea. Also, the water in this canal will generate electricity for local towns, and will power a desalination plant to produce drinking water.
Original plan and features, 2008
The effort first got attention in 2008, when various regional leaders began to promote this set of ideas.
One major part of the plan includes the private sector development of a $3 billion, 166 km-long (103-mile) canal system along the Arava, known as the Two Seas Canal. This engineering scheme would bring Red Sea water to the Dead Sea and could provide additional projects and benefits to the region and increase cooperation between Israelis, Jordanians, and Palestinians, through greater development and economic integration. Some environmentalists have criticized the plan, saying that rehabilitation of the Jordan River would be a better way to save the Dead Sea, and would bring less disruption.
This Valley of Peace is part of a 520-kilometer [323-mile] corridor being proposed by Israeli President Shimon Peres for regional economic development. About 420 km of the corridor runs along the Jordanian border, with no border fences, and another 100 km touches on the Palestinian territories. Other projects involve the German, Japanese, and Turkish governments and are slated to create up to a million new jobs in Israel and the West Bank.
Original project contents, 2008
As first proposed in 2008, the plan encompassed a number of items. Some possible future developments along the canal may include convention centers, hotels for up to 200,000 people, restaurants, parks, and artificial lakes and lagoons, and greenhouses for winter fruits and vegetables. A high-speed train line and highway would run along the canal allowing travel between the Dead and Red Seas within an hour. The area may also become a free-trade zone, thus attracting investment from around the world.
The canal might also include a major desalinization plant. In May 2008, it was announced that this project was getting close to being implemented.
PIEFZA is a Palestinian economic organization designed to promote participation in the industrial parks which will be created by this effort. The project will also include a number of other separate efforts and projects, including:
An industrial area in Jenin to create jobs, with support from Germany, which pledged about $30 million to help with this. This would be used for businesses in the fields of textile, wood, and food products. A similar effort has already succeeded in the Jordanian industrial zone.
An agro-industrial park to be built near Jericho, with support from Japan.
The "Erez Industrial Estate" project - a free industrial zone near Erez crossing, with support from Turkey. It will be supported through a joint effort by Turkish, Israeli and Palestinian private sectors, known as the Ankara Forum which will also pursue other common economic goals and industrial projects. The project received the support of the Israeli, Turkish and Palestinian presidents during a trilateral summit in Ankara the previous November, according to an aide to Peres. The seventh meeting of the Ankara Forum for Economic Cooperation on November 13, 2007, brought together business representatives from the Union of Chambers and Commodity Exchanges of Turkey (TOBB), the Federation of Palestinian Chambers of Commerce, Industry and Agriculture and the Israeli Manufacturers' Association. Turkey has played a crucial role to bring peace to the region by playing facilitator role between the regional countries.
A hi-tech industrial park in Nazareth by entrepreneur Stef Wertheimer, which was completed and opened on April 23, 2013. Wertheimer has stated, "Coexistence in the industrial park in Arab Nazareth is a good example of coexistence. When people work together, they have no time for nonsense. They're too tired at night to commit terrorist acts. They're satisfied, they engage in producing. They work together, not against each other." Wertheimer has founded several industrial parks in Israel. "The idea of industrial parks in the Middle East and on the borders between Israel and its neighbors is that the parks will bring industry and provide jobs, which will keep people busy working, instead of engaging in terrorism," explains Wertheimer. One park is the Tefen Industrial Park, which includes everything from transportation to cultural and educational facilities. He has established seven industrial parks – in Tefen, Tel-Hai, Dalton, Lavon and now Nazareth in the Galilee; in Omer in the Negev; and another in Turkey.
A car factory to be operated jointly by Israel and Jordan. It would manufacture cars made by Toyota and Renault.
An airport in Eilat shared by Jordan and Israel, which will facilitate future cooperation in tourism between the two countries. It would include a Jordanian terminal, for tourists to Jordan, and an Israeli terminal, leading tourism to Eilat.
A railway connection between Jordan and Israel, which would facilitate shipments of goods between the two countries.
Agro-industrial development in Jericho, enabling the region to be a major agricultural source for the Middle East. Japan has offered to aid in the development of this.
A joint Israeli-Palestinian university and medical center. This effort would be led by Ali Dogramaci, a Turkish professor, and will be located on the Israeli side of the Green Line, between Afula and Jenin. As of June 2009, the status of this project is in doubt due to some objections raised by some officials within the Israeli Government. However, the Turkish Government has given this effort some encouragement.
Project history
Project origins, 2005–2008
The idea for this project began in 2005, when Israel, Jordan and the Palestinian Authority asked the World Bank to analyze the feasibility of the idea.
In July 2006, Japan announced a plan for peace called "Corridor for Peace and Prosperity", which would be based on common economic development and effort, rather than on continuous contention over land. Shimon Peres gave this idea much attention during his participation in an international conference in New York in September 2006 which was organized by former U.S. President Bill Clinton.
In March 2007, at a two-day conference in Tokyo which included officials from Japan, Israel and the Palestinian Authority, Japan discussed its plan for peace based on common economic development and effort. Both sides stated their support.
In March 2007, the Israeli Cabinet officially decided to adopt the Peace Valley plan, which would entail promotion of and cooperation on economic development for Palestinians. However, some news reports indicated there was little chance of movement due to lack of attention by Prime Minister Ehud Olmert and the government of Israel.
In his inaugural speech in July 2007, Peres mentioned this effort, and asserted that there was great potential for cooperation among Israel, Palestinians, and Jordan. He also noted this might mean positive support from Persian Gulf states. In August 2007, Peres met with several Israeli businessmen to discuss ways to press the plan forward. Peres stated that the plan might have many positive effects which might help promote peace.
In August 2007, Foreign Ministers of Israel, Jordan, the Palestinian Authority, and Japan met in Jericho, and formally agreed to go ahead with this plan. A ceremony took place that month in Jericho formally launching the project. It was attended by Israeli Foreign Minister Tzipi Livni, Palestinian negotiator Saeb Erekat, Japanese Foreign Minister Taro Aso and Jordanian Foreign Minister Abdul-Ilah Khatib.
In January 2008, Peres announced that the plan had moved closer to realization, as new details were announced for implementation of joint economic effort in four locations in the West Bank. This included specific plans for industrial projects, and a jointly-built university, and investments from several countries, including Japan, Turkey and Germany. Peres discussed this with Tony Blair during Blair's visit to the Mideast in February 2008. Peres said that efforts were moving ahead.
USAID and the World Bank have reviewed many of the specific proposals in depth, and issued a critique of many strengths and weaknesses of the plan. In May 2008 Tony Blair announced a new plan for peace and for Palestinian rights, based heavily on the ideas on the Peace Valley plan.
In May 2008, Peres hosted a conference in celebration of Israel's 60th anniversary, called "Facing Tomorrow". He addressed numerous issues related to Israel's future. He discussed the Peace Valley initiative with numerous foreign leaders. President George Bush expressed support for the idea. Peres said that the initiative could bring lasting peace and transformation to the region. Regarding Palestinians, he said,
Public statements
Benjamin Netanyahu, a former Finance Minister of Israel and the former Prime Minister of Israel has repeatedly made public statements during the 2009 Israeli elections which advocated an approach to peace based on economic cooperation and joint effort, rather than continuous contention over political and diplomatic issues. He raised these ideas during discussions with U.S. Secretary of State Condoleezza Rice. Netanyahu continued to advocate these ideas as the Israeli elections got nearer and plans to execute them after he assumed office.
Netanyahu has said:
Right now, the peace talks are based on only one thing, only on peace talks. It makes no sense at this point to talk about the most contractible issue. It's Jerusalem or bust, or right of return or bust. That has led to failure and is likely to lead to failure again....We must weave an economic peace alongside a political process. That means that we have to strengthen the moderate parts of the Palestinian economy by handing rapid growth in those areas, rapid economic growth that gives a stake for peace for the ordinary Palestinians."
Similarly, in a Jerusalem Post interview, Tony Blair, the special envoy for the Quartet, said in May 2009:
Question: ...we're hearing about a determination to build from the bottom up with the Palestinians, including assurances that economic projects that had been stymied will now be advanced...
Blair: ...you have to build from the bottom up as well as negotiate from the top down...because once you take the three "headings" - politics, economics and security... Each of these things take decisions...it will become apparent, whether Israelis are prepared to build from the bottom up, and whether Palestinians understand that Israel will only tolerate a Palestinian state that is a stable and secure neighbor...
...people ask me, why are you bothered about whether there's a bit of agri-industrial thing around Jericho. And I say, because it matters. The detail on the ground really matters. Just supposing you've [created the conditions] in the Jericho area to exploit the [tourism] potential it has got. You're creating a whole set of stake-holders who, when it comes to those difficult concessions, are going to say, "We want the state." They are then believing in a reality, not a shibboleth...
Implementation and effects
The Bethlehem Small Enterprise Center opened in early 2008 with funding from Germany, and has helped Palestinian small businesses in various areas, such as helping printers to improve software and olive wood craftsmen to market their products.
In 2008, the economy in the West Bank improved gradually. Economic growth for the occupied areas reached about 4-5% and unemployment dropped about 3%. Israeli figures indicated that wages in the West Bank rose more than 20% in 2008 and trade rose about 35%. Tourism in Bethlehem increased to about twice its previous levels, and tourism increased by 50% in Jericho. In 2009, an economic boom began with growth reaching 7 percent, higher than in Israel or the West. Tourism to Bethlehem, which had doubled to 1 million in 2008, rose to nearly 1.5 million in 2009. New car imports increased by 44 percent. New shopping malls opened in Jenin and Nablus. Palestinian developers are planning to build the first modern Palestinian city, Rawabi.
In 2009, efforts continued to build Palestinian local institutions and governments from the ground up. Much of this work was done by Tony Blair and U.S. General Keith Dayton. Some analysts saw this as a more substantial way to lay a groundwork for viable institutions and for local peace.
See also
Israeli–Palestinian conflict
Projects working for peace among Arabs and Israelis
Middle East economic integration
Red Sea–Dead Sea Canal
Israeli–Palestinian economic peace efforts
Specific groups
Trade Unions Linking Israel and Palestine
Arava Institute for Environmental Studies
Alliance for Middle East Peace
References
External links
Official websites
Israel Ministry of Foreign Affairs
Project website at Japan foreign ministry
Industrial Estates in the Palestinian Authority, Peres Center for Peace website
Palestinian Investment Promotion Agency official website.
Palestinian Industrial Estates & Free Zones Authority official website.
Israeli-Palestinian Chamber of Commerce
US Palestinian Partnership Official agency to promote joint partnership and economic development efforts.
Middle East Investment Initiative Promotes economic development and cooperation, in conjunction with public and private organizations
Media coverage and private sites
Peace and Prosperity in the West Bank? Can a breakthrough experiment finally bring peace to the West Bank? -- Documentary and article at PBS website, 7/10/09.
In West Bank, the Economy Offers a Ray of Hope, By Ethan Bronner, November 10, 2009.
Can West Bank improvements hold in 2010?, By Leslie Susser, jta.org, January 11, 2010.
Project Video showing a computer generated design
Blog entry listing new projects Maximizing Progress blog.
Gertner Architects & Urban Planners master planning for Peace Valley project
News coverage of individual sites and effort
Joint Israeli-Palestinian industrial park to be built in north, By Haaretz Service, 8/9/08. "A new joint Israeli-Palestinian industrial park will be built next year in the northern Gilboa region. It will cost $200 million and occupy some 350 acres of land, was approved on Monday by a joint Israeli-American committee, and will be built under the auspices of the US ambassador to Israel James Cunningham, the U.S. special envoy to the Mideast General James L. Jones and the U.S. Security Coordinator for Israel and the Palestinian Authority Keith Dayton."
Israeli, Palestinian mayors pitch rare joint industrial project, By Gil Shefler, Jewish Telegraphic Agency, September 1, 2009. "The Jewish mayor of a region in northern Israel adjacent to the West Bank announced a plan with the governor of the West Bank city of Jenin for a joint industrial zone, coexistence projects and a sports league that would bring together the region’s Israeli and Palestinian children."
Israel-Palestine: third party industrial zones, Bitter Lemons blog website, 5/24/07.
Town on Israeli-Palestinian border finds a good balance, Inexpensive goods on the West Bank side of Barta’a attract many Israelis for shopping, By Linda Gradstein, February 17, 2012. In Barta’a, Israelis and Palestinians mix freely. The town is legally divided, with West Barta’a inside Israel and East Barta’a in the West Bank. But there’s no physical barrier between the two sides, and East Barta’a has developed a thriving market of hundreds of small stores selling everything from coffee sets to sheets to food to special teddy bears for Valentine's Day.
Palestinians mull 'economic peace' plan, June 2013, BBC. Yolande Knell takes a look at the economic outlook for Palestinians. With talks between Israel and the Palestinians stalled, US Secretary of State John Kerry returns to the Middle East with the incentive of a major investment plan for the Palestinians, dependent on progress towards a peace deal. The plan could boost the struggling Palestinian economy, but there are some who fear it could come at the price of their political demands.
Israeli–Palestinian peace process
Israeli–Palestinian joint economic efforts
Irrigation projects
Water politics in the Middle East
Environment of Israel
Environment of Jordan
Israel–Jordan relations
Jordan–State of Palestine relations
Jordan in the Arab–Israeli conflict | Valley of Peace initiative | [
"Engineering"
] | 3,794 | [
"Irrigation projects"
] |
40,951,831 | https://en.wikipedia.org/wiki/Viral%20metagenomics | Viral metagenomics uses metagenomic technologies to detect viral genomic material from diverse environmental and clinical samples. Viruses are the most abundant biological entity and are extremely diverse; however, only a small fraction of viruses have been sequenced and only an even smaller fraction have been isolated and cultured. Sequencing viruses can be challenging because viruses lack a universally conserved marker gene so gene-based approaches are limited. Metagenomics can be used to study and analyze unculturable viruses and has been an important tool in understanding viral diversity and abundance and in the discovery of novel viruses. For example, metagenomics methods have been used to describe viruses associated with cancerous tumors and in terrestrial ecosystems.
History
The traditional methods for discovering, characterizing, and assigning viral taxonomy to viruses were based on isolating the virus particle or its nucleic acid from samples. The virus morphology could be visualized using electron microscopy, but only if the virus could be isolated at a high enough titer to be detected. The virus could be cultured in eukaryotic cell lines or bacteria, but only if the appropriate host cell type was known, and the nucleic acid of the virus could be detected using PCR, but only if a consensus primer was known.
Metagenomics requires no prior knowledge of the viral genome as it does not require a universal marker gene, a primer or probe design. Because this method uses prediction tools to detect viral content of a sample, it can be used to identify new virus species or divergent members of known species.
The earliest metagenomic studies of viruses were carried out on ocean samples in 2002. The sequences that were matched to referenced sequences were predominantly double-stranded DNA bacteriophages and double-stranded algal viruses.
In 2016 the International Committee on Taxonomy of Viruses (ICTV) officially recognized that viral genomes assembled from metagenomic data can be classified using the same procedures for viruses isolated via classical virology approaches.
Challenges
Viral dark matter
In the 2002 metagenomics study the researchers found that 65% of the sequences of DNA and RNA viruses had no matches in the reference databases. This phenomenon of unmatched viral sequences in sequence reference databases is prevalent in viral metagenomics studies and is referred to as “viral dark matter". It is predominantly caused by the lack of complete viral genome sequences of diverse samples in reference databases and the rapid rate of viral evolution.
Virus nucleic acid type diversity
Adding to these challenges, there are seven classes of viruses in the Baltimore classification system, which groups viruses based on their genomic structure and their manner of transcription: among them are double-stranded DNA viruses, single-stranded DNA viruses, double-stranded RNA viruses, and single-stranded RNA viruses. Single-stranded RNA can be positive or negative sense. These different nucleic acid types need different sequencing approaches, and there is no universal gene marker that is conserved across all virus types. Gene-based approaches can only target specific groups of viruses (such as RNA viruses that share a conserved RNA polymerase sequence).
DNA virus bias
There is still a bias towards DNA viruses in reference databases. Common reasons for this bias are that RNA viruses mutate more rapidly than DNA viruses, that DNA is easier to handle from samples while RNA is unstable, and that more steps are needed for RNA metagenomic analysis (reverse transcription).
Sequence contamination
Sequences can be contaminated with the host organism's sequences, which is particularly troublesome if the host organism of the virus is unknown. There can also be contamination from nucleic acid extraction and PCR.
Methods
Untargeted metagenomics
Metagenomic analysis uses whole genome shotgun sequencing to characterize microbial diversity in clinical and environmental samples. Total DNA and/or RNA are extracted from the samples and used to prepare a DNA or RNA library for sequencing. These methods have been used to sequence the whole genome of Epstein–Barr virus (EBV) and HCV; however, contaminating host nucleic acids can reduce the sensitivity to the target viral genome, with the proportion of reads related to the target sequence often being low.
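To make the idea concrete, the following toy Python sketch classifies shotgun reads by exact k-mer matching against a miniature reference set. All sequences, the k value, and the reference names are made-up illustrations; real pipelines rely on dedicated classifiers and far larger databases. Reads with no database hit correspond to the "dark matter" discussed above.

```python
K = 5  # k-mer length; real classifiers typically use k around 21-31

# Made-up miniature "reference database" of viral genomes.
references = {
    "phage_A": "ATGCGTACGTTAGCATGCAAGT",
    "algal_virus_B": "TTGACCGGTAACGTTACCGGAT",
}

# Index every k-mer of every reference genome.
kmer_index = {}
for name, genome in references.items():
    for i in range(len(genome) - K + 1):
        kmer_index.setdefault(genome[i:i + K], set()).add(name)

def classify(read):
    """Assign a read to the reference sharing the most k-mers with it."""
    votes = {}
    for i in range(len(read) - K + 1):
        for name in kmer_index.get(read[i:i + K], ()):
            votes[name] = votes.get(name, 0) + 1
    # Reads with no database match are the "viral dark matter".
    return max(votes, key=votes.get) if votes else "unmatched (dark matter)"

for read in ["GTACGTTAGCA", "ACCGGTAACGT", "CCCCCGGGGGA"]:
    print(read, "->", classify(read))
```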
The IMG/VR system and IMG/VR v.2.0 are the largest interactive public virus databases, with over 760,000 metagenomic viral sequences and isolate viruses, and serve as a starting point for the sequence analysis of viral fragments derived from metagenomic samples.
Targeted metagenomics: amplicon sequencing
While untargeted metagenomics and metatranscriptomics do not need a genetic marker, amplicon sequencing does. It uses a gene that is highly conserved as a genetic marker, but because of the varied nucleic acid types, the marker used has to be specific to a group of viruses. This is done via PCR amplification with primers that are complementary to a known, highly conserved nucleotide sequence. PCR is then followed by whole genome sequencing methods, and this approach has been used to track the Ebola virus, Zika virus, and COVID-19 epidemics. PCR amplicon sequencing is more successful for whole genome sequencing of samples with low viral concentrations. However, with larger viral genomes and the heterogeneity of RNA viruses, multiple overlapping primers may be required to cover the amplification of all genotypes. PCR amplicon sequencing requires knowledge of the viral genome prior to sequencing and appropriate primers, and is highly dependent on viral titers; however, it is a cheaper evaluation method than metagenomic sequencing when studying known viruses with relatively small genomes.
Targeted metagenomics: enrichment with probes
Target enrichment is a culture-independent method that sequences viral genomes directly from a sample using small RNA or DNA probes complementary to the pathogen's reference sequence. The probes, which can be bound to a solid phase, capture and pull down complementary DNA sequences in the sample. The presence of overlapping probes increases the tolerance for primer mismatches, but their design requires substantial cost and time, which limits rapid response. DNA capture is followed by brief PCR cycling and shotgun sequencing. Success of this method depends on available reference sequences used to create the probes, and it is not suitable for the characterization of novel viruses. This method has been used to characterize large and small viruses such as HCV, HSV-1, and HCMV.
Limitations
Viral metagenomics methods can produce erroneous chimeric sequences. These can include in vitro artifacts from amplification and in silico artifacts from assembly. Chimeras can form between unrelated viruses, as well as between viral and eukaryotic sequences. The likelihood of errors is partially mitigated by greater sequencing depth, but chimeras can still form in areas of high coverage if the reads are highly fragmented.
Applications
Agriculture
Plant viruses pose a global threat to crop production, but through metagenomic sequencing and viral database creation, modified plant viruses can be used to aid plant immunity as well as to alter physical appearance. Data obtained on plant virus genomes from metagenomic sequencing can be used to create cloned viruses with which to inoculate plants in order to study viral components and perform biological characterization of viral agents with increased reproducibility. Engineered mutant virus strains have been used to alter the coloration and size of various ornamental plants and to promote the health of crops.
Ecology
Viral metagenomics contributes to viral classification without the need for culture-based methodologies and has provided vast insights into viral diversity in any system. Metagenomics can be used to study viruses' effects on a given ecosystem and on the microbiome, as well as to monitor viruses in an ecosystem for possible spillover into human populations. Within ecosystems, viruses can be studied to determine how they compete with each other, as well as their effects on the functions of the host. Viral metagenomics has been used to study unculturable viral communities in marine and soil ecosystems.
Infectious disease research
Viral metagenomics is readily used to discover novel viruses, with a major focus on those zoonotic or pathogenic to humans. Viral databases obtained from metagenomics provide quick-response methods to identify viral infections as well as to detect drug-resistant variants in clinical samples. The contributions of viral metagenomics to viral classification have aided pandemic surveillance efforts and made infectious disease surveillance and testing more affordable. Since the majority of human pandemics are zoonotic in origin, metagenomic surveillance can provide faster identification of novel viruses and their reservoirs.
One such surveillance program is the Global Virome Project (GVP), an international collaborative research initiative based at the One Health Institute at the University of California, Davis. The GVP aims to boost infectious disease surveillance around the globe by using low-cost sequencing methods in high-risk countries to prevent future virus outbreaks.
Medicine
Viral metagenomics has been used to test for virus-related cancers and difficult-to-diagnose cases in clinical diagnostics. This method is most often used when conventional and advanced molecular testing cannot find a causative agent for disease. Metagenomic sequencing can also be used to detect pathogenic viruses in clinical samples and provide real-time data on a pathogen's presence in a population.
The methods used for clinical viral metagenomics are not standardized, but guidelines have been published by the European Society for Clinical Virology. A mixture of different sequencing platforms is used for clinical viral metagenomics, the most common being instruments from Illumina and Oxford Nanopore Technologies. There are also several different protocols, both for wet-lab work and for bioinformatic analysis, that are in use.
See also
Epidemiology and sewage
Pathogenomics
References
External links
IMG/VR The IMG Viral Database (IMG/VR).
CAMERA Cyberinfrastructure for Metagenomics, data repository and tools for metagenomics research.
GOLD Genomes OnLine Database (GOLD).
IMG/M The Integrated Microbial Genomes system, for metagenome analysis by the DOE-JGI.
MEGAN MEtaGenome ANalyzer. A stand-alone metagenome analysis tool.
MetaGeneMark MetaGeneMark for MetaGenome Gene Finding
Metagenomics and Our Microbial Planet A website on metagenomics and the vital role of microbes on Earth from the National Academies.
Metagenomics at the European Bioinformatics Institute Analysis and archiving of metagenomic data.
Metagenomics: Sequences from the Environment free ebook from NCBI Bookshelf.
MG-RAST Metagenomics Rapid Annotation using Subsystem Technology server
The New Science of Metagenomics: Revealing the Secrets of Our Microbial Planet A report released by the National Research Council in March 2007. Also, see the Report In Brief.
http://www.globalviromeproject.org/ Official site of the Global Virome Project
https://www.usaid.gov/ept2 Emerging pandemic threats program EPT-2
https://www.ecohealthalliance.org/program/predict
Bioinformatics
Environmental microbiology
Pathogen genomics
Metagenomics
Microbiology techniques
Virology | Viral metagenomics | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 2,255 | [
"Biological engineering",
"Bioinformatics",
"Molecular genetics",
"Microbiology techniques",
"DNA sequencing",
"Environmental microbiology",
"Pathogen genomics"
] |
40,956,474 | https://en.wikipedia.org/wiki/R%C3%BCchardt%20experiment | The Rüchardt experiment, invented by Eduard Rüchardt, is a famous experiment in thermodynamics, which determines the ratio of the molar heat capacities of a gas, i.e. the ratio of $C_p$ (heat capacity at constant pressure) and $C_V$ (heat capacity at constant volume), denoted by $\gamma$ (gamma, for ideal gas) or $\kappa$ (kappa, isentropic exponent, for real gas). The ratio arises because the temperature of a gas changes as pressure changes.
The experiment directly yields the heat capacity ratio or adiabatic index of the gas, which is the ratio of the heat capacity at constant pressure to the heat capacity at constant volume. This ratio is sometimes also known as the isentropic expansion factor.
Background
If a gas is compressed adiabatically, i.e. without heat exchange with the surroundings, the temperature rises with the pressure increase, in contrast to isothermal compression, in which the work performed is dissipated as heat to the surroundings. The exponent $\kappa$ with which the expansion of the gas can be calculated is called the isentropic or adiabatic coefficient. Its value is determined by the Rüchardt experiment.
An adiabatic and reversible change of state is isentropic (the entropy S remains the same while the temperature T changes). In practice, real processes are only approximately isentropic; for example, a steam turbine is not isentropic, as friction, throttling and shock processes produce entropy.
Experiment
A typical experiment consists of a glass tube of volume V and cross-section A, which is open at one of its ends. A ball (or sometimes a piston) of mass m with the same cross-section, creating an air-tight seal, is allowed to fall under gravity g. The entrapped gas is first compressed by the weight of the piston, which leads to an increase in temperature. As the piston falls, a gas cushion is created and the piston bounces. A harmonic oscillation occurs, which slowly damps out. The result is a rapid sequence of expansions and compressions of the gas. The picture shows a revised version of the original Rüchardt setup: the sphere oscillating inside the tube is here replaced by a "breast-pump" which acts as an oscillating glass piston; in this new setup three sensors allow measuring in real time the piston oscillations as well as the pressure and temperature oscillations of the air inside the bottle (more details may be found in the references).
According to Figure 1, the piston inside the tube is in equilibrium if the pressure $p$ inside the glass bottle is equal to the sum of the atmospheric pressure $p_0$ and the pressure increase due to the piston weight:

$p = p_0 + \frac{mg}{A}$

When the piston moves beyond the equilibrium by a distance $dx$, the pressure changes by $dp$. A force $F$ will be exerted on the piston, equal to

$F = A \, dp$

According to Newton's second law of motion, this force will create an acceleration $a$ equal to

$a = \frac{d^2x}{dt^2} = \frac{A}{m} \, dp$

As this process is adiabatic, the equation for the ideal gas (Poisson's equation) is:

$p V^{\gamma} = \text{constant}$

It follows by differentiation from the equation above that:

$dp = -\gamma \frac{p}{V} \, dV$

If the piston moves by a distance $dx$ in the glass tube, the corresponding change in volume will be

$dV = A \, dx$

Substituting this expression for $dV$ into the expression for $dp$, and the result into Newton's second law, gives:

$m \frac{d^2x}{dt^2} = -\gamma \frac{p A^2}{V} x$

This is the differential equation of a harmonic oscillation, from which the angular frequency $\omega$ can be deduced:

$\omega^2 = \frac{\gamma p A^2}{m V}$

From this, the period $T$ of the harmonic oscillation performed by the ball is:

$T = 2\pi \sqrt{\frac{m V}{\gamma p A^2}}$

Measuring the period of oscillation $T$ and the pressure $p$ in the tube yields the equation for the adiabatic exponent:

$\gamma = \frac{4 \pi^2 m V}{T^2 p A^2}$
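As an illustration of the final formula, the following Python sketch computes γ from a set of plausible textbook-style values; the mass, bore, volume, and period below are assumptions, not measurements from Rüchardt's original apparatus.

```python
import math

m = 16.7e-3    # ball mass (kg) - assumed
d = 16.0e-3    # tube bore (m) - assumed
V = 5.45e-3    # vessel volume (m^3) - assumed
T = 0.79       # measured oscillation period (s) - assumed
p0 = 101325.0  # atmospheric pressure (Pa)
g = 9.81       # gravitational acceleration (m/s^2)

A = math.pi * d ** 2 / 4   # tube cross-section
p = p0 + m * g / A         # equilibrium pressure inside the vessel
gamma = 4 * math.pi ** 2 * m * V / (T ** 2 * p * A ** 2)
print(f"adiabatic exponent gamma = {gamma:.2f}")  # ~1.4, as expected for air
```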
List of various versions of the Rüchardt experiment
In 1929, Rinkel proposed a different method to calculate $\gamma$ while using the Rüchardt apparatus: he noted that it may be shown that the vertical distance $L$ which the sphere falls before it begins to rise is $L = \frac{2 m g V}{\gamma p A^2}$, so $\gamma$ may be calculated from measured values of $L$, $m$, $V$, $p$ and $A$.
In 1951, Koehler, and later, in 1972, Flammersfeld, introduced a trick in the original Rüchardt setup to increase the number of oscillations, which is limited by unavoidable friction damping and gas leakage (through the piston-tube seal): they made a thin hole in the tube (at half-height) and provided a gas-feeding pump to keep the pressure inside the vessel constant. By properly trimming the gas inlet flux (through a throttling valve) they obtained the following result: during the oscillations the piston is pushed up by the gas overpressure until it crosses the hole position; then the gas leakage through the hole reduces the pressure, and the piston falls back. The force acting on the piston varies at a rate that is regulated by the piston oscillation frequency, leading to a forced oscillation; fine adjustment of the throttling valve allows maximum amplitude to be achieved at resonance.
In 1958, Christy and Rieser used only a gas-feeding pump to stabilize the gas pressure. A slightly different solution was found in 1964 by Hafner who used a tapered tube (conical: slightly larger at the top).
In 1959, Taylor used a column of mercury oscillating inside a U-shaped tube instead of the Rüchardt sphere.
In 1964, Donnally and Jensen used a variable load attached to the Rüchardt sphere in order to allow frequency measurements with different oscillating masses.
In 1967, Lerner suggested a modified version of the Taylor method (with mercury replaced by water).
In 1979, Smith reported a simplified version of the complex Rüchardt-resonance method, originally invented by Clark and Katz, in which an oscillating magnetic piston is driven into resonance by an external coil.
In 1988, Connolly suggested the use of a photogate to measure more precisely the frequency of the Rüchardt sphere.
In 2001, Severn and Steffensen used a pressure transducer to monitor the pressure oscillations in the original Rüchardt setup.
In 2001, Torzo, Delfitto, Pecori and Scatturin implemented a version of the Rüchardt apparatus (shown in the picture above) using three sensors: a sonar that monitors the breast-pump oscillations, and pressure and temperature sensors that monitor the changes in pressure and temperature inside the glass vessel.
References
Thermodynamics
Chemical engineering
Physics experiments
Chemistry experiments | Rüchardt experiment | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 1,351 | [
"Physics experiments",
"Chemical engineering",
"Experimental physics",
"Thermodynamics",
"nan",
"Dynamical systems"
] |
56,825,635 | https://en.wikipedia.org/wiki/Low%20thrust%20relative%20orbital%20transfer | In orbital mechanics, low-thrust relative transfer is an orbital maneuver in which a chaser spacecraft covers a specified distance relative to a target spacecraft using a continuous low-thrust system with a specific impulse of the order of 4000–8000 s. This is in contrast to conventional impulsive orbital transfers, which use thermal rocket engines that develop specific impulses of the order of 300–400 s. This type of transfer uses low-thrust propulsion systems such as electrically powered spacecraft propulsion and solar sails.
Low-thrust relative transfer uses the orbital relative motion equations, a non-linear set of equations that describes the motion of the chaser spacecraft relative to the target in terms of displacements along the respective axes of an accelerated frame of reference fixed on the target spacecraft. In 1960, W. H. Clohessy and R. S. Wiltshire published the Clohessy-Wiltshire equations, which present a considerably simplified model of orbital relative motion, in which the target is in a circular orbit and the chaser spacecraft is in an elliptical or circular orbit. Since the quantity of available thrust is limited, the transfer is often posed as an optimal control problem subject to the required objective and constraints.
Explanation
Relative motion in orbit means the motion of a spacecraft orbiting a planet relative to another spacecraft orbiting the same planet. There can be one primary spacecraft, known as the target, and another spacecraft with the task of performing the required maneuver relative to the target. Depending on the mission requirements, relative orbital transfers include rendezvous and docking operations and maintaining station relative to the target. Unlike using a thrust impulse to instantaneously change the velocity of the spacecraft, in a non-impulsive transfer there is a continuous application of thrust, so that the spacecraft changes its direction gradually. Non-impulsive transfers rely on low-thrust propulsion. Notable low-thrust propulsion methods include ion propulsion, the Hall-effect thruster and solar-sail systems. The electrostatic ion thruster uses high-voltage electrodes to accelerate ions with electrostatic forces, and achieves a specific impulse within the range of 4000–8000 s.
Mathematical Models
The continuous low-thrust relative transfer can be described in mathematical form by adding components of specific thrust, which act as control inputs, to the equations of motion for relative orbital transfer. Although a number of linearized models have been developed since the 1960s which give simplified sets of equations, one popular model was developed by W. H. Clohessy and R. S. Wiltshire; modified to account for continuous thrust, it can be written as:

$\ddot{x} = 3n^2 x + 2n\dot{y} + u_x$

$\ddot{y} = -2n\dot{x} + u_y$

$\ddot{z} = -n^2 z + u_z$

where:
$x$, $y$ and $z$ are the relative distance components of the chaser in the target-fixed frame of reference
$u_x$, $u_y$ and $u_z$ are the specific thrust components in the form of control inputs along the $x$-, $y$- and $z$-axes of the target-fixed frame of reference
$n$ is the orbital frequency of the target orbit
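A minimal numerical propagation of these controlled equations shows how a constant low thrust shapes the relative trajectory. The following Python sketch uses illustrative values; the mean motion, thrust level, and initial state are assumptions, not mission data.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 0.00113                       # target mean motion, rad/s (~400 km orbit)
u = np.array([0.0, 1e-5, 0.0])    # constant specific thrust, m/s^2

def cw_rhs(t, s):
    """Controlled Clohessy-Wiltshire equations of relative motion."""
    x, y, z, vx, vy, vz = s
    return [vx, vy, vz,
            3 * n**2 * x + 2 * n * vy + u[0],
            -2 * n * vx + u[1],
            -n**2 * z + u[2]]

s0 = [100.0, -500.0, 0.0, 0.0, 0.0, 0.0]  # initial relative state (m, m/s)
sol = solve_ivp(cw_rhs, (0.0, 3000.0), s0, max_step=10.0)
print("relative position after 3000 s (m):", sol.y[:3, -1])
```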
Optimal relative transfers
Since in continuous low-thrust transfers the thrust availability is limited, such transfers are usually subjected to a performance index and final-state constraints, posing the transfer as an optimal control problem with defined boundary conditions. For the transfer to have optimal control-input expenditure, the problem can be written as:

$\min_{u} \; J = \frac{1}{2}\int_{t_0}^{t_f} u^T R \, u \; dt$

subjected to the dynamics of the relative transfer:

$\dot{X}(t) = A X(t) + B u(t)$

and boundary conditions:

$X(t_0) = X_0, \qquad X(t_f) = X_f$

where:
$X$ is the state vector defined as $X = [x \;\; y \;\; z \;\; \dot{x} \;\; \dot{y} \;\; \dot{z}]^T$
$u$ is the control input vector defined as $u = [u_x \;\; u_y \;\; u_z]^T$
$R$ is the weight matrix
$A$ is the state matrix obtained from the Clohessy-Wiltshire equations, such that
$A = \begin{bmatrix} 0_{3\times3} & I_{3\times3} \\ A_{21} & A_{22} \end{bmatrix}, \quad A_{21} = \begin{bmatrix} 3n^2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -n^2 \end{bmatrix}, \quad A_{22} = \begin{bmatrix} 0 & 2n & 0 \\ -2n & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$
$B$ is the input matrix, such that
$B = \begin{bmatrix} 0_{3\times3} \\ I_{3\times3} \end{bmatrix}$
$t_0$ is the time of start of the transfer
$t_f$ is the time of end of the transfer
$X_0$ is the initial value of the state vector
$X_f$ is the final value of the state vector
Sometimes it is also useful to subject the system to control constraints, because in the case of a continuous low-thrust transfer there are always bounds on the availability of thrust. Hence, if the maximum quantity of thrust available is $u_{\max}$, then an additional inequality constraint can be imposed on the optimal control problem posed above as:

$\lVert u(t) \rVert \le u_{\max}$

Additionally, if the relative transfer is occurring such that the chaser and the target spacecraft are very close to each other, collision-avoidance constraints can also be employed in the optimal control problem in the form of a minimum relative distance $\rho_{\min}$, as:

$\lVert [x \;\; y \;\; z]^T \rVert \ge \rho_{\min}$

and, for obvious reasons, the final relative distance implied by the state vector cannot be less than $\rho_{\min}$.
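For the unconstrained version of the problem above (no thrust bound and no collision-avoidance constraint, with R taken as the identity), the minimum-energy control has a classical closed-form solution through the controllability Gramian. The Python sketch below evaluates it for illustrative values; the times, boundary states, and mean motion are assumptions.

```python
import numpy as np
from scipy.linalg import expm

n = 0.00113
A = np.zeros((6, 6))
A[:3, 3:] = np.eye(3)              # kinematics: d(position)/dt = velocity
A[3, 0], A[3, 4] = 3 * n**2, 2 * n
A[4, 3] = -2 * n
A[5, 2] = -n**2
B = np.vstack([np.zeros((3, 3)), np.eye(3)])

t0, tf = 0.0, 3000.0
X0 = np.array([100.0, -500.0, 0.0, 0.0, 0.0, 0.0])
Xf = np.zeros(6)                   # bring the chaser to the origin, at rest

# Controllability Gramian by a simple Riemann sum.
N = 2000
dt = (tf - t0) / N
W = np.zeros((6, 6))
for tau in np.linspace(t0, tf, N + 1):
    Phi = expm(A * (tf - tau)) @ B
    W += Phi @ Phi.T * dt

# Minimum-energy open-loop control:
# u(t) = B^T exp(A^T (tf - t)) W^{-1} (Xf - exp(A (tf - t0)) X0)
lam = np.linalg.solve(W, Xf - expm(A * (tf - t0)) @ X0)
u_opt = lambda t: B.T @ expm(A.T * (tf - t)) @ lam
print("optimal specific thrust at t0 (m/s^2):", u_opt(t0))
```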
See also
Bi-elliptic transfer
Delta-v budget
Geostationary transfer orbit
Halo orbit
Lissajous orbit
List of orbits
Orbital mechanics
References
Astrodynamics
Orbital maneuvers | Low thrust relative orbital transfer | [
"Engineering"
] | 893 | [
"Astrodynamics",
"Aerospace engineering"
] |
56,830,451 | https://en.wikipedia.org/wiki/Workplace%20robotics%20safety | Workplace robotics safety is an aspect of occupational safety and health when robots are used in the workplace. This includes traditional industrial robots as well as emerging technologies such as drone aircraft and wearable robotic exoskeletons. Types of accidents include collisions, crushing, and injuries from mechanical parts. Hazard controls include physical barriers, good work practices, and proper maintenance.
Background
Many workplace robots are industrial robots used in manufacturing. According to the International Federation of Robotics, 1.7 million new robots are expected to be used in factories between 2017 and 2020. Emerging robot technologies include collaborative robots, personal care robots, construction robots, exoskeletons, autonomous vehicles, and drone aircraft (also known as unmanned aerial vehicles or UAVs).
Advances in automation technologies (e.g. fixed robots, collaborative and mobile robots, and exoskeletons) have the potential to improve work conditions but also to introduce workplace hazards in manufacturing workplaces. Fifty-six percent of robot injuries are classified as pinch injuries and 44 percent as impact injuries. A 1987 study found that line workers are at the greatest risk, followed by maintenance workers and programmers. Poor workplace design and human error caused most injuries. Despite the lack of occupational surveillance data on injuries associated specifically with robots, researchers from the US National Institute for Occupational Safety and Health (NIOSH) identified 61 robot-related deaths between 1992 and 2015 using keyword searches of the Bureau of Labor Statistics (BLS) Census of Fatal Occupational Injuries research database (see info from Center for Occupational Robotics Research). Using data from the Bureau of Labor Statistics, NIOSH and its state partners have investigated 4 robot-related fatalities under the Fatality Assessment and Control Evaluation Program. In addition, the Occupational Safety and Health Administration (OSHA) has investigated robot-related deaths and injuries, which can be reviewed at the OSHA Accident Search page. Injuries and fatalities could increase over time because of the increasing number of collaborative and co-existing robots, powered exoskeletons, and autonomous vehicles being introduced into the work environment.
Safety standards are being developed by the Robotic Industries Association (RIA) in conjunction with the American National Standards Institute (ANSI). On October 5, 2017, OSHA, NIOSH and RIA signed an alliance to work together to enhance technical expertise, identify and help address potential workplace hazards associated with traditional industrial robots and the emerging technology of human-robot collaboration installations and systems, and help identify needed research to reduce workplace hazards. On October 16 NIOSH launched the Center for Occupational Robotics Research to "provide scientific leadership to guide the development and use of occupational robots that enhance worker safety, health, and well being". So far, the research needs identified by NIOSH and its partners include: tracking and preventing injuries and fatalities, intervention and dissemination strategies to promote safe machine control and maintenance procedures, and translating effective evidence-based interventions into workplace practice.
Hazards
Many hazards and injuries can result from the use of robots in the workplace. Some robots, notably those in a traditional industrial environment, are fast and powerful. This increases the potential for injury as one swing from a robotic arm, for example, could cause serious bodily harm. There are additional risks when a robot malfunctions or is in need of maintenance. A worker who is working on the robot may be injured because a malfunctioning robot is typically unpredictable. For example, a robotic arm that is part of a car assembly line may experience a jammed motor. A worker who is working to fix the jam may suddenly get hit by the arm the moment it becomes unjammed. Additionally, if a worker is standing in a zone that is overlapping with nearby robotic arms, he or she may get injured by other moving equipment.
There are four types of accidents that can occur with robots: impact or collision accidents, crushing and trapping accidents, mechanical part accidents, and other accidents. Impact or collision accidents generally result from malfunctions and unpredicted movements. Crushing and trapping accidents occur when a part of a worker's body becomes trapped or caught on robotic equipment. Mechanical part accidents can occur when a robot malfunctions and starts to "break down", where the ejection of parts or exposed wires can cause serious injury. Other accidents are general accidents that occur from working with robots.
There are seven sources of hazards that are associated with human interaction with robots and machines: human errors, control errors, unauthorized access, mechanical failures, environmental sources, power systems, and improper installation. Human errors could be anything from one line of incorrect code to a loose bolt on a robotic arm. Many hazards can stem from human-based error. Control errors are intrinsic and are usually not controllable nor predictable. Unauthorized access hazards occur when a person who is not familiar with the area enters the domain of a robot. Mechanical failures can happen at any time, and a faulty unit is usually unpredictable. Environmental sources are things such as electromagnetic or radio interference in the environment that can cause a robot to malfunction. Power systems are pneumatic, hydraulic, or electrical power sources; these power sources can malfunction and cause fires, leaks, or electrical shocks. Improper installation is fairly self-explanatory; a loose bolt or an exposed wire can lead to inherent hazards.
Emerging technologies
Emerging robotic technologies can reduce hazards to workers, but can also introduce new hazards. For example, robotic exoskeletons can be used in construction to reduce load to the spine, improve posture, and reduce fatigue; however, they can also increase chest pressure, limit mobility when moving out of the way of a falling object, and cause balance problems. Unmanned aerial vehicles are being used in the construction industry to do monitoring and inspections of buildings under construction. This reduces the need for humans to be in hazardous locations, but the risk of a UAV collision presents a hazard to workers. For collaborative robots, isolation is not possible. Possible hazard controls include collision avoidance systems, and making the robot less stiff to lessen the impact force. The robotic tech vest is a wearable device, worn by workers in Amazon warehouses, that signals nearby robots to slow down and keep their distance from the wearer.
Hazard controls
There are a few ways to prevent injuries by implementing hazard controls. There can be risk assessments at each of the various stages of a robot's development. Risk assessments can help gather information about a robot's status, how well it is being maintained, and if repairs are needed soon. By being aware of the status of a robot, injuries can be prevented and hazards reduced.
Safeguarding devices can be implemented to reduce the risk of injuries. These can include engineering controls such as physical barriers, guard rails, presence-sensing safeguarding devices, etc. Awareness devices are usually used in conjunction with safeguarding devices. They are usually a system of rope or chain barriers with lights, signs, whistles, and horns. Their purpose is to alert workers or personnel of certain dangers.
Operator safeguards can also be in place. These usually utilize safeguarding devices to protect the operator and reduce risk of injury. Additionally, when an operator is within close proximity of a robot, the working speed of the robot can be reduced to ensure that the operator is in full control. This can be done by placing the robot in the manual or teach mode. It is also crucial to inform the programmer of the robot of what type of work the robot will be doing, how it will interact with other robots, and how it will work in relation to an operator.
Proper maintenance of robotic equipment is also critical in order to reduce hazards. Maintaining a robot ensures that it continues to function properly, thereby reducing the risks associated with a malfunction.
One common safeguard used in industrial settings is the installation of robot safety fencing. These barriers, often made from durable materials such as mesh or polycarbonate, prevent accidental interactions between workers and robotic systems, reducing the risk of injury. Robot safety fencing is particularly important in environments where high-speed or powerful robots are used.
Regulations
Some existing regulations regarding robots and robotic systems include:
ANSI/RIA R15.06
OSHA 29 CFR 1910.333
OSHA 29 CFR 1910.147
ISO 10218
ISO/TS 15066
ISO/DIS 13482
References
Organizational cybernetics
Robotics
Industrial robotics
Military robots
Workplace
Occupational hazards
Occupational safety and health
External links
Center for Occupational Robotics Research of the National Institute for Occupational Safety and Health, in the U.S. | Workplace robotics safety | [
"Engineering"
] | 1,702 | [
"Robotics",
"Automation"
] |
56,835,844 | https://en.wikipedia.org/wiki/Cryptographic%20multilinear%20map | A cryptographic $n$-multilinear map is a kind of multilinear map, that is, a function $e : G_1 \times \cdots \times G_n \rightarrow G_T$ such that for any integers $a_1, \ldots, a_n$ and elements $g_i \in G_i$, $e(g_1^{a_1}, \ldots, g_n^{a_n}) = e(g_1, \ldots, g_n)^{a_1 \cdots a_n}$, and which in addition is efficiently computable and satisfies some security properties. It has several applications in cryptography, such as key exchange protocols, identity-based encryption, and broadcast encryption. There exist constructions of cryptographic 2-multilinear maps, known as bilinear maps; however, the problem of constructing such multilinear maps for $n > 2$ seems much more difficult, and the security of the proposed candidates is still unclear.
Definition
For n = 2
In this case, multilinear maps are mostly known as bilinear maps or pairings, and they are usually defined as follows. Let $G_1, G_2$ be two additive cyclic groups of prime order $q$, and $G_T$ another cyclic group of order $q$ written multiplicatively. A pairing is a map $e : G_1 \times G_2 \rightarrow G_T$, which satisfies the following properties:
Bilinearity: $\forall a, b \in \mathbb{Z}_q^*, \; \forall P \in G_1, Q \in G_2: \; e(aP, bQ) = e(P, Q)^{ab}$
Non-degeneracy: If $g_1$ and $g_2$ are generators of $G_1$ and $G_2$, respectively, then $e(g_1, g_2)$ is a generator of $G_T$.
Computability: There exists an efficient algorithm to compute $e$.
In addition, for security purposes, the discrete logarithm problem is required to be hard in both $G_1$ and $G_2$.
General case (for any n)
We say that a map $e : G_1 \times \cdots \times G_n \rightarrow G_T$ is an $n$-multilinear map if it satisfies the following properties:
All $G_i$ (for $1 \le i \le n$) and $G_T$ are groups of the same order;
if $a_1, \ldots, a_n \in \mathbb{Z}$ and $(g_1, \ldots, g_n) \in G_1 \times \cdots \times G_n$, then $e(g_1^{a_1}, \ldots, g_n^{a_n}) = e(g_1, \ldots, g_n)^{a_1 \cdots a_n}$;
the map is non-degenerate in the sense that if $g_1, \ldots, g_n$ are generators of $G_1, \ldots, G_n$, respectively, then $e(g_1, \ldots, g_n)$ is a generator of $G_T$;
there exists an efficient algorithm to compute $e$.
In addition, for security purposes, the discrete logarithm problem is required to be hard in $G_1, \ldots, G_n$.
Candidates
All the candidate multilinear maps are actually slight generalizations of multilinear maps known as graded-encoding systems, since they allow the map to be applied partially: instead of being applied to all the values at once, which would produce a value in the target set $G_T$, it is possible to apply the map to only some of the values, which generates values in intermediate target sets. For example, for $n = 3$, it is possible to compute $y = e(g_1, g_2)$ first and then $e(y, g_3) \in G_T$.
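The following Python toy illustrates only the algebra of a symmetric 3-multilinear map; it evaluates the map via brute-force discrete logarithms, so it offers no security whatsoever and bears no resemblance to how GGH13, CLT13, or GGH15 are actually constructed.

```python
p, q = 23, 11   # 2 generates the subgroup of order q = 11 inside Z_23^*
g = 2

def dlog(h):
    """Brute-force discrete log base g; feasible only at toy sizes."""
    x = 1
    for a in range(q):
        if x == h:
            return a
        x = (x * g) % p
    raise ValueError("element not in the subgroup")

def e(x, y, z):
    """Toy trilinear map: e(g^a, g^b, g^c) = g^(a*b*c)."""
    return pow(g, dlog(x) * dlog(y) * dlog(z) % q, p)

a, b, c = 3, 7, 5
x, y, z = pow(g, a, p), pow(g, b, p), pow(g, c, p)
assert e(x, y, z) == pow(g, a * b * c % q, p)          # multilinearity overall
assert e(pow(x, 4, p), y, z) == pow(e(x, y, z), 4, p)  # in the first slot
print("toy 3-multilinear map checks passed")
```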
The three main candidates are GGH13, which is based on ideals of polynomial rings; CLT13, which is based on the approximate-GCD problem and works over the integers, and hence is supposed to be easier to understand than the GGH13 multilinear map; and GGH15, which is based on graphs.
References
Cryptography
Multilinear algebra | Cryptographic multilinear map | [
"Mathematics",
"Engineering"
] | 501 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
56,835,885 | https://en.wikipedia.org/wiki/Fairfield%20Experiment | The Fairfield experiment was an experiment in industrial relations carried out at the Fairfield Shipbuilding and Engineering Company, Glasgow, during the 1960s. The experiment was initiated by Sir Iain Maxwell Stewart, industrialist, chairman of Thermotank Ltd, and signatory to the Marlow Declaration of the early 1960s, and supported by George Brown, the First Secretary in Harold Wilson's cabinet, in 1966. The company was facing closure, and Brown agreed to provide £1 million (equivalent to about £13 million in 2021) to enable the trade unions, the management, and the shareholders to try out new ways of industrial management.
The Bowler and the Bunnet
The Bowler and the Bunnet was a film directed by Sean Connery and written by Cliff Hanley about the Fairfield Experiment.
References
Operations research
Govan
Shipbuilding
Industry in Scotland | Fairfield Experiment | [
"Mathematics",
"Engineering"
] | 167 | [
"Applied mathematics",
"Operations research",
"Shipbuilding",
"Marine engineering"
] |
56,840,178 | https://en.wikipedia.org/wiki/Thermostad | A thermostad is a layer of oceanic waters that is homogeneous in terms of temperature; it is defined as a relative minimum of the vertical temperature gradient.
The term was coined in 1966 by R. Carlton Seitz, then at the Chesapeake Bay Institute of Johns Hopkins University.
He proposed it in opposition to a thermocline, in which the thermal gradient is large. The ending "stad" is from the Greek word στάδην, meaning "in an upright position", from the root ἵστημι, meaning "to stand".
The suffix "-stad" is now widely used in oceanography.
References
Oceanography | Thermostad | [
"Physics",
"Environmental_science"
] | 135 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
60,260,200 | https://en.wikipedia.org/wiki/Fariborz%20Haghighat | Fariborz Haghighat is an Iranian-Canadian academic, engineer and Distinguished Professor of Building, Civil & Environmental Engineering at Concordia University. Haghighat holds a Concordia University Research Chair (Tier I) in Energy and Environment, and he was inducted into the Provost's Circle of Distinction in 2009.
Early life, education and career
He completed his undergraduate degree in chemical engineering at the Aryamehr University of Technology (now called Sharif University of Technology) in 1975. He then moved to the United States to pursue an M.Sc.Eng. in mechanical engineering at the University of Arizona. In 1983, Haghighat began a PhD in Systems Design Engineering at the University of Waterloo. Following his doctoral studies, he worked as an NSERC Postdoctoral Fellow at the National Research Council.
In 1986, Haghighat started his work as a full-time member of the Centre for Building Studies (CBS) at the Concordia University. He was promoted to Associate Professor in 1993, and became a Full Professor in 1999. In 2019, he was made a Distinguished University Research Professor in the Department of Building, Civil and Environmental Engineering.
In 1992, Haghighat founded the International Conference on Indoor Air Quality, Ventilation and Energy Conservation in Buildings (IAQVEC).
Research
Haghighat serves as a subject matter expert on various national and international committees, and as an editorial board member of several international scientific journals. He has authored over 400 publications, including peer-reviewed journal papers, conference papers and proceedings, books, contributions to books, and technical reports.
Haghighat's many accomplishments throughout his career include:
Editor-in-chief, International Journal of Sustainable Cities and Society (SCS)
Editor-in-chief, International Journal of Energy and Built Environment (EBE)
Operating Agent, International Energy Agency, ECES Annex 31: "Energy storage with Net Zero Energy Buildings and Districts: Optimisation and Automation"
Operating Agent, International Energy Agency, ECES Annex 23: "Applying Energy Storage in Buildings of the Future"
Canadian Representative, International Energy Agency, ECES Annex 53: "Total energy use in buildings: Analysis and evaluation methods"
Member, Editorial Board of the International Journal of Ventilation
Member, Editorial Board of the International Journal of Building Simulation
International Editorial Board, Editorial Board of the International Journal of High-Rise Buildings
Honours
Haghighat has been a member of the Professional Engineers of Ontario since 1998.
Since 2002 he has been a member of ISIAQ Academy of Fellows – International Society of Indoor Air Quality and Climate (previously known as IAIAS, the International Academy of Indoor Air Sciences).
He is an Honorary Theme Editor (HTE) for Theme 1.32 – Technology, Information, and Systems Management – in the development of UNESCO's Encyclopedia of Life Support Systems (EOLSS).
He was awarded Fellow, American Society of Heating, Refrigeration and Air-Conditioning Engineers (ASHRAE) in 2007.
Haghighat received the Thomas C. Keefer Medal (2015) for his paper entitled "Efficient non-hydrostatic modelling of flow and bed shear stress in a pier scour hole", Canadian Journal of Civil Engineering, 2014, 41(5): 450–460.
In 2008, Haghighat and his wife Roya Haghighat established the "Fariborz and Roya Haghighat Entrance Scholarship in Engineering". This entrance scholarship is intended to promote and recognize academic excellence among newly admitted undergraduate students in the Gina Cody School of Engineering and Computer Science.
See also
List of University of Waterloo people
References
External links
Academic staff of Concordia University
Living people
Canadian academics in engineering
21st-century Canadian engineers
Canada Research Chairs
University of Arizona alumni
1951 births
Iranian emigrants to Canada
University of Waterloo alumni
Sharif University of Technology alumni
Iranian expatriates in the United States
Academic journal editors
Fellows of ASHRAE | Fariborz Haghighat | [
"Engineering"
] | 789 | [
"Building engineering",
"Fellows of ASHRAE"
] |
60,261,843 | https://en.wikipedia.org/wiki/Nhan%20Phan-Thien | Nhan Phan-Thien, Fellow of the Australian Academy of Science (born October 31, 1952, in An Giang, Vietnam), is an emeritus professor of mechanical engineering at the National University of Singapore. He has been an associate editor of Physics of Fluids since 2016, one of its Deputy Editors since 2023, and an editorial board member of the Journal of Non-Newtonian Fluid Mechanics. He held a Personal Chair at the University of Sydney (1991–02) and was head of the Mechanical Engineering Department at the National University of Singapore (2016–19). His contributions to the field of rheology include the PTT (Phan-Thien–Tanner) model for viscoelastic fluids (1569 citations) and its variant (694 citations). He is the author and co-author of several books on rheology.
Education
Phan-Thien graduated from the University of Sydney, Sydney, Australia, with a BEng (Mech. Eng., 1st Class Honours, University Medalist, 1975). He later completed a PhD in the Department of Mechanical Engineering at the University of Sydney in 1979.
Career
He has been a faculty member in mechanical engineering at the University of Newcastle (Australia) (1978–80) and at the University of Sydney, Sydney, Australia (1980–02). At the University of Sydney, he held a personal chair (1991–02). He was a professor of mechanical engineering at the National University of Singapore, Singapore (2000–04, 2011–present). He was the founding chair of the Bio-Engineering Division, which later became the Biomedical Engineering Department at the National University of Singapore. He has held visiting professorships at Los Alamos National Laboratory, New Mexico, US, Caltech, California, US, and Stanford University, California, US, a Qiushi Chair Professorship at Zhejiang University, Hangzhou, China, an adjunct professorship at the University of Southern Queensland, Queensland, Australia, and an honorary professorship at the University of Sydney, Sydney, Australia.
Research
Phan-Thien and his group have published extensively on the rheology of polymeric liquids, computational and constitutive modelling.
Honours and awards
Qiushi Chair Professor, Zhejiang University (2018–21);
Fellow of the ASEAN Academy of Engineering and Technology (elected 2016);
Fellow of the Australian Academy of Science (elected 1999);
Centenary Medal, awarded by the Governor General of Australia for services to Australian society and science in mechanical engineering (2001);
Gordon Bell Prize, Price-Performance category, IEEE Computer Society (1997);
Australian Society of Rheology Medal, awarded by the Australian Society of Rheology for distinguished contributions to Rheology (1997);
Edgeworth David Medal, awarded by the Royal Society of New South Wales, for distinguished research in science amongst younger workers in Applied Mechanics (1982);
Senior Fulbright Scholar (1982–83), California Institute of Technology, California, USA.
Books
Understanding Viscoelasticity: Basics of Rheology (Advanced Texts in Physics), Springer; 1st Edition 2002, 2nd Edition 2008
Understanding Viscoelasticity: An Introduction to Rheology (Graduate Texts in Physics), 3rd Edition, with Nam Mai-Duy, 2017, Springer
Understanding Viscoelasticity: An Introduction to Rheology (Graduate Texts in Physics), 2nd Edition, 2013, Springer
Microstructures in Elastic Media: Principles and Computational Methods, 1st Edition, with Sangtae Kim, 1994, Oxford University Press
Numerical Study on Some Rheological Problems of Fibre Suspensions: Numerical Simulations of Fibre Suspensions, with Xijun Fan and Roger I. Tanner, 2008, VDM Verlag Dr. Müller.
References
External links
Research Group Homepage
Google Scholar Page
Academic journal editors
American mechanical engineers
Living people
University of Sydney alumni
Rheologists
Fluid dynamicists
American materials scientists
1952 births
Australian materials scientists
Australian mechanical engineers
Fellows of the Australian Academy of Science | Nhan Phan-Thien | [
"Chemistry"
] | 799 | [
"Fluid dynamicists",
"Fluid dynamics"
] |
43,822,252 | https://en.wikipedia.org/wiki/Self-interference%20cancellation | Self-interference cancellation (SIC) is a signal processing technique that enables a radio transceiver to simultaneously transmit and receive on a single channel, a pair of partially-overlapping channels, or any pair of channels in the same frequency band. When used to allow simultaneous transmission and reception on the same frequency, sometimes referred to as “in-band full-duplex” or “simultaneous transmit and receive,” SIC effectively doubles spectral efficiency. SIC also enables devices and platforms containing two radios that use the same frequency band to operate both radios simultaneously.
Self-interference cancellation has applications in mobile networks, the unlicensed bands, cable TV, mesh networks, the military, and public safety.
In-band full-duplex has advantages over conventional duplexing schemes. A frequency division duplexing (FDD) system transmits and receives at the same time by using two (usually widely separated) channels in the same frequency band. In-band full-duplex performs the same function using half of the spectrum resources. A time division duplexing (TDD) system operates half-duplex on a single channel, creating the illusion of full-duplex communication by rapidly switching back-and-forth between transmit and receive. In-band full-duplex radios achieve twice the throughput using the same spectrum resources.
Techniques
A radio transceiver cannot cancel out its own transmit signal based solely on knowledge of what information is being sent and how the transmit signal is constructed. The signal that the receiver sees is not entirely predictable. The signal that appears at the receiver is subject to varying delays. It consists of a combination of leakage (the signal traveling directly from the transmitter to the receiver) and local reflections. In addition, transmitter components (such as mixers and power amplifiers) introduce non-linearities that generate harmonics and noise. These distortions must be sampled at the output of the transmitter. Finally, the self-interference cancellation solution must detect and compensate for real-time changes caused by temperature variations, mechanical vibrations, and the motion of things in the environment.
The transmit signal can be cancelled out at the receiver by creating an accurate model of the signal and using it to generate a new signal that when combined with the signal arriving at the receiver leaves only the desired receive signal. The precise amount of cancellation required will vary depending on the power of the transmit signal that is the source of the self-interference and the signal-to-noise ratio (SNR) that the link is expected to handle in half-duplex mode. A typical figure for Wi-Fi and cellular applications is 110 dB of signal cancellation, though some applications require greater cancellation.
Cancelling a local transmit signal requires a combination of analog and digital electronics. The strength of the transmit signal can be modestly reduced before it reaches the receiver by using a circulator (if a shared antenna is used) or antenna isolation techniques (such as cross polarization) if separate antennas are used. The analog canceller is most effective at handling strong signals with a short delay spread. A digital canceller is most effective at handling weak signals with delays greater than 1,000 nanoseconds. The analog canceller should contribute at least 60 dB of cancellation. The digital canceller must process both linear and non-linear signal components, producing about 50 dB of cancellation. Both the analog and digital cancellers consist of a number of “taps” composed of attenuators, phase shifters, and delay elements. The cost, size, and complexity of the SIC solution is primarily determined by the analog stage. Also essential are the tuning algorithms that enable the canceller to adapt to rapid changes. Cancellation algorithms typically need to adapt at the rate of once every few hundred microseconds to keep up with changes in the environment.
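As a rough illustration of the digital stage, the following Python sketch estimates a linear leakage channel from the known transmit samples by least squares and subtracts the reconstructed self-interference. The channel length, signal powers, and achieved cancellation figure are illustrative assumptions; a practical canceller must also model transmitter non-linearities and adapt continuously.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 4000, 8   # number of samples, assumed leakage-channel taps

tx = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # known Tx signal
h_true = 0.1 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
desired = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

si = np.convolve(tx, h_true)[:N]   # self-interference seen at the receiver
rx = si + desired

# Matrix of delayed transmit samples (edge rows zeroed for simplicity).
X = np.column_stack([np.roll(tx, k) for k in range(L)])
X[:L, :] = 0
h_est = np.linalg.lstsq(X, rx, rcond=None)[0]   # least-squares channel fit

valid = slice(L, None)             # skip the zeroed edge rows
residual_si = (si - X @ h_est)[valid]
db = lambda v: 10 * np.log10(np.mean(np.abs(v) ** 2))
print(f"digital cancellation: {db(si[valid]) - db(residual_si):.1f} dB")
```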
SIC can also be employed to reduce or eliminate adjacent channel interference. This allows a device containing two radios (such as a Wi-Fi access Point with two 5 GHz radios) to use any pair of channels regardless of separation. Adjacent channel interference consists of two main components. The signal on the transmit frequency, known as the blocker, may be so strong that it desensitizes a receiver listening on an adjacent channel. A strong, local transmitter also produces noise that spills over onto the adjacent channel. SIC may be used to reduce both the blocker and the noise that might otherwise prevent use of an adjacent channel.
Applications
In-band full duplex
Transmitting and receiving on exactly the same frequency at exactly the same time has multiple purposes. In-band full duplex can potentially double spectral efficiency. It permits true full duplex operation where only a single frequency is available. And it enables “listen while talking” operation (see cognitive radio, below).
Integrated access and backhaul
Though most small cells are expected to be fed using fiber optic cable, running fiber isn't always practical. Reuse of the frequencies used by a small cell to communicate with users ("access") for communication between the small cell and the network ("backhaul") will be part of the 3GPP's 5G standards. When implemented using SIC, the local backhaul radio's transmit signal is cancelled out at the small cell's receiver, and the small cell's transmit signal is cancelled out at the local backhaul radio's receiver. No changes are required to the users' devices or the remote backhaul radio. The use of SIC in this application has been successfully field-tested by Telecom Italia Mobile and Deutsche Telekom.
Satellite repeaters
SIC enables satellite repeaters to extend coverage to indoor, urban canyon, and other locations by reusing the same frequencies. This type of repeater is essentially two radios connected back-to-back. One radio faces the satellite, while the other radio faces the area not in direct coverage. The two radios relay the signals (rather than store-and-forward data bits) and must be isolated from each other to prevent feedback. The satellite-facing radio listens to the satellite and must be isolated from the transmitter repeating the signal. Likewise, the indoor-facing radio listens for indoor users and must be isolated from the transmitter repeating their signals to the satellite. SIC may be used to cancel out each radio's transmit signal at the other radio's receiver.
Full-duplex DOCSIS 3.1
Cable networks have traditionally allocated most of their capacity to downstream transmissions. The recent growth in user-generated content calls for more upstream capacity. Cable Labs developed the Full Duplex DOCSIS 3.1 standard to enable symmetrical service at speeds up to 10 Gbit/s in each direction. In DOCSIS 3.1, different frequencies are allocated for upstream and downstream transmissions, separated by a guard band. Full Duplex DOCSIS establishes a new band allowing a mix of upstream and downstream channels on adjacent channels. The headend must support simultaneous transmission and reception across the full duplex band, which requires SIC technology. The cable modems are not required to transmit and receive on the same channels simultaneously, but they are required to use different combinations of upstream and downstream channels as instructed by the headend.
Wireless mesh networks
Mesh networks are used to extend coverage (to cover entire homes) and for ad-hoc networking (emergency communication). Wireless mesh networks use a mesh topology to provide the desired coverage. The data travels from one node to another until it reaches its destination. In mesh networks using a single frequency, the data is typically store-and-forwarded, with each hop adding a delay. SIC can enable wireless mesh nodes to reuse frequencies so that the data is retransmitted (relayed) as it is received. In mesh networks using multiple frequencies, such as whole-home Wi-Fi networks using “tri-band” routers, SIC can enable greater flexibility in channel selection. Tri-band routers have one 2.4 GHz and one 5 GHz radio to communicate with client devices, and a second 5 GHz radio that is used exclusively for internode communication. Most tri-band routers use the same pair of 80 MHz channels (at opposite ends of the 5 GHz band) to minimize interference. SIC can allow tri-band routers to use any of the six 80-MHz channels in the 5 GHz band for coordination both within networks and between neighboring networks.
Military communication
The military frequently requires multiple, high power radios on the same air, land, or sea platform for tactical communication. These radios must be reliable even in the face of interference and enemy jamming. SIC enables multiple radios to operate on the same platform at the same time. SIC also has potential applications in military and vehicular radar, allowing radar systems to transmit and receive continuously rather than constantly switching between transmit and receive, yielding higher resolution. These new capabilities have been recognized as a potential 'superpower' for armed forces that may bring about a paradigm shift in tactical communications and electronic warfare.
Spectrum sharing
National regulatory agencies, such as the Federal Communications Commission in the U.S., often address the need for more spectrum resources by permitting sharing of underutilized spectrum. For instance, billions of Wi-Fi and Bluetooth devices compete for access to the ISM bands. Smartphones, Wi-Fi routers, and smart home hubs frequently support Wi-Fi, Bluetooth, and other wireless technologies in the same device. SIC technology enables these devices to operate two radios in the same band at the same time. Spectrum sharing is a topic of great interest to the mobile phone industry as it begins to deploy 5G systems.
Cognitive radio
Radios that dynamically select idle channels to make more efficient use of finite spectrum resources are the subject of considerable research. Traditional spectrum sharing schemes rely on Listen-before-talk protocols. However, when two or more radios choose to transmit on the same channel at the same time there is a collision. Collisions take time to detect and resolve. SIC enables listen-while-talking, ensuring immediate detection and faster resolution of collisions.
See also
Cognitive radio
DOCSIS
Duplex (telecommunications)
Mesh networking
Successive Interference Cancellation
References
Y. Hua, Y. Ma, A. Gholian, Y. Li, A. Cirik, P. Liang, “Radio Self-Interference Cancellation by Transmit Beamforming, All-Analog Cancellation and Blind Digital Tuning,” Signal Processing, Vol. 108, pp. 322–340, 2015.
External links
3GPP Integrated access and backhaul
CableLabs Full Duplex DOSCIS 3.1
Harris Corporation
IEEE 802.11 Full duplex topic of interest group
Kumu Networks
Radio technology
Wireless networking
Radio resource management
Radiofrequency receivers
Telecommunications engineering | Self-interference cancellation | [
"Technology",
"Engineering"
] | 2,162 | [
"Information and communications technology",
"Telecommunications engineering",
"Wireless networking",
"Computer networks engineering",
"Radio technology",
"Electrical engineering"
] |
43,823,678 | https://en.wikipedia.org/wiki/Gelfand%20ring | In mathematics, a Gelfand ring is a ring R with identity such that if I and J are distinct right ideals then there are elements i and j such that iRj = 0, i is not in I, and j is not in J. Christopher Mulvey introduced them as rings for which one could prove a generalization of Gelfand duality, and named them after Israel Gelfand.
In the commutative case, Gelfand rings can also be characterized as the rings such that, for every $a$ and $b$ summing to $1$, there exist $r$ and $s$ such that

$(1 + ra)(1 + sb) = 0$.
Moreover, their prime spectrum deformation retracts onto the maximal spectrum.
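The commutative characterization can be checked by brute force in small finite rings. The following Python sketch verifies it in Z/nZ; every Z/nZ is zero-dimensional and hence Gelfand, so the search is expected to succeed for each small n tried. It is a toy verification, not an efficient test.

```python
from itertools import product

def is_gelfand_Zn(n):
    """Check the commutative Gelfand characterization in Z/nZ by search."""
    for a in range(n):
        b = (1 - a) % n   # a + b = 1 in Z/nZ
        if not any((1 + r * a) * (1 + s * b) % n == 0
                   for r, s in product(range(n), repeat=2)):
            return False
    return True

print(all(is_gelfand_Zn(n) for n in range(2, 13)))  # expected: True
```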
References
Ring theory | Gelfand ring | [
"Mathematics"
] | 141 | [
"Fields of abstract algebra",
"Ring theory"
] |
43,825,298 | https://en.wikipedia.org/wiki/Armadillo%20projection | The armadillo projection is a map projection used for world maps. It is neither conformal nor equal-area but instead affords a view evoking a perspective projection while showing most of the globe instead of the half or less that a perspective would. The projection was presented in 1943 by Erwin Raisz (1893–1968) as part of a series of "orthoapsidal" projections, which are perspectives of the globe projected onto various surfaces. This entry in the series has the globe projected onto the outer half of half a torus. Raisz singled it out and named it the "armadillo" projection.
The toroidal shape and the angle it is viewed from tend to emphasize continental areas by eliminating or foreshortening swaths of ocean. In Raisz's original presentation, the torus is tilted so that New Zealand and Antarctica cannot be seen, as in the images here. However, in publications, the projection often develops a "pigtail" which shows the rest of Australia as well as New Zealand.
Raisz coined the term orthoapsidal as a combination of orthographic and apsidal. He used it to mean drawing a parallel-meridian network, or graticule, on any suitable solid other than a sphere, and then making an orthographic projection of that.
Formulas
Given a radius of sphere R, central meridian λ0 and a point with geographical latitude φ and longitude λ, plane coordinates x and y can be computed using the following formulas:

$x = R\,(1 + \cos\varphi)\sin\frac{\lambda - \lambda_0}{2}$

$y = R\left(\frac{1 + \sin 20^\circ - \cos 20^\circ}{2} + \sin\varphi\cos 20^\circ - (1 + \cos\varphi)\sin 20^\circ\cos\frac{\lambda - \lambda_0}{2}\right)$
In this formulation, no latitude more southerly than $\varphi_s = -\arctan\left(\cos\frac{\lambda - \lambda_0}{2} \,/\, \tan 20^\circ\right)$ should be plotted for the given longitude. The y-axis coincides with the central meridian.
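A direct transcription of these formulas into Python, with the limiting-latitude test included, might look as follows; the sample point and units are illustrative.

```python
import math

def armadillo(lat_deg, lon_deg, lon0_deg=0.0, R=1.0):
    """Armadillo projection; returns None for points below the edge."""
    phi = math.radians(lat_deg)
    half = math.radians(lon_deg - lon0_deg) / 2.0
    tilt = math.radians(20.0)
    phi_s = -math.atan(math.cos(half) / math.tan(tilt))  # southern limit
    if phi < phi_s:
        return None
    x = R * (1.0 + math.cos(phi)) * math.sin(half)
    y = R * ((1.0 + math.sin(tilt) - math.cos(tilt)) / 2.0
             + math.sin(phi) * math.cos(tilt)
             - (1.0 + math.cos(phi)) * math.sin(tilt) * math.cos(half))
    return x, y

print(armadillo(48.9, 2.35))   # Paris: roughly (0.034, 0.343)
```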
See also
List of map projections
References
External links
Description and characteristics at mapthematics.com
Map projections | Armadillo projection | [
"Mathematics"
] | 360 | [
"Map projections",
"Coordinate systems"
] |
43,825,446 | https://en.wikipedia.org/wiki/Atmosphere-breathing%20electric%20propulsion | Atmosphere-breathing electric propulsion, or air-breathing electric propulsion, abbreviated ABEP, is a propulsion technology for spacecraft which could allow thrust generation in low orbits without the need for on-board propellant, by using residual atmospheric gases as propellant. Atmosphere-breathing electric propulsion could make a new class of long-lived, low-orbiting missions feasible.
The concept is currently being investigated by the European Space Agency (ESA), the EU-funded BREATHE project at the Sant'Anna School of Advanced Studies in Pisa, and the EU-funded DISCOVERER project. Current state-of-the-art conventional electric thrusters cannot maintain flight at low altitudes for longer than about two years, because the limitations in propellant storage and in the amount of thrust generated force the spacecraft's orbit to decay. The ESA officially announced the first successful RAM-EP prototype on-ground demonstration in March 2018.
Principle of operation
An ABEP system consists of an intake and an electric thruster: the rarefied gases that are responsible for drag in low Earth orbit (LEO) and very low Earth orbit (VLEO) are used as the propellant. This technology would ideally allow spacecraft to orbit at very low altitudes (below about 400 km around the Earth) without the need for on-board propellant, enabling longer missions in an otherwise inaccessible range of altitudes. This advantage makes the technology of interest for scientific missions and for military and civil surveillance services, as well as for low-orbit communication services with even lower latency than Starlink.
A special intake collects the gas molecules and directs them to the thruster. The molecules are then ionized by the thruster and expelled from the acceleration stage at very high velocity, generating thrust. The electric power needed can be provided by the same power subsystems developed for current electric propulsion systems, likely a combination of solar arrays and batteries, though other kinds of electric power subsystems can be considered. An ABEP could extend the lifetime of satellites in LEO and VLEO by compensating the atmospheric drag during their time of operation. The altitude for an Earth-orbiting ABEP can be optimised between 120–250 km. This technology could also be used on any planet with an atmosphere, such as Mars or Venus, if the thruster can process other propellants and if the power source can provide the required power (e.g. sufficient solar irradiation for the solar panels); otherwise other electric power subsystems such as a space nuclear reactor or radioisotope thermoelectric generator (RTG) would have to be used, for example for a mission around Titan.
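To get a feel for the thrust levels involved, here is a sketch of the drag an ABEP must continuously offset in VLEO; the density, drag coefficient, and frontal area below are illustrative assumptions rather than mission figures.

```python
import math

# Illustrative VLEO drag estimate (all inputs are assumed, not mission data).
rho = 2.5e-10      # kg/m^3, rough atmospheric density near 200 km altitude
v = 7.8e3          # m/s, approximate orbital speed in low Earth orbit
Cd = 2.2           # -, typical free-molecular drag coefficient assumption
A = 0.5            # m^2, assumed frontal (ram) area of the spacecraft

# Drag force the thruster must compensate: D = 1/2 * rho * v^2 * Cd * A
drag = 0.5 * rho * v**2 * Cd * A
print(f"Drag to offset: {drag * 1e3:.2f} mN")  # on the order of milli-newtons
```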
Concepts and modelling
The first studies considering the collection and use of the upper atmosphere as propellant for an electric thruster date back to 1959, with S. T. Demetriades' work on the propulsive fluid accumulator.
In the development of atmosphere-breathing ion engines, a notable extension of Child's Law led to its implementation in the ABEP concept in 1995. Originally, Child's Law modeled the flow of charge between an anode and a cathode with the assumption that the initial velocity of ions was zero. This assumption, however, is not applicable to ion thrusters operating in low Earth orbit, where ambient gas enters the ionization chamber at high velocities.
Buford Ray Conley provided a generalization of Child's Law that accounts for a non-zero initial velocity of ions. This adaptation has been significant for the theoretical modeling of ion propulsion systems, particularly those that operate in the rarefied conditions of low Earth orbit.
The generalization of Child's Law has implications for the design and efficiency of atmosphere-breathing ion thrusters. By accounting for the high-velocity ambient gas that enters the ionization chamber in low Earth orbit, the modified law allows for more accurate theoretical modeling. Once the ambient gas is ionized in the chamber, it is electromagnetically accelerated out of the exhaust, contributing to the propulsion of the spacecraft.
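For context, the classical Child–Langmuir law, the zero-initial-velocity case that the generalization above relaxes, gives the space-charge-limited current density between two electrodes:

```latex
J \;=\; \frac{4\,\varepsilon_0}{9}\,\sqrt{\frac{2q}{m_i}}\;\frac{V^{3/2}}{d^{2}}
```

with accelerating voltage V, electrode gap d, and ion charge q and mass m_i; the generalized form modifies the V^{3/2} scaling to account for ions entering the gap with non-zero velocity.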
Development and testing
European projects
ESA's RAM-EP, designed and developed by SITAEL in Italy, was first tested in laboratory in May 2017.
The Institute of Space Systems at the University of Stuttgart is developing the intake and the thruster, the latter being the RF helicon-based plasma thruster (IPT), which was ignited for the first time in March 2020 (see the IRS Uni Stuttgart press release). Such a device has the main advantage of having no components in direct contact with the plasma, which minimizes performance degradation over time due to erosion from aggressive propellants such as atomic oxygen in VLEO, and it does not require a neutralizer. Intake and thruster are developed within the DISCOVERER EU H2020 project.
Intakes have been designed in multiple studies, based on free-molecular-flow conditions and on gas-surface interaction models: with intake materials that reflect specularly, high efficiencies can theoretically be achieved using telescope-like designs. With fully diffuse reflection, efficiencies are generally lower, but a trapping mechanism can enhance the pressure distribution in front of the thruster.
The UK-based start-up NewOrbit Space has been developing an air-breathing electric propulsion system since 2021, achieving several milestones in the process. Notably, NewOrbit became the first in the industry to successfully operate and neutralize an ion engine entirely on atmospheric air in a vacuum chamber. Initial testing results have shown a specific impulse of 6,380 seconds, with the engine accelerating incoming air to speeds exceeding 200,000 km/h. This breakthrough enables the propulsion system to generate enough thrust to overcome atmospheric drag in very low Earth orbit, allowing sustainable spacecraft operation at altitudes below 200 km.
US & Japanese work
Busek Co. Inc. in the U.S. patented its concept of an Air Breathing Hall Effect Thruster (ABHET) in 2004 and, with funding from the NASA Institute for Advanced Concepts, in 2011 started a feasibility study applying it to Mars (Mars-ABHET or MABHET), where the system would breathe and ionize atmospheric carbon dioxide. The MABHET concept is based on the same general principles as JAXA's Air Breathing Ion Engine (ABIE) or ESA's RAM-EP.
See also
Ion-propelled aircraft
References
Ion engines | Atmosphere-breathing electric propulsion | [
"Physics",
"Chemistry"
] | 1,307 | [
"Ions",
"Ion engines",
"Matter"
] |
48,689,154 | https://en.wikipedia.org/wiki/Cam%20shedding | Cam shedding, also known as tappet shedding, is the control of the movement of heald shafts in weaving simple constructions by means of cams or tappets.
In positive cam shedding, the heddle (or heald) shafts are both raised and lowered by the tappets.
In negative cam shedding, the heald shafts are either raised or lowered by the mechanism but are returned by the action of an external device, usually springs. In theory, tappet shedding can control up to 20 heald shafts, but this maximum is not reached in practice.
References
Bibliography
Textile Terms and Definitions, 11th revised edition, The Textile Institute, 2002.
Principles of Weaving, R. Marks and A.T.C. Robinson, The Textile Institute, 1986.
Weaving equipment | Cam shedding | [
"Engineering"
] | 161 | [
"Weaving equipment"
] |
48,692,933 | https://en.wikipedia.org/wiki/Zero%20Gradient%20Synchrotron | The Zero Gradient Synchrotron (ZGS) was a weak-focusing 12.5 GeV proton accelerator that operated at the Argonne National Laboratory in Illinois from 1964 to 1979.
It enabled pioneering experiments in particle physics, in the areas of
quark model tests;
neutrino physics (the first observation of a neutrino interaction in its 12-foot hydrogen bubble chamber, in 1970);
spin physics of hadrons (utilizing a polarized accelerated proton beam in the GeV range for the first time); and
Kaon decays.
Other noteworthy features of the ZGS program were the large number of university-based users and the pioneering development of large superconducting magnets for bubble chambers and beam transport.
The hardware and building of the ZGS were ultimately inherited by a spallation neutron source program, the Intense Pulsed Neutron Source (IPNS).
In media
Significant portions of the 1996 chase film Chain Reaction were shot in the Zero Gradient Synchrotron ring room and the former Continuous Wave Deuterium Demonstrator laboratory.
References
Symposium on the 30th Anniversary of the ZGS Startup, Malcolm Derrick (ed), ANL-HEP-CP-96-12, 1994.
History of the ZGS, J. Day et al. (eds), AIP Conference Proceedings 60, AIP, New York, 1980.
Particle physics facilities
Argonne National Laboratory | Zero Gradient Synchrotron | [
"Physics"
] | 286 | [
"Particle physics stubs",
"Particle physics"
] |
48,698,109 | https://en.wikipedia.org/wiki/Lithium%20toxicity | Lithium toxicity, also known as lithium overdose, is the condition of having too much lithium. Symptoms may include a tremor, increased reflexes, trouble walking, kidney problems, and an altered level of consciousness. Some symptoms may last for a year after levels return to normal. Complications may include serotonin syndrome.
Lithium toxicity can occur due to excessive intake or decreased excretion. Excessive intake may be either a suicide attempt or accidental. Decreased excretion may occur as a result of dehydration such as from vomiting or diarrhea, a low sodium diet, or from kidney problems. The diagnosis is generally based on symptoms and supported by a lithium level in blood serum of greater than 1.2 mEq/L.
Gastric lavage and whole bowel irrigation may be useful if done early. Activated charcoal is not effective. For severe toxicity hemodialysis is recommended. The risk of death is generally low. Acute toxicity generally has better outcomes than chronic toxicity. In the United States about 5,000 cases are reported to poison control centers a year. Lithium toxicity was first described in 1898.
Signs and symptoms
Symptoms of lithium toxicity can be mild, moderate, or severe.
Mild symptoms, such as nausea, fatigue, and tremor, occur at levels of 1.5 to 2.5 mEq/L in blood serum. Moderate symptoms, including confusion, an increased heart rate, and low muscle tone, occur at levels of 2.5 to 3.5 mEq/L. Severe symptoms, including coma, seizures, low blood pressure, and increased body temperature, occur at lithium concentrations greater than 3.5 mEq/L. When lithium overdoses produce neurological deficits or cardiac toxicity, the symptoms are considered serious and can be fatal.
Acute toxicity
In acute toxicity, people have primarily gastrointestinal symptoms such as vomiting and diarrhea, which may result in volume depletion. During acute toxicity, lithium distributes later into the central nervous system causing dizziness and other mild neurological symptoms.
Chronic toxicity
In chronic toxicity, people have primarily neurological symptoms which include nystagmus, tremor, hyperreflexia, ataxia, and change in mental status. During chronic toxicity, the gastrointestinal symptoms seen in acute toxicity are less prominent. The symptoms are often vague and nonspecific.
Acute on chronic toxicity
In acute on chronic toxicity, people have symptoms of both acute and chronic toxicity.
Complications
People who survive an intoxication episode may develop persistent health problems. This group of persistent health symptoms are called syndrome of irreversible lithium-effectuated neurotoxicity (SILENT). The syndrome presents with irreversible neurological and neuro-psychiatric effects. The neurological signs are cerebellar dysfunction, extrapyramidal symptoms, and brainstem dysfunction. The neuro-psychiatric findings present with memory deficits, cognitive deficits, and sub-cortical dementia. For a diagnosis, the syndrome requires the absence of prior symptoms and persistence of symptoms for greater than 2 months after cessation of lithium.
Pathophysiology
Lithium is readily absorbed from the gastrointestinal tract. It is distributed to the body with higher levels in the kidney, thyroid, and bone as compared to other tissues. Since lithium is almost exclusively excreted by the kidneys, people with preexisting chronic kidney disease are at high risk of developing lithium intoxication. The drug itself is also known to be nephrotoxic, opening up the possibility of spontaneous emergence of toxicity at doses that were previously well-tolerated. Lithium toxicity can be mistaken for other syndromes associated with antipsychotic use, such as serotonin syndrome because lithium increases serotonin metabolites in the cerebrospinal fluid.
There are several drug interactions with lithium. Interactions can occur from typical antipsychotics or atypical antipsychotics. In particular, certain drugs enhance lithium levels by increasing renal re-absorption at the proximal tubule. These drugs are angiotensin-converting enzyme inhibitors, non-steroidal anti-inflammatory drugs and thiazide diuretics.
Diagnosis
The diagnosis is generally based on symptoms and supported by a blood lithium level. Blood levels are most useful six to twelve hours after the last dose. The normal serum lithium level in those on treatment is between 0.6 and 1.2 mEq/L. Some blood collection tubes contain lithium heparin, which may result in falsely elevated results.
When lithium toxicity is suspected tests may include:
fingerstick glucose
serum lithium concentration
basic metabolic panel to assess renal function
serum acetaminophen and salicylate concentrations to rule out other sources of acute ingestion
urine pregnancy test, to ensure that management does not endanger a pregnancy
Imaging tests are not helpful.
Treatment
If the person's lithium toxicity is mild or moderate, lithium dosage is reduced or stopped entirely. If the toxicity is severe, lithium may need to be removed from the body. The removal of lithium is done in a hospital emergency department. It may involve:
Gastric lavage. A tube is placed through the nose or mouth into the stomach. The tube is used to remove lithium that has not been digested yet. It may also be used to put medicines directly into the stomach to help stop lithium from being absorbed.
Use of an artificial kidney to clean the blood (dialysis). This is usually done only in the most severe cases.
Diuretic medications such as furosemide, together with hydration via intravenous normal saline, appear to be effective in speeding the removal of lithium and also rehydrate patients who have lost fluids.
Hemodialysis. Hemodialysis is widely advocated as a means of reducing the risk of permanent neurological sequelae following lithium poisoning. Although hemodialysis clearly enhances the elimination of lithium, it is unclear whether this translates into improved patient outcomes.
See also
Biology and pharmacology of chemical elements
Diabetes insipidus
References
Biology and pharmacology of chemical elements
Bipolar disorder
Depression (mood)
Element toxicology
Lithium
Lithium in biology
Mood disorders
Toxic effects of metals
Toxicology | Lithium toxicity | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,254 | [
"Pharmacology",
"Element toxicology",
"Toxicology",
"Lithium in biology",
"Properties of chemical elements",
"Biology and pharmacology of chemical elements",
"Biochemistry"
] |
48,700,463 | https://en.wikipedia.org/wiki/Pooling%20equilibrium | A pooling equilibrium in game theory is an equilibrium outcome of a signaling game.
In a signaling game, players send actions called "signals" to other players. These signals are based on privately held information that is not known to the others in the game. Such actions can reveal a player's "type" to other players, who then choose their strategies accordingly. In a pooling equilibrium, however, all types of a given sender send the same signal. Some senders represent their true type, while others successfully mimic the type of others, having no incentive to differentiate themselves. As a result, the receiver acts as if they have received no information, maximizing their utility according to their prior beliefs.
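A minimal sketch of how pooling can be verified numerically, using hypothetical payoffs in the spirit of Cho and Kreps' beer-quiche game; the payoff numbers, prior, and off-path belief are all illustrative assumptions.

```python
# Two sender types pool on "beer"; the receiver best-responds to the prior.
PRIOR = {"strong": 0.9, "weak": 0.1}   # receiver's prior over sender types

def sender_payoff(t, signal, action):
    # Each type gets 1 for its preferred breakfast, plus 2 if not fought.
    likes = 1 if (t == "strong") == (signal == "beer") else 0
    return likes + (2 if action == "retreat" else 0)

def receiver_payoff(t, action):
    if action == "retreat":
        return 0
    return 1 if t == "weak" else -1    # fighting only pays against the weak type

def best_response(belief_strong):
    # Receiver fights only if its expected payoff beats retreating (which pays 0).
    ev_fight = (belief_strong * receiver_payoff("strong", "fight")
                + (1 - belief_strong) * receiver_payoff("weak", "fight"))
    return "fight" if ev_fight > 0 else "retreat"

# Candidate pooling equilibrium: both types send "beer".
responses = {
    "beer": best_response(PRIOR["strong"]),  # on-path belief = prior -> "retreat"
    "quiche": best_response(0.0),            # pessimistic off-path belief -> "fight"
}
for t in PRIOR:
    eq = sender_payoff(t, "beer", responses["beer"])
    dev = sender_payoff(t, "quiche", responses["quiche"])
    print(f"{t}: pooling {eq} vs deviating {dev} -> "
          f"{'no profitable deviation' if eq >= dev else 'PROFITABLE deviation'}")
```

Both types prefer the pooled signal given these responses, so neither deviates and the receiver learns nothing from the signal.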
See also
Separating equilibrium
References
Game theory equilibrium concepts
Asymmetric information | Pooling equilibrium | [
"Physics",
"Mathematics"
] | 155 | [
"Asymmetric information",
"Game theory",
"Asymmetry",
"Game theory equilibrium concepts",
"Symmetry"
] |
45,693,651 | https://en.wikipedia.org/wiki/Application%20of%20silicon-germanium%20thermoelectrics%20in%20space%20exploration | Silicon-germanium (SiGe) thermoelectrics have been used for converting heat into electrical power in spacecraft designed for deep-space NASA missions since 1976. This material is used in the radioisotope thermoelectric generators (RTGs) that power Voyager 1, Voyager 2, Galileo, Ulysses, Cassini, and New Horizons spacecraft. SiGe thermoelectric material converts enough radiated heat into electrical power to fully meet the power demands of each spacecraft. The properties of the material and the remaining components of the RTG contribute towards the efficiency of this thermoelectric conversion.
Properties
Heavily doped semiconductors, such as silicon-germanium (SiGe) thermoelectric couples (also called thermocouples or unicouples), are used in space exploration.
SiGe alloys present good thermoelectric properties. Their performance in thermoelectric power production is characterized by a high dimensionless figure of merit (ZT) at high temperatures, which has been shown to approach 2 in some nanostructured SiGe models.
SiGe alloy devices are mechanically rugged and can withstand severe shock and vibration due to their high tensile strength (>7000 psi) and low dislocation density. SiGe can be worked with standard metallurgical equipment and bonds easily during component construction. SiGe alloy devices can operate at high temperatures (>1300 °C) without degradation due to their electronic stability, low thermal expansion coefficient, and high oxidation resistance.
Near the Sun, solar cell performance deteriorates under the high incident particle flux and the high temperatures caused by the heat flux. However, thermoelectric energy conversion systems that use thermoelectric materials (e.g. SiGe alloys) as a supplemental source of power for missions near the Sun can operate unprotected in vacuum and air environments at high temperatures due to their low sensitivity to radiation damage. Such properties have made SiGe thermoelectrics convenient for power generation in space.
The multifoil cold stack assembly, composed of molybdenum, tungsten, stainless steel, copper, and alumina materials, provides the insulation between the electrical and thermal currents of the system. The SiGe n-leg, doped with phosphorus, and the SiGe p-leg, doped with boron, act as the intermediary between the heat source and the electrical assembly.
Power generation
SiGe thermocouples in an RTG convert heat directly into electricity. Thermoelectric power generation requires a constantly maintained temperature difference among the junctions of the two dissimilar metals (i.e. Si and Ge) to produce a low power closed circuit electric current without extra circuitry or external power sources.
A large array of SiGe thermocouples/unicouples form a thermopile that was incorporated into the design of radioisotope thermoelectric generators (RTGs) used in the missions Voyager, Galileo, Ulysses, Cassini, and New Horizons. On these spacecraft, Pu-238 dioxide fuel undergoes natural decay. The SiGe thermocouples/unicouples convert this heat to hundreds of Watts of electrical power.
Thermocouple/unicouple assembly
The thermocouples/unicouples attached to the outer shell consist of a SiGe alloy n-leg doped with phosphorus and a SiGe p-leg doped with boron to provide thermoelectric polarity to the couple. The electrical and thermal currents of the system are separated by bonding the SiGe alloy thermocouple to a multifoil cold stack assembly of molybdenum, tungsten, stainless steel, copper, and alumina components. Several layers of Astroquartz silica fiber yarn electrically insulate the legs of the SiGe thermocouples. In between the inner insulation system and the outer shell, copper connectors form the electrical circuit, which uses a two-string, series-parallel wiring design to connect the unicouples. The circuit loop arrangement minimizes the net magnetic field of the generator.
Application history
SiGe has been used as a material in RTGs since 1976. Each mission that has used RTG technology involves exploration of far-reaching regions of the solar system. The most recent such mission, New Horizons (launched 2006), was originally set for a 3-year exploration, but was extended to 17 years.
Multi-hundred-watt (MHW) applications
Voyager 1 and Voyager 2, launched in August and September 1977, required multi-hundred-watt (MHW) RTGs containing plutonium oxide fuel spheres for an operational life suited to the exploration of Jupiter, Saturn, Uranus, and Neptune. Conversion of the plutonium's decay heat to electrical power was accomplished through 312 silicon-germanium (SiGe) thermoelectric couples. A hot junction temperature of 1273 K (1832 °F) and a cold junction temperature of 573 K (572 °F) define the temperature gradient across the thermoelectric couples in the RTG. This mechanism provided the total electrical power to operate the spacecraft's instruments, communications, and other power demands. The RTGs on the Voyagers were expected to produce adequate electrical power for spacecraft operation until about the year 2020. Similar MHW-RTG models were also used on the two U.S. Air Force communications Lincoln Experimental Satellites 8 and 9 (LES-8/9).
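As a rough check on these numbers, the sketch below evaluates the standard expression for the maximum conversion efficiency of a thermoelectric couple operating between the quoted junction temperatures; the device-average ZT values are assumptions for illustration, not flight data.

```python
import math

def max_efficiency(t_hot, t_cold, zt_avg):
    """Maximum thermoelectric efficiency: Carnot factor times a ZT-dependent term."""
    carnot = 1.0 - t_cold / t_hot
    root = math.sqrt(1.0 + zt_avg)
    return carnot * (root - 1.0) / (root + t_cold / t_hot)

t_hot, t_cold = 1273.0, 573.0       # K, the MHW-RTG junction temperatures above
for zt in (0.5, 0.8, 2.0):          # assumed flight-era SiGe vs. nanostructured claims
    print(f"ZT = {zt:3.1f}: eta_max = {max_efficiency(t_hot, t_cold, zt):.1%}")
```

With ZT near 0.5 this gives an efficiency of roughly 7%, consistent with the few-percent conversion typical of flight RTGs.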
General purpose heat source (GPHS) applications
The Galileo spacecraft launched on October 18, 1989, the Ulysses on October 6, 1990, the Cassini on October 15, 1997, and the New Horizons on January 19, 2006. All of these spacecraft contain the general purpose heat source (GPHS) RTG commissioned by the U.S. Department of Energy. The GPHS-RTG employs identical heat-to-electrical conversion technology used in the MHW-RTGs from the Voyager missions, using SiGe thermocouples/unicouples and the Pu-238–fueled GPHS. New Horizons made its historic flyby past Pluto and its moons on July 14, 2015 (see JHU Applied Physics website). The spacecraft's next destination will be a small Kuiper Belt object (KBO) known as 486958 Arrokoth that orbits nearly a billion miles beyond Pluto. Based on performance, data and modeling for the SiGe alloy RTGs, the GPHS-RTGs on Ulysses, Cassini and New Horizons are expected to meet or exceed the remaining power performance requirements for their deep-space missions.
RTG alternative
Missions after 2010 requiring RTGs will instead use the multi-mission radioisotope thermoelectric generator (MMRTG) containing lead telluride (PbTe) thermocouples and Pu-238 dioxide for spacecraft power applications.
See also
Thermoelectric cooling
Advanced Stirling Radioisotope Generator
Radioisotope heater units
References
Nuclear power in space
Thermoelectricity
Nuclear technology
Spacecraft components
Inorganic chemistry | Application of silicon-germanium thermoelectrics in space exploration | [
"Physics",
"Chemistry"
] | 1,467 | [
"Nuclear technology",
"nan",
"Nuclear physics"
] |
45,696,716 | https://en.wikipedia.org/wiki/Sheaf%20of%20planes | In mathematics, a sheaf of planes is the set of all planes that share a common line. It may also be known as a fan of planes or a pencil of planes.
When extending the concept of line to the line at infinity, a set of parallel planes can be seen as a sheaf of planes intersecting in a line at infinity. To distinguish it from the more general definition, the adjective parallel can be added to it, resulting in the expression parallel sheaf of planes.
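Concretely, this is the classical parametrization of a pencil of planes: fixing two distinct members of the sheaf, every other member is a linear combination of their equations.

```latex
\pi_1 : a_1x + b_1y + c_1z + d_1 = 0, \qquad
\pi_2 : a_2x + b_2y + c_2z + d_2 = 0

\lambda\,(a_1x + b_1y + c_1z + d_1) + \mu\,(a_2x + b_2y + c_2z + d_2) = 0,
\qquad (\lambda,\mu) \neq (0,0)
```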
See also
Book embedding, a notion of graph embedding onto sheaves of half-planes
Notes
Planes (geometry) | Sheaf of planes | [
"Mathematics"
] | 126 | [
"Planes (geometry)",
"Mathematical objects",
"Infinity"
] |
45,698,778 | https://en.wikipedia.org/wiki/Dodecaborate | The dodecaborate(12) anion, [B12H12]2−, is a borane with an icosahedral arrangement of 12 boron atoms, with each boron atom being attached to a hydrogen atom. Its symmetry is classified by the molecular point group Ih.
Synthesis and reactions
The existence of the dodecaborate(12) anion, [B12H12]2−, was predicted by H. C. Longuet-Higgins and M. de V. Roberts in 1955. Hawthorne and Pitochelli first made it 5 years later, by the reaction of 2-iododecaborane with triethylamine in benzene solution at 80 °C. It is more conveniently prepared in two steps from sodium borohydride. First the borohydride is converted into a triborate anion using the etherate of boron trifluoride:
5 NaBH4 + BF3 → 2 NaB3H8 + 3 NaF + 2 H2
Pyrolysis of the triborate gives the twelve-boron cluster as the sodium salt. A variety of other synthetic methods have been published.
Salts of the dodecaborate ion are stable in air and do not react with hot aqueous sodium hydroxide or hydrochloric acid. The anion can be electrochemically oxidised to [B24H23]3−.
Substituted derivatives
Salts of [B12H12]2− undergo hydroxylation with hydrogen peroxide to give salts of [B12(OH)12]2−. The hydrogen atoms in the ion [B12H12]2− can be replaced by the halogens with various degrees of substitution. The following numbering scheme is used to identify the products. The first boron atom is numbered 1, then the closest ring of five atoms around it is numbered anticlockwise from 2 to 6. The next ring of boron atoms is numbered starting from 7 for the atoms closest to numbers 2 and 3, counting anticlockwise to 11. The atom opposite the original is numbered 12. A related derivative is [B12(CH3)12]2−. The icosahedron of boron atoms is aromatic in nature.
Under kilobar pressure of carbon monoxide [B12H12]2− reacts to form the carbonyl derivatives [B12H11CO]− and the 1,12- and 1,7-isomers of B12H10(CO)2. The para disubstitution at the 1,12 is unusual. In water the dicarbonyls appear to form carboxylic ions: [B12H10(CO)CO2H]− and [B12H10(CO2H)2]2−.
A perfluoroborane derivative (with the hydrogen atoms replaced by fluorine atoms) is also known.
Potential applications
Compounds based on the ion [B12H12]2− have been evaluated for solvent extraction of the radioactive ions 152Eu3+ and 241Am3+.
[B12H12]2−, [B12(OH)12]2− and [B12(OMe)12]2− show promise for use in drug delivery. They form "closomers", which have been used to make nontargeted high-performance MRI contrast agents which are persistent in tumor tissue.
Salts of [B12H12]2− are potential therapeutic agents in cancer treatment. For applications in boron neutron capture therapy, derivatives of closo-dodecaborate increase the specificity of neutron irradiation treatment. Neutron irradiation of boron-10 leads to the emission of an alpha particle near the tumor.
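The capture reaction behind this therapy (standard nuclear data, independent of the cited studies) is:

```latex
{}^{10}\mathrm{B} + n_{\mathrm{th}} \;\longrightarrow\; {}^{7}\mathrm{Li} + {}^{4}\mathrm{He}\,(\alpha),
\qquad Q \approx 2.3\text{--}2.8\ \mathrm{MeV}
```

The short ranges of the lithium and helium fragments (on the order of a cell diameter) are what confine the damage to the boron-loaded tumor tissue.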
References
Boranes
Anions
Substances discovered in the 1960s | Dodecaborate | [
"Physics",
"Chemistry"
] | 782 | [
"Ions",
"Matter",
"Anions"
] |
45,700,548 | https://en.wikipedia.org/wiki/Two-dimensional%20semiconductor | A two-dimensional semiconductor (also known as a 2D semiconductor) is a type of natural semiconductor with a thickness on the atomic scale. Geim and Novoselov et al. initiated the field in 2004 when they reported graphene, a new semiconducting material consisting of a flat monolayer of carbon atoms arranged in a 2D honeycomb lattice. A 2D monolayer semiconductor is significant because it exhibits stronger piezoelectric coupling than the traditionally employed bulk forms, which could enable applications such as sensing and energy harvesting. One research focus is on designing nanoelectronic components by the use of graphene as electrical conductor, hexagonal boron nitride as electrical insulator, and a transition metal dichalcogenide as semiconductor.
Materials
Graphene
Graphene, consisting of single sheets of carbon atoms, has high electron mobility and high thermal conductivity. One issue with graphene is its lack of a band gap, which is a problem in particular for digital electronics because graphene-based field-effect transistors (FETs) cannot be fully switched off.
Hexagonal boron nitride
Monolayer hexagonal boron nitride (h-BN) is an insulator with a large energy gap (5.97 eV). However, it can also function as a semiconductor with enhanced conductivity due to its sharp zigzag edges and vacancies. h-BN is often used as a substrate and barrier material because of its insulating properties. h-BN also has a large thermal conductivity.
Transition-metal dichalcogenides
Transition-metal dichalcogenide monolayers (TMDs or TMDCs) are a class of two-dimensional materials with the chemical formula MX2, where M represents a transition metal from group IV, V, or VI, and X represents a chalcogen such as sulfur, selenium, or tellurium. MoS2, MoSe2, MoTe2, WS2, and WSe2 are TMDCs. TMDCs have a layered structure, with a plane of metal atoms between two planes of chalcogen atoms. Each layer is bonded strongly in-plane but only weakly to neighboring layers. Therefore, TMDCs can easily be exfoliated into atomically thin layers by various methods. TMDCs show layer-dependent optical and electrical properties. When exfoliated into monolayers, the band gaps of several TMDCs change from indirect to direct, which leads to broad applications in nanoelectronics, optoelectronics, and quantum computing. While exfoliated TMDC monolayers exhibit promising optoelectronic properties, they are often limited by intrinsic and extrinsic defects, such as sulfur vacancies and grain boundaries, which can negatively affect their performance. To address these issues, various chemical passivation techniques, including the use of superacids and thiol molecules, have been developed to enhance their photoluminescence and charge transport properties. Additionally, phase and strain engineering have emerged as powerful strategies to further optimize the electronic characteristics of TMDCs, making them more suitable for advanced applications in nanoelectronics and quantum computing.
III-VI chalcogenides
Another class of 2D semiconductors are III-VI chalcogenides. These materials have the chemical formula MX, where M is a metal from group 13 (Ga, In) and X is a chalcogen atom (S, Se, Te). Typical members of this group are InSe and GaSe, both of which have shown high electronic mobilities and band gaps suitable for a wide range of electronic applications.
Synthesis
2D semiconductor materials are often synthesized using chemical vapor deposition (CVD). Because CVD can provide large-area, high-quality, and well-controlled layered growth of 2D semiconductor materials, it also allows the synthesis of two-dimensional heterojunctions. When building devices by stacking different 2D materials, mechanical exfoliation followed by transfer is often used. Other possible synthesis methods include electrochemical deposition, chemical exfoliation, hydrothermal synthesis, and thermal decomposition. In 2008, quasi-2D cadmium selenide (CdSe) platelets were first synthesized by a colloidal method, with thicknesses of several atomic layers and lateral sizes up to dozens of nanometers. Modification of the procedure allowed other nanoparticles to be obtained with different compositions (such as CdTe, HgSe, CdSexS1−x alloys, and core/shell and core/crown heterostructures) and shapes (such as scrolls, nanoribbons, etc.).
Mechanical Behavior
The unique crystal structures of 2D semiconductor materials often yield unusual mechanical properties, especially in the monolayer limit, such as high stiffness and strength in the 2D atomic plane but low flexural rigidity. Testing these materials is more challenging than for their bulk counterparts, with methods employing scanning probe techniques such as atomic force microscopy (AFM). These experiments are typically performed on 2D materials suspended over holes in a substrate. The tip of the AFM is then used to press into the flake and measure the response of the material. From this response, mechanical properties such as the Young's modulus, yield strain, and flexural strength can be extracted.
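A sketch of how such AFM indentation data are often reduced: a point-load membrane model with a pretension term and a cubic stiffening term, in the form widely used in the graphene indentation literature. The coefficient q, the material values, and the membrane radius below are assumptions to verify against the original papers.

```python
import numpy as np

# Point-load indentation of a clamped circular membrane of radius a:
#   F(d) = sigma0 * pi * d + E2D * q**3 * d**3 / a**2
# sigma0: 2D pretension (N/m); E2D: 2D Young's modulus (N/m);
# q depends on the Poisson ratio nu. All numbers are illustrative.
nu = 0.165                                # assumed graphite-like Poisson ratio
q = 1.0 / (1.05 - 0.15 * nu - 0.16 * nu**2)
a = 0.5e-6                                # m, suspended-membrane radius (assumed)

def force(d, sigma0, e2d):
    """Indentation force (N) at depth d (m) for given pretension and 2D modulus."""
    return sigma0 * np.pi * d + e2d * q**3 * d**3 / a**2

d = np.linspace(0, 100e-9, 5)             # indentation depths up to 100 nm
print(force(d, 0.1, 340.0))               # forces in newtons (micro-newton scale)
```

Fitting measured force-depth curves to this form yields the pretension and the 2D elastic modulus of the suspended flake.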
Graphene
With a Young's modulus of almost 1 TPa, graphene is exceptionally stiff and strong owing to its carbon-carbon bonding. However, graphene has a fracture toughness of only about 4 MPa·m1/2, making it brittle and easy to crack. The same group that measured its fracture toughness later showed that graphene has a remarkable ability to distribute applied force, about ten times that of steel.
Atomically thin boron nitride
Monolayer boron nitride has a fracture strength of 70.5 GPa and a Young's modulus of 0.865 TPa. Boron nitride also maintains its high Young's modulus and fracture strength with increasing thickness.
Transition metal dichalcogenides
2D transition metal dichalcogenides are often used in applications such as flexible and stretchable electronics, where an understanding of their mechanical properties, and of the operational impact of mechanical changes to the materials, is paramount for device performance. Under strain, TMDs change their electronic band structure, in both the direct-gap monolayer and indirect-gap few-layer cases, indicating that applied strain can serve as a tuning parameter. Monolayer MoS2 has a Young's modulus of 270 GPa and a maximum strain of about 10% before yield. In comparison, bilayer MoS2 has a Young's modulus of 200 GPa, attributed to interlayer slip. As the layer number increases further, interlayer slip is overshadowed by the bending rigidity, giving a Young's modulus of 330 GPa.
Proposed applications
Some applications include electronic devices, photonic and energy-harvesting devices, and flexible and transparent substrates. Others include quantum computing qubit devices, solar cells, and flexible electronics.
Quantum computing
Theoretical work has predicted the control of the band edges hybridization on some van der Waals heterostructures via electric fields and proposed its usage in quantum bit devices, considering the ZrSe2/SnSe2 heterobilayer as an example. Further experimental work has confirmed these predictions for the case of the MoS2/WS2 heterobilayer.
Magnetic NEMS
2D layered magnetic materials are attractive building blocks for nanoelectromechanical systems (NEMS): while they share high stiffness and strength and low mass with other 2D materials, they are also magnetically active. Among the large class of newly emerged 2D layered magnetic materials, of particular interest is few-layer CrI3, whose magnetic ground state consists of antiferromagnetically coupled ferromagnetic (FM) monolayers with an out-of-plane easy axis. The interlayer exchange interaction is relatively weak; a magnetic field on the order of 0.5 T in the out-of-plane (z) direction can induce a spin-flip transition in bilayer CrI3. Remarkable phenomena and device concepts based on detecting and controlling the interlayer magnetic state have recently been demonstrated, including spin-filter giant magnetoresistance, magnetic switching by electric field or electrostatic doping, and spin transistors. The coupling between the magnetic and mechanical properties in atomically thin materials, the basis for 2D magnetic NEMS, however, remains elusive, although NEMS made of thicker magnetic materials or coated with FM metals have been studied.
References
Semiconductors
Two-dimensional nanomaterials
Condensed matter physics | Two-dimensional semiconductor | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,743 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Phases of matter",
"Materials science",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
47,006,545 | https://en.wikipedia.org/wiki/EN%2012255 | EN 12255 - Wastewater treatment plants refers to a set of European standards which specify the general requirements for structures and equipment of wastewater treatment plants for a total population of more than 50 PT (population total). This standard, however, does not cover the design of the treatment processes themselves. The standards consist of the following parts:
EN 12255-1: Part 1: General construction principles
EN 12255-2: Part 2: Performance requirements of raw wastewater pumping installations
EN 12255-3: Part 3: Preliminary treatment
EN 12255-4: Part 4: Primary settlement
EN 12255-5: Part 5: Lagooning processes
EN 12255-6: Part 6: Activated sludge process
EN 12255-7: Part 7: Biological fixed-film reactors
EN 12255-8: Part 8: Sludge treatment and storage
EN 12255-9: Part 9: Odour control and ventilation
EN 12255-10: Part 10: Safety principles
EN 12255-11: Part 11: General data required
EN 12255-12: Part 12: Control and automation
EN 12255-13: Part 13: Chemical treatment - Treatment of wastewater by precipitation/flocculation
EN 12255-14: Part 14: Disinfection
EN 12255-15: Part 15: Measurement of the oxygen transfer in clean water in aeration tanks of activated sludge plants
EN 12255-16: Part 16: Physical (mechanical) filtration
See also
List of EN standards
European Committee for Standardization
External links
European Committee for Standardization
References
12255
Sewerage | EN 12255 | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 320 | [
"Sewerage",
"Environmental engineering",
"Water pollution"
] |
47,008,203 | https://en.wikipedia.org/wiki/Complex%20oxide | A complex oxide is a chemical compound that contains oxygen and at least two other elements (or oxygen and just one other element that is in at least two oxidation states). Complex oxide materials are notable for their wide range of magnetic and electronic properties, such as ferromagnetism, ferroelectricity, and high-temperature superconductivity. These properties often arise from their strongly correlated electrons in d or f orbitals.
Natural occurrence
Many minerals found in the ground are complex oxides. Commonly studied mineral crystal families include spinels and perovskites.
Applications
Complex oxide materials are used in a variety of commercial applications.
Magnets
Magnets made of the complex oxide ferrite are commonly used in transformer cores and in inductors. Ferrites are ideal for these applications because they are magnetic, electrically insulating, and inexpensive.
Transducers and actuators
Piezoelectric transducers and actuators are often made of the complex oxide PZT (lead zirconate titanate). These transducers are used in applications such as ultrasound imaging and some microphones. PZT is also sometimes used for piezo ignition in lighters and gas grills.
Capacitors
Complex oxide materials are the dominant dielectric material in ceramic capacitors. About one trillion ceramic capacitors are produced each year to be used in electronic equipment.
Fuel cells
Solid oxide fuel cells often use complex oxide materials as their electrolytes, anodes, and cathodes.
Gemstone jewelry
Many precious gemstones, such as emerald and topaz, are complex oxide crystals. Historically, some complex oxide materials (such as strontium titanate, yttrium aluminium garnet, and gadolinium gallium garnet) were also synthesized as inexpensive diamond simulants, though after 1976 they were mostly eclipsed by cubic zirconia.
New electronic devices
As of 2015, there is research underway to commercialize complex oxides in new kinds of electronic devices, such as ReRAM, FeRAM, and memristors. Complex oxide materials are also being researched for their use in spintronics.
Another potential application of complex oxide materials is superconducting power lines. A few companies have invested in pilot projects, but the technology is not widespread.
Commonly studied complex oxides
Barium titanate (a multiferroic material)
Bismuth ferrite (a multiferroic material)
Bismuth strontium calcium copper oxide (a high-temperature superconductor)
Lanthanum aluminate (a high-dielectric insulator)
Lanthanum strontium manganite (a material exhibiting colossal magnetoresistance)
Lead zirconate titanate (a piezoelectric material)
Strontium titanate (a high-dielectric semiconductor)
Yttrium barium copper oxide (a high-temperature superconductor)
See also
Colossal magnetoresistance
Half-metal
Lanthanum aluminate-strontium titanate interface
Mixed oxide
Mott insulator
Multiferroics
References
External links
Materials science: Enter the oxides, Nature. (subscription required)
Condensed-matter physics: Complex oxides on fire
Complex oxides: A tale of two enemies
Oxide interfaces
Ferromagnetic materials
Superconductivity | Complex oxide | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 689 | [
"Electrical resistance and conductance",
"Physical quantities",
"Superconductivity",
"Ferromagnetic materials",
"Oxides",
"Materials science",
"Salts",
"Materials",
"Condensed matter physics",
"Matter"
] |
39,638,268 | https://en.wikipedia.org/wiki/Self-healing%20hydrogels | Self-healing hydrogels are a specialized type of polymer hydrogel. A hydrogel is a macromolecular polymer gel constructed of a network of crosslinked polymer chains. Hydrogels are synthesized from hydrophilic monomers by either chain or step growth, along with a functional crosslinker to promote network formation. A net-like structure along with void imperfections enhances the hydrogel's ability to absorb large amounts of water via hydrogen bonding. As a result, hydrogels, including self-healing ones, develop characteristically firm yet elastic mechanical properties. Self-healing refers to the spontaneous formation of new bonds when old bonds are broken within a material. The structure of the hydrogel, along with electrostatic attraction forces, drives new bond formation through covalent dangling-side-chain interactions or non-covalent hydrogen bonding. These flesh-like properties have motivated the research and development of self-healing hydrogels in fields such as reconstructive tissue engineering as scaffolding, as well as in passive and preventive applications.
Synthesis
A variety of different polymerization methods may be utilized for the synthesis of the polymer chains that make up hydrogels. Their properties depend on how these chains are crosslinked.
Crosslinking
Crosslinking is the process of joining two or more polymer chains. Both chemical and physical crosslinking exist. In addition, both natural polymers such as proteins and synthetic polymers with a high affinity for water may be used as starting materials when selecting a hydrogel. Different crosslinking methods can be implemented in the design of a hydrogel. By definition, a crosslinked polymer gel is a macromolecule that will not dissolve in solvent. Due to the polymeric domains created by crosslinking in the gel microstructure, hydrogels are not homogeneous within the selected solvent system. The following sections summarize the chemical and physical methods by which hydrogels are crosslinked.
Chemical crosslinking
Physical crosslinking
Interface chemistry of self-healing hydrogels
Hydrogen bonding
Hydrogen bonding is a strong intermolecular force that forms a special type of dipole-dipole attraction. Hydrogen bonds form when a hydrogen atom bonded to a strongly electronegative atom is near another electronegative atom with a lone pair of electrons. Hydrogen bonds are stronger than normal dipole-dipole interactions and dispersion forces but remain weaker than covalent and ionic bonds. In hydrogels, the structure and stability of water molecules are strongly affected by these bonds. The polar groups in the polymer strongly bind water molecules and form hydrogen bonds, which also cause hydrophobic effects to occur. These hydrophobic effects can be exploited to design physically crosslinked hydrogels that exhibit self-healing abilities. The hydrophobic effects, combined with the hydrophilic effects within the hydrogel structure, can be balanced through dangling side chains that mediate the hydrogen bonding between two separate hydrogel pieces or across a ruptured hydrogel.
Dangling side chain
A dangling side chain is a hydrocarbon chain that branches off the backbone of the polymer, with polar functional groups attached to it. The side chains "dangle" across the surface of the hydrogel, allowing them to interact with other functional groups and form new bonds. The ideal side chain is long and flexible enough to reach across the surface and react, but short enough to minimize steric hindrance and collapse from the hydrophobic effect. The side chains must keep the hydrophobic and hydrophilic effects in balance. In a study performed at the University of California San Diego, hydrogels of varying side chain lengths with similar crosslinking contents were compared, and the results showed that the healing ability of the hydrogels depends nonmonotonically on side chain length. With shorter side chains, the limited reach of the carboxyl group reduces the mediation of hydrogen bonds across the interface. As the chain increases in length, the carboxyl group becomes more flexible and can mediate the hydrogen bonds. However, when a side chain is too long, it interrupts the interaction between the carboxyl and amide groups that mediate the hydrogen bonds; it can also accumulate, collapse the hydrogel, and prevent healing from occurring.
Surfactant effects
Most self-healing hydrogels rely on electrostatic attraction to spontaneously create new bonds. The electrostatic attraction can be masked by protonation of the polar functional groups. When the pH is raised, the polar functional groups become deprotonated, freeing them to react.
Since the hydrogels rely on electrostatic attraction for self-healing, the process can be affected by electrostatic screening. The effect of a change in salinity can be modeled using the Gouy–Chapman–Stern double-layer theory, in which the potential decays as

ψ = ψ₀ exp(−κx)

ψ₀: zeta potential
κ: salinity factor, set by the salinity of the solution (the inverse Debye screening length)
x: distance between molecules, if the polar functional group is one molecule and an ion in solution is the other

To calculate the Gouy–Chapman potential, the salinity factor must be calculated. The expression for the salinity factor is as follows:

κ = √( 2nz²e² / (ε_r ε₀ k_B T) )

z: charge of the ion
e: elementary charge, 1.602 × 10⁻¹⁹ C
n: number of ions per cubic meter
ε_r: dielectric constant of the solvent
ε₀: 8.854 × 10⁻¹² F/m, the permittivity of free space
k_B: 1.381 × 10⁻²³ J/K, the Boltzmann constant
T: thermodynamic temperature
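To make the screening concrete, here is a short calculation of the Debye length implied by the expression for κ above; the blood-like conditions (150 mM 1:1 salt, 37 °C, water-like dielectric constant) are illustrative.

```python
import math

def debye_length(c_molar, z=1, eps_r=78.5, T=310.0):
    """Debye screening length (m) for a symmetric z:z electrolyte."""
    e = 1.602e-19            # C, elementary charge
    kB = 1.381e-23           # J/K, Boltzmann constant
    eps0 = 8.854e-12         # F/m, permittivity of free space
    NA = 6.022e23            # 1/mol, Avogadro constant
    n = c_molar * 1e3 * NA   # ions per cubic meter
    kappa = math.sqrt(2 * n * (z * e)**2 / (eps_r * eps0 * kB * T))
    return 1.0 / kappa

print(f"~150 mM NaCl: {debye_length(0.150) * 1e9:.2f} nm")  # sub-nanometer screening
```

At physiological salinity the electrostatic attraction is screened beyond roughly a nanometer, which is why salinity matters for self-healing in the body.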
These effects also come into play during synthesis when trying to add large hydrophobes to a hydrophilic polymer backbone. A research group from the Istanbul Technical University has shown that large hydrophobes can be added by adding a sufficient amount of electrolyte. During synthesis, the hydrophobes were held in micelles before attaching to the polymer backbone. By increasing the salinity of the solution, the micelles were able to grow and encompass more hydrophobes. With more hydrophobes in a micelle, the solubility of the hydrophobe increases, and this increase in solubility led to an increase in the formation of hydrogels with large hydrophobes.
Physical properties
Surface properties
Surface tension and energy
The surface tension (γ) of a material is directly related to its intramolecular and intermolecular forces: the stronger the forces, the greater the surface tension. Surface tension can be modeled in terms of the energy of vaporization ΔvapU, the Avogadro constant NA, and the surface area per molecule a2. This also implies that the energy of vaporization affects surface tension, since the stronger the force, the higher the energy of vaporization. Surface tension can then be used to calculate the surface energy (uσ):

uσ = γ − T(∂γ/∂T)

where T is temperature and the derivative is taken at constant pressure and area. Specifically for hydrogels, the free surface energy can be predicted using the Flory–Huggins free energy function for the hydrogels.
For hydrogels, surface tension plays a role in several additional characteristics including swelling ratio and stabilization.
Swelling
Hydrogels have the remarkable ability to swell in water and aqueous solvents. During the process of swelling, surface instability can occur. This instability depends on the thickness of the hydrogel layers and the surface tension. A higher surface tension stabilizes the flat outermost layer of the hydrogel. The swelling ratio λh of the flat layer can be derived from the Flory–Huggins theory of free surface energy in hydrogels; it depends on the chemical potential μ, the pressure p, the Boltzmann constant kB, and the dimensionless hydrogel constants χ and Nv.
As swelling increases, mechanical properties generally suffer.
Surface deformation
The surface deformation of hydrogels is important because it can result in self-induced cracking. Each hydrogel has a characteristic wavelength of instability (λ) that depends on the elastocapillary length, calculated by dividing the surface tension (γ) by the elasticity (μ) of the hydrogel. The greater the wavelength of instability, and hence the elastocapillary length, the more prone the material is to cracking. The characteristic wavelength of instability also scales with the thickness H of the hydrogel.
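A back-of-the-envelope check of the quantities above; the surface tension and shear modulus values are order-of-magnitude assumptions for a soft hydrogel, not measured values.

```python
# Elastocapillary length gamma/mu, compared to the film thickness H,
# per the scaling discussed above. All inputs are illustrative.
gamma = 0.07    # N/m, surface tension (water-like)
mu = 1e3        # Pa, shear modulus of a very soft hydrogel
H = 100e-6      # m, hydrogel layer thickness

l_ec = gamma / mu   # elastocapillary length, m
verdict = "comparable to" if 0.1 < l_ec / H < 10 else "far from"
print(f"elastocapillary length: {l_ec * 1e6:.0f} um ({verdict} H = {H * 1e6:.0f} um)")
```

For soft gels the elastocapillary length can reach tens of micrometers, comparable to the layer thickness, which is why capillary-driven surface instabilities matter for these materials.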
Critical solution temperature
Some hydrogels are able to respond to stimuli in their surrounding environment. Examples of such stimuli include light, temperature, pH, and electric fields. Hydrogels that are temperature sensitive are known as thermogels. Thermo-responsive hydrogels undergo a reversible, thermally induced phase transition upon reaching either the upper or the lower critical solution temperature. Swelling of the crosslinked network occurs in the presence of a proper solvent: voids in the gel microstructure, where crosslinking agent or monomer aggregated during polymerization, allow solvent to diffuse into or out of the hydrogel. The microstructure of a hydrogel is therefore not constant, and water from outside the gel can accumulate in these imperfections. This process is temperature dependent, and solvent behavior depends on whether the solvent-gel system has reached, or surpassed, the lower critical solution temperature (LCST). The LCST defines the boundary at which a gel or polymer chain in solvent separates into one or two phases. The spinodal and binodal regions of a polymer-solvent phase diagram represent the energetic favorability of the hydrogel becoming miscible in solution or separating into two phases.
Applications
Medical uses
Self-healing hydrogels encompass a wide range of applications. With a high biocompatibility, hydrogels are useful for a number of medical applications. Areas where active research is currently being conducted include:
Absorbable sutures
Tissue engineering and regeneration
Drug delivery
Tissue engineering and regeneration
Polymer scaffolds
Hydrogels are created from crosslinked polymers that are water-insoluble. Polymer hydrogels absorb significant amounts of aqueous solution and therefore have a high water content. This high water content makes hydrogels more similar to living body tissue than any other material for tissue regeneration. Additionally, polymer scaffolds using self-healing hydrogels are structurally similar to the extracellular matrices of many tissues. Scaffolds act as three-dimensional artificial templates onto which the tissue targeted for reconstruction is cultured and grown. The high porosity of hydrogels allows for the diffusion of cells during migration, as well as the transfer of nutrients to, and waste products away from, cellular membranes. Scaffolds are subject to harsh processing conditions during tissue culturing, including mechanical stimulation to promote cellular growth, which places stress on the scaffold structure. This stress may lead to localized rupturing of the scaffold, which is detrimental to the reconstruction process. In a self-healing hydrogel scaffold, ruptured regions have the ability to locally repair their damaged three-dimensional structure.
Current research is exploring the effectiveness of using various types of hydrogel scaffolds for tissue engineering and regeneration including synthetic hydrogels, biological hydrogels, and biohybrid hydrogels.
In 2019, researchers Biplab Sarkar and Vivek Kumar of the New Jersey Institute of Technology developed a self-assembling peptide hydrogel that has proven successful in increasing blood vessel regrowth and neuron survival in rats affected by traumatic brain injuries (TBI). By adapting the hydrogel to closely resemble brain tissue and injecting it into the injured areas of the brain, the researchers' studies have shown improved mobility and cognition after only a week of treatment. If trials continue to prove successful, this peptide hydrogel may be approved for human trials and eventual widespread use in the medical community as a treatment for TBIs. This hydrogel also has the potential to be adapted to other forms of tissue in the human body, promoting regeneration and recovery from other injuries.
Synthetic hydrogels
Polyethylene glycol (PEG) hydrogels
Poly (2-hydroxyethyl methacrylate) (PHEMA) hydrogels
Polyethylene glycol (PEG) polymers are synthetic materials that can be crosslinked to form hydrogels. PEG hydrogels are not toxic to the body, do not elicit an immune response, and have been approved by the US Food and Drug Administration for clinical use. The surfaces of PEG polymers are easily modified with peptide sequences that can attract cells for adhesion and could therefore be used for tissue regeneration.
Poly (2-hydroxyethyl methacrylate) (PHEMA) hydrogels can be combined with rosette nanotubes (RNTs). RNTs can emulate skin structures such as collagen and keratin and self-assemble when injected into the body. This type of hydrogel is being explored for use in skin regeneration and has shown promising results such as fibroblast and keratinocyte proliferation. Both of these cell types are crucial for the production of skin components.
Biological hydrogels
Biological hydrogels are derived from preexisting components of body tissues such as collagen, hyaluronic acid (HA), or fibrin. Collagen, HA, and fibrin are components that occur naturally in the extracellular matrix of mammals. Collagen is the main structural component in tissues and it already contains cell-signaling domains that can promote cell growth. In order to mechanically enhance collagen into a hydrogel, it must be chemically crosslinked, crosslinked using UV light or temperature, or mixed with other polymers. Collagen hydrogels would be nontoxic and biocompatible.
Hybrid hydrogels
Hybrid hydrogels combine synthetic and biological materials and take advantage of the best properties of each. Synthetic polymers are easily customizable and can be tailored for specific functions such as biocompatibility. Biological polymers such as peptides also have advantageous properties such as specificity of binding and high affinity for certain cells and molecules. A hybrid of these two polymer types allows for the creation of hydrogels with novel properties. An example of a hybrid hydrogel would be a synthetically created polymer with several peptide domains.
Integrated fibre nanostructures
Peptide-based self-healing hydrogels may be selectively grown onto nanofiber material, which can then be incorporated into the desired reconstructive tissue target. The hydrogel framework is then chemically modified to promote cell adhesion to the nanofiber peptide scaffold. Because the growth of the extracellular matrix scaffold is pH dependent, pH response must be factored in when selecting the scaffolding material.
Drug delivery
The swelling and bioadhesion of hydrogels can be controlled based on the fluid environment they are introduced to in the body. These properties make them excellent for use as controlled drug delivery devices. Where the hydrogel adheres in the body will be determined by its chemistry and reactions with the surrounding tissues. If introduced by mouth, the hydrogel could adhere to anywhere in the gastrointestinal tract including the mouth, the stomach, the small intestine, or the colon. Adhesion in a specifically targeted region will cause for a localized drug delivery and an increased concentration of the drug taken up by the tissues.
Smart hydrogels in drug delivery
Smart hydrogels are sensitive to stimuli such as changes in temperature or pH. Changes in the environment alter the swelling properties of the hydrogels and can cause them to increase or decrease the release of the drug impregnated into the fibers. An example of this would be hydrogels that release insulin in the presence of high glucose levels in the bloodstream. These glucose-sensitive hydrogels are modified with the enzyme glucose oxidase. In the presence of glucose, glucose oxidase catalyzes a reaction that produces increased levels of H+. These H+ ions lower the pH of the surrounding environment and could therefore trigger a change in a smart hydrogel that initiates the release of insulin.
Other uses
Although research is currently focusing on the bioengineering aspect of self-healing hydrogels, several non-medical applications do exist, including:
pH meters
Sealants for acid leaks
pH Meter
Dangling-type side chain self-healing hydrogels are activated by changes in the relative acidity of the solution they are in. Depending on the user-specified application, side chains may be selectively used in self-healing hydrogels as pH indicators. If a functional group chain end with a low pKa, such as a carboxylic acid, is subjected to neutral pH conditions, water will deprotonate the acidic chain ends, activating them. Crosslinking, or what is known as self-healing, will begin, causing two or more separated hydrogels to fuse into one.
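The fraction of such acidic chain ends that is deprotonated (and hence available for crosslinking) at a given pH follows directly from the Henderson–Hasselbalch relation. A short sketch makes this quantitative; the pKa below is an assumed, generic carboxylic-acid value:

def deprotonated_fraction(ph, pka=4.8):
    # Henderson-Hasselbalch: fraction of -COOH groups present as -COO-.
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

for ph in [2.0, 4.8, 7.0]:
    print(f"pH {ph:.1f}: {deprotonated_fraction(ph):.0%} of chain ends active")

At pH 2 almost all chain ends stay protonated (inactive), at the pKa exactly half are active, and at neutral pH nearly all are deprotonated, consistent with the activation behavior described above.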
Sealant
Research into the use of self-healing hydrogels has revealed an effective method for mitigating acid spills through their ability to selectively crosslink under acidic conditions. In testing done at the University of California San Diego, various surfaces were coated with self-healing hydrogels and then mechanically damaged with 300-micrometer-wide cracks; the coatings healed the cracks within seconds upon exposure to low-pH buffers. The hydrogels can also adhere to various plastics due to hydrophobic interactions. Both findings suggest the use of these hydrogels as sealants for vessels containing corrosive acids. No commercial applications currently exist for this technology.
Derivatives
Drying of hydrogels under controlled circumstances may yield xerogels and aerogels. A xerogel is a solid that retains significant porosity (15–50%) with a very small pore size (1–10 nm). In an aerogel, the porosity is somewhat higher and the pores are more than an order of magnitude larger, resulting in an ultra-low-density material with a low thermal conductivity and an almost translucent, smoke-like appearance.
See also
Self healing material
Biopolymer
Tissue engineering
Biosensor
Supramolecular chemistry
Gel
Hydrogel
Surface chemistry
References
Further reading
Polymer chemistry
Tissue engineering | Self-healing hydrogels | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 3,844 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Materials science",
"Tissue engineering",
"Polymer chemistry",
"Medical technology"
] |
39,639,151 | https://en.wikipedia.org/wiki/Depletion%20force | A depletion force is an effective attractive force that arises between large colloidal particles that are suspended in a dilute solution of depletants, which are smaller solutes that are preferentially excluded from the vicinity of the large particles. One of the earliest reports of depletion forces that lead to particle coagulation is that of Bondy, who observed the separation or "creaming" of rubber latex upon addition of polymer depletant molecules (sodium alginate) to solution. More generally, depletants can include polymers, micelles, osmolytes, ink, mud, or paint dispersed in a continuous phase.
Depletion forces are often regarded as entropic forces, as was first explained by the established Asakura–Oosawa model. In this theory the depletion force arises from an increase in osmotic pressure of the surrounding solution when colloidal particles get close enough such that the excluded cosolutes (depletants) cannot fit in between them.
Because the particles were considered as hard-core (completely rigid) particles, the emerging picture of the underlying mechanism inducing the force was necessarily entropic.
Causes
Sterics
The system of colloids and depletants in solution is typically modeled by treating the large colloids and small depletants as dissimilarly sized hard spheres. Hard spheres are characterized as non-interacting and impenetrable spheres. These two fundamental properties of hard spheres are described mathematically by the hard-sphere potential. The hard-sphere potential imposes steric constraint around large spheres which in turn gives rise to excluded volume, that is, volume that is unavailable for small spheres to occupy.
Hard-sphere potential
In a colloidal dispersion, the colloid–colloid interaction potential is approximated as the interaction potential between two hard spheres. For two hard spheres of diameter σ, the interaction potential as a function of the interparticle separation is

V(r) = \begin{cases} \infty, & r < \sigma \\ 0, & r \geq \sigma \end{cases}

called the hard-sphere potential, where r is the center-to-center distance between the spheres.
If both colloids and depletants are in a dispersion, there is an interaction potential between colloidal particles and depletant particles that is described similarly by the hard-sphere potential. Again approximating the particles as hard spheres, the interaction potential between colloids of diameter σ_c and depletant sols of diameter σ_d is

V(r) = \begin{cases} \infty, & r < (\sigma_c + \sigma_d)/2 \\ 0, & r \geq (\sigma_c + \sigma_d)/2 \end{cases}

where r is the center-to-center distance between the spheres. Typically, depletant particles are very small compared to the colloids, so σ_d ≪ σ_c.
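As a minimal illustration (the function name, units and test values are mine, not from the source), both potentials can be written as a single Python function:

import math

def hard_sphere_potential(r, d1, d2):
    """Pair potential between hard spheres of diameters d1 and d2: infinite
    when the spheres overlap (centers closer than the sum of their radii),
    zero otherwise -- no attraction, no soft repulsion."""
    contact = 0.5 * (d1 + d2)
    return math.inf if r < contact else 0.0

print(hard_sphere_potential(0.9, 1.0, 1.0))   # inf: two colloids overlapping
print(hard_sphere_potential(0.6, 1.0, 0.1))   # 0.0: colloid-depletant pair apart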
The underlying consequence of the hard-sphere potential is that dispersed colloids cannot penetrate each other and have no mutual attraction or repulsion.
Excluded volume
When both large colloidal particles and small depletants are in a suspension, there is a region surrounding every large colloidal particle that is unavailable for the centers of the depletants to occupy. This steric restriction is due to the colloid–depletant hard-sphere potential. The volume of the excluded region around each large sphere is

V_e = \frac{\pi}{6}(\sigma_c + \sigma_d)^3

where σ_c is the diameter of the large spheres and σ_d is the diameter of the small spheres.
When the large spheres get close enough, the excluded volumes surrounding the spheres intersect. The overlapping volumes result in a reduced excluded volume, that is, an increase in the total free volume available to small spheres. The reduced excluded volume of the pair can be written

V_{ex} = 2V_e - \frac{2\pi}{3}x^2(3R - x)

where R = (σ_c + σ_d)/2 is the radius of the excluded region around each large sphere and x = R − r/2 is half the width of the lens-shaped region of overlap volume formed by the two spherical caps. The volume available for small spheres is the difference between the total volume of the system and the excluded volume. To determine the available volume for small spheres, there are two distinguishable cases: first, the separation of the large spheres is big enough that small spheres can penetrate in between them; second, the large spheres are close enough that small spheres cannot penetrate between them. For each case, the available volume for small spheres is given by

V_{av} = \begin{cases} V - 2V_e, & r \geq \sigma_c + \sigma_d \\ V - \left(2V_e - \frac{2\pi}{3}x^2(3R - x)\right), & r < \sigma_c + \sigma_d \end{cases}

where V is the total volume of the system and r is the center-to-center distance between the large spheres.
In the latter case small spheres are depleted from the interparticle region between large spheres and a depletion force ensues.
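A sketch of these geometric formulas in Python (symbols follow the equations above; the box volume, diameters and separations are arbitrary test values):

import math

def lens_volume(r, d_large, d_small):
    """Overlap of the two excluded-volume spheres of radius R, centers r apart."""
    R = 0.5 * (d_large + d_small)
    if r >= 2.0 * R:
        return 0.0                      # excluded volumes do not intersect
    x = R - 0.5 * r                     # half-width of the lens
    return (2.0 * math.pi / 3.0) * x**2 * (3.0 * R - x)

def available_volume(r, V_total, d_large, d_small):
    """Volume accessible to depletant centers around a pair of large spheres."""
    V_e = (math.pi / 6.0) * (d_large + d_small) ** 3
    return V_total - 2.0 * V_e + lens_volume(r, d_large, d_small)

# The available volume grows as two colloids (diameter 1.0) approach each
# other in a box of volume 10 filled with depletants of diameter 0.1:
for r in [1.2, 1.1, 1.0]:
    print(r, available_volume(r, 10.0, 1.0, 0.1))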
Thermodynamics
The depletion force is described as an entropic force because it is fundamentally a manifestation of the second law of thermodynamics, which states that a system tends to increase its entropy. The gain in translational entropy of the depletants, owing to the increased available volume, is much greater than the loss of entropy from flocculation of the colloids. The positive change in entropy lowers the Helmholtz free energy and causes colloidal flocculation to happen spontaneously. The system of colloids and depletants in a solution is modeled as a canonical ensemble of hard spheres for statistical determinations of thermodynamic quantities.
However, recent experiments and theoretical models found that depletion forces can be enthalpically driven. In these instances, the intricate balance of interactions between the solution components results in the net exclusion of cosolute from macromolecule. This exclusion results in an effective stabilization of the macromolecule self-association, which can be not only enthalpically dominated, but also entropically unfavorable.
Entropy and Helmholtz energy
The total volume available for small spheres increases when the excluded volumes around large spheres overlap. The increased volume allotted to small spheres allows them greater translational freedom, which increases their entropy. Because the canonical ensemble is an athermal system at constant volume, the change in Helmholtz free energy is written

\Delta A = -T \Delta S

where A is the Helmholtz free energy, S is the entropy and T is the temperature. The system's net gain in entropy from the increased volume is positive, thus the change in Helmholtz free energy is negative and depletion flocculation happens spontaneously.
The free energy of the system is obtained from the statistical definition of the Helmholtz free energy

A = -k_B T \ln Z

where Z is the partition function for the canonical ensemble. The partition function contains statistical information that describes the canonical ensemble, including its total volume, the total number of small spheres, the volume available for small spheres to occupy, and the de Broglie wavelength. If hard spheres are assumed, the N_s small spheres behave as an ideal gas in the available volume and the partition function is

Z = \frac{V_{av}^{N_s}}{N_s!\,\Lambda^{3N_s}}

The volume available for small spheres, V_{av}, was calculated above; N_s is the number of small spheres and Λ is the de Broglie wavelength. Substituting Z into the statistical definition, the Helmholtz free energy now reads

A = -k_B T \ln\left(\frac{V_{av}^{N_s}}{N_s!\,\Lambda^{3N_s}}\right)
The magnitude of the depletion force, F_{dep}, is equal to the change in Helmholtz free energy with the distance between the two large spheres and is given by

F_{dep} = -\frac{\partial A}{\partial r}
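Putting the pieces together, the depletion force can be evaluated numerically as the negative derivative of this free energy with respect to separation. The sketch below uses arbitrary illustrative parameters (particle count, box volume and diameters are assumptions) and drops the r-independent N! and wavelength terms, which do not contribute to the force:

import math

kB_T, N_small = 1.0, 1000            # k_B*T units; depletant count (assumed)
V_box, dL, dS = 10.0, 1.0, 0.1       # box volume, colloid & depletant diameters

def v_available(r):
    R = 0.5 * (dL + dS)
    x = max(R - 0.5 * r, 0.0)                       # half-width of lens overlap
    lens = (2.0 * math.pi / 3.0) * x**2 * (3.0 * R - x)
    return V_box - 2.0 * (math.pi / 6.0) * (dL + dS) ** 3 + lens

def depletion_force(r, dr=1e-5):
    # F = -dA/dr with A = -N kT ln(V_av); central finite difference.
    A = lambda rr: -N_small * kB_T * math.log(v_available(rr))
    return -(A(r + dr) - A(r - dr)) / (2.0 * dr)

# Negative values: the force points toward decreasing r, i.e. attraction.
for r in [1.05, 1.02, 1.005]:
    print(f"r = {r:.3f}: F = {depletion_force(r):+.2f} (kT / length unit)")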
The entropic nature of depletion forces was proven experimentally in some cases. For example, some polymeric crowders induce entropic depletion forces that stabilize proteins in their native state.
Other examples include many systems with hard-core only interactions.
Osmotic pressure
The depletion force is an effect of increased osmotic pressure in the surrounding solution.
When colloids get sufficiently close, that is when their excluded volumes overlap, depletants are expelled from the interparticle region. This region between colloids then becomes a phase of pure solvent. When this occurs, there is a higher depletant concentration in the surrounding solution than in the interparticle region. The resulting density gradient gives rise to an osmotic pressure that is anisotropic in nature, acting on the outer sides of the colloids and promoting flocculation. If the hard-sphere approximation is employed, the osmotic pressure is:
\Pi = n k_B T

where Π is the osmotic pressure, n is the number density of small spheres and k_B is the Boltzmann constant.
Asakura–Oosawa model
Depletion forces were first described by Sho Asakura and Fumio Oosawa in 1954. In their model, the force is always considered to be attractive. Additionally, the force is considered to be proportional to the osmotic pressure. The Asakura–Oosawa model assumes low macromolecule densities and that the density distribution, , of the macromolecules is constant. Asakura and Oosawa described four cases in which depletion forces would occur. They first described the most general case as two solid plates in a solution of macromolecules. The principles for the first case were then extended to three additional cases.
Free energy change due to the depletion force
In the Asakura–Oosawa model for depletion forces, the change in free energy imposed by an excluded cosolute is

\Delta G = \Pi\,\Delta V

where Π is the osmotic pressure and ΔV is the change in excluded volume (which is related to molecular size and shape). The very same result can be derived using the Kirkwood–Buff solution theory.
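In numbers, this relation is a one-liner. The following sketch estimates the free-energy gain when two colloids come into contact; the depletant number density and the excluded-volume change are assumed illustrative values, roughly typical of micrometre-scale colloids with nanoscale depletants:

kB = 1.380649e-23     # Boltzmann constant, J/K
T = 298.0             # temperature, K
n = 1.0e21            # depletant number density, 1/m^3 (assumed)
dV = -2.0e-21         # change in excluded volume on contact, m^3 (assumed)

Pi = n * kB * T       # van 't Hoff osmotic pressure of the dilute depletant
dG = Pi * dV          # Asakura-Oosawa free-energy change
print(f"Pi = {Pi:.2f} Pa, dG = {dG:.2e} J = {dG / (kB * T):.1f} kT")

With these values the contact attraction comes out near two k_B T, enough to noticeably bias colloids toward aggregation.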
Solid plates in a solution of macromolecules
In the first case, two solid plates are placed in a solution of rigid spherical macromolecules. If the distance between the two plates, a, is smaller than the diameter of the solute molecules, d, then no solute can enter between the plates. This results in pure solvent existing between the plates. The difference in concentration of macromolecules between the gap and the bulk solution causes a force equal to the osmotic pressure to act on the plates. In a very dilute and monodisperse solution the force per unit area of the plates is

P = -\frac{N}{V} k_B T

where N is the total number of solute molecules in the solution volume V. The force causes the entropy of the macromolecules to increase and is attractive when a < d.
Rod-like macromolecules
Asakura and Oosawa described the second case as consisting of two plates in a solution of rod-like macromolecules. The rod-like macromolecules are described as having a length l, with l² much smaller than A, the area of the plates. As the length of the rods increases, the concentration of the rods between the plates decreases, as it becomes more difficult for the rods to enter between the plates due to steric hindrance. As a result, the force acting on the plates increases with the length of the rods until it becomes equal to the osmotic pressure. In this context, it is worth mentioning that even the isotropic–nematic transition of lyotropic liquid crystals, as first explained in Onsager's theory, can in itself be considered a special case of depletion forces.
Plates in a solution of polymers
The third case described by Asakura and Oosawa is two plates in a solution of polymers. Due to the size of the polymers, their concentration in the neighborhood of the plates is reduced, which results in the conformational entropy of the polymers being decreased. The case can be approximated by modeling it as diffusion in a vessel with walls that absorb diffusing particles. The force F can then be calculated as the sum of an attractive contribution from the osmotic effect and a repulsive contribution due to chain molecules confined between the plates; the range of the interaction is of the order of ⟨R⟩, the mean end-to-end distance of the chain molecules in free space.
Large hard spheres in a solution of small hard spheres
The final case described by Asakura and Oosawa concerns two large, hard spheres of diameter σ_L in a solution of small, hard spheres of diameter σ_S. If the distance between the centers of the spheres, r, is less than σ_L + σ_S, then the small spheres are excluded from the space between the large spheres. This results in the region between the large spheres having a reduced concentration of small spheres and therefore reduced entropy. This reduced entropy causes a force to act upon the large spheres, pushing them together. This effect was convincingly demonstrated in experiments with vibrofluidized granular materials, where the attraction can be directly visualized.
Improvements upon Asakura–Oosawa model
Derjaguin approximation
Theory
Asakura and Oosawa assumed low concentrations of macromolecules. However, at high concentrations of macromolecules, structural correlation effects in the macromolecular liquid become important. Additionally, the repulsive interaction strength strongly increases for large size ratios (large radius/small radius). In order to account for these issues, the Derjaguin approximation, which is valid for any type of force law, has been applied to depletion forces. The Derjaguin approximation relates the force between two spheres to the force between two plates. The force is then integrated between small regions on one surface and the opposite surface, which is assumed to be locally flat.
Equations
If there are two spheres of radii R_1 and R_2 along the z axis, separated by a distance h that is much smaller than R_1 and R_2, then the force F in the z direction is

F(h) = 2\pi \left(\frac{R_1 R_2}{R_1 + R_2}\right) W(h)

In this equation, W(h) = \int_h^{\infty} f(H)\,dH, and f(H) is the normal force per unit area between two flat surfaces a distance H apart.
When the Derjaguin approximation is applied to depletion forces, and 0 < h < 2RS, then the depletion force given by the Derjaguin approximation is
In this equation, k is the geometrical factor, which is set to 1, and γ is the interfacial tension at the wall–fluid interface.
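As an illustration of the general Derjaguin construction (not of the specific depletion formula above), the sketch below numerically integrates a plate–plate force law to obtain the sphere–sphere force; the exponentially decaying attraction and all parameter values are placeholders:

import math

def derjaguin_force(h, R1, R2, f_plate, h_max=50.0, n=20000):
    """F(h) = 2*pi*(R1*R2/(R1+R2)) * integral_h^inf f_plate(H) dH (trapezoid)."""
    R_eff = R1 * R2 / (R1 + R2)
    dH = (h_max - h) / n
    W = 0.0
    for i in range(n + 1):
        H = h + i * dH
        weight = 0.5 if i in (0, n) else 1.0
        W += weight * f_plate(H) * dH
    return 2.0 * math.pi * R_eff * W

# Placeholder plate-plate law: exponentially decaying attraction per unit area,
# whose integral from h to infinity is -exp(-h), so F(h) = -2*pi*R_eff*exp(-h).
f_plate = lambda H: -math.exp(-H)

for h in [0.5, 1.0, 2.0]:
    print(f"h = {h}: F = {derjaguin_force(h, 10.0, 10.0, f_plate):+.3f}")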
Density functional theory
Theory
Asakura and Oosawa assumed a uniform particle density, which is true in a homogeneous solution. However, if an external potential is applied to a solution, then the uniform particle density is disrupted, making Asakura and Oosawa's assumption invalid. Density functional theory accounts for variations in particle density by using the grand canonical potential. The grand canonical potential, which is a state function for the grand canonical ensemble, is used to calculate the probability density for microscopic states in macroscopic state. When applied to depletion forces, the grand canonical potential calculates the local particle densities in a solution.
Equations
Density functional theory states that when any fluid is exposed to an external potential V_{ext}(\mathbf{r}), all equilibrium quantities become functionals of the number density profile \rho(\mathbf{r}). As a result, the total free energy is minimized. The grand canonical potential Ω is then written

\Omega[\rho] = F[\rho] + \int d\mathbf{r}\,\rho(\mathbf{r})\left[V_{ext}(\mathbf{r}) - \mu\right]

where μ is the chemical potential, T is the temperature, and F[ρ] is the Helmholtz free energy functional.
Enthalpic depletion forces
The original Asakura–Oosawa model considered only hard-core interactions. In such an athermal mixture the origin of depletion forces is necessarily entropic. If the intermolecular potentials also include repulsive and/or attractive terms, and if the solvent is considered explicitly, the depletion interaction can have additional thermodynamic contributions.
The notion that depletion forces can also be enthalpically driven has surfaced due to recent experiments regarding protein stabilization induced by compatible osmolytes, such as trehalose, glycerol, and sorbitol. These osmolytes are preferentially excluded from protein surfaces, forming a layer of preferential hydration around the proteins. When the protein folds - this exclusion volume diminishes, making the folded state lower in free energy. Hence the excluded osmolytes shift the folding equilibrium towards the folded state. This effect was generally thought to be an entropic force, in the spirit of the original Asakura–Oosawa model and of macromolecular crowding. However, thermodynamic breakdown of the free-energy gain due to osmolyte addition showed the effect is in fact enthalpically driven, whereas entropy can even be disfavorable.
For many cases, the molecular origin of this enthalpically driven depletion force can be traced to an effective "soft" repulsion in the potential of mean force between macromolecule and cosolute. Both Monte-Carlo simulations and a simple analytic model demonstrate that when the hard-core potential (as in Asakura and Oosawa's model) is supplemented with an additional repulsive "softer" interaction, the depletion force can become enthalpically dominated.
Measurement and experimentation
Depletion forces have been observed and measured using a variety of instrumentation including atomic force microscopy, optical tweezers, and hydrodynamic force balance machines.
Atomic force microscopy
Atomic force microscopy (AFM) is commonly used to directly measure the magnitude of depletion forces. This method uses the deflection of a very small cantilever contacting a sample which is measured by a laser. The force required to cause a certain amount of beam deflection can be determined from the change in angle of the laser. The small scale of AFM allows for dispersion particles to be measured directly yielding a relatively accurate measurement of depletion forces.
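The deflection-to-force conversion is simply Hooke's law for the calibrated cantilever. A minimal sketch with assumed instrument constants:

# Hooke's law for the AFM cantilever: force from measured deflection.
k_cantilever = 0.05      # spring constant, N/m (assumed soft contact lever)
deflection = 2.0e-9      # tip deflection inferred from the laser signal, m

force = k_cantilever * deflection
print(f"restoring force = {force:.2e} N")   # ~1e-10 N, a typical AFM force scale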
Optical tweezers
The force required to separate two colloid particles can be measured using optical tweezers. This method uses a focused laser beam to apply an attractive or repulsive force on dielectric micro and nanoparticles. This technique is used with dispersion particles by applying a force which resists depletion forces. The displacement of the particles is then measured and used to find the attractive force between the particles.
Hydrodynamic force balance
HFB machines measure the strength of particle interactions using liquid flow to separate the particles. This method finds the depletion force strength by adhering one particle of a dispersed doublet to a static plate and applying shear force through fluid flow. The drag created on the dispersed particles resists the depletion force between them, pulling the free particle away from the adhered particle. A force balance on the particles at separation can be used to determine the depletion force between them.
Colloidal destabilization
Mechanism
Depletion forces are used extensively as a method of destabilizing colloids. By introducing particles into a colloidal dispersion, attractive depletion forces can be induced between dispersed particles. These attractive interactions bring the dispersed particles together resulting in flocculation. This destabilizes the colloid as the particles are no longer dispersed in the liquid but concentrated in floc formations. Flocs are then easily removed through filtration processes leaving behind a non-dispersed, pure liquid.
Water treatment
The use of depletion forces to initiate flocculation is a common process in water treatment. The relatively small size of dispersed particles in waste water renders typical filtration methods ineffective. However, if the dispersion is destabilized and flocculation occurs, the particles can then be filtered out to produce pure water. Therefore, coagulants and flocculants are typically introduced to waste water to create these depletion forces between the dispersed particles.
Winemaking
Some wine production methods also use depletion forces to remove dispersed particles from wine. Unwanted colloidal particles can be found in wine originating from the must or produced during the winemaking process. These particles typically consist of carbohydrates, pigmentation molecules, or proteins which may adversely affect the taste and purity of the wine. Therefore, flocculants are often added to induce floc precipitation for easy filtration.
Common flocculants
The table below lists common flocculants along with their chemical formulas, net electrical charge, molecular weight and current applications.
Biological systems
There are suggestions that depletion forces may be a significant contributor in some biological systems, specifically in membrane interactions between cells or any membranous structure. With concentrations of large molecules such as proteins or carbohydrates in the extracellular matrix, it is likely some depletion force effects are observed between cells or vesicles that are very close. However, due to the complexity of most biological systems, it is difficult to determine how much these depletion forces influence membrane interactions. Models of vesicle interactions with depletion forces have been developed, but these are greatly simplified and their applicability to real biological systems is questionable.
Generalization: anisotropic colloids and systems without polymers
Depletion forces in colloid–polymer mixtures drive colloids to form aggregates that are densely packed locally. This local dense packing is also observed in colloidal systems without polymer depletants. Without polymer depletants the mechanism is similar, because the particles in a dense colloidal suspension act, effectively, as depletants for one another. This effect is particularly striking for anisotropically shaped colloidal particles, where the anisotropy of the shape leads to the emergence of directional entropic forces that are responsible for the ordering of hard anisotropic colloids into a wide range of crystal structures.
References
Colloidal chemistry
Thermodynamic entropy
Soft matter | Depletion force | [
"Physics",
"Chemistry",
"Materials_science"
] | 4,209 | [
"Colloidal chemistry",
"Physical quantities",
"Soft matter",
"Thermodynamic entropy",
"Colloids",
"Surface science",
"Entropy",
"Condensed matter physics",
"Statistical mechanics"
] |
39,639,420 | https://en.wikipedia.org/wiki/Offshore%20geotechnical%20engineering | Offshore geotechnical engineering is a sub-field of geotechnical engineering. It is concerned with foundation design, construction, maintenance and decommissioning for human-made structures in the sea. Oil platforms, artificial islands and submarine pipelines are examples of such structures. The seabed has to be able to withstand the weight of these structures and the applied loads. Geohazards must also be taken into account. The need for offshore developments stems from a gradual depletion of hydrocarbon reserves onshore or near the coastlines, as new fields are being developed at greater distances offshore and in deeper water, with a corresponding adaptation of the offshore site investigations. Today, there are more than 7,000 offshore platforms operating at a water depth up to and exceeding 2000 m. A typical field development extends over tens of square kilometers, and may comprise several fixed structures, infield flowlines with an export pipeline either to the shoreline or connected to a regional trunkline.
Differences between onshore and offshore geotechnical engineering
An offshore environment has several implications for geotechnical engineering. These include the following:
Ground improvement (on the seabed) and site investigation are expensive.
Soil conditions are unusual (e.g. presence of carbonates, shallow gas).
Offshore structures are tall, often extending well above their foundation.
Offshore structures typically have to contend with significant lateral loads (i.e. large moment loading relative to the weight of the structure).
Cyclic loading can be a major design issue.
Offshore structures are exposed to a wider range of geohazards.
The codes and technical standards are different from those used for onshore developments.
Design focuses on ultimate limit state as opposed to deformation.
Design modifications during construction are either unfeasible or very expensive.
The design life of these structures often ranges from 25 to 50 years.
The environmental and financial costs in case of failure can be higher.
The offshore environment
Offshore structures are exposed to various environmental loads: wind, waves, currents and, in cold oceans, sea ice and icebergs. Environmental loads act primarily in the horizontal direction, but also have a vertical component. Some of these loads get transmitted to the foundation (the seabed). Wind, wave and current regimes can be estimated from meteorological and oceanographic data, which are collectively referred to as metocean data. Earthquake-induced loading can also occur – they proceed in the opposite direction: from the foundation to the structure. Depending on location, other geohazards may also be an issue. All of these phenomena may affect the integrity or the serviceability of the structure and its foundation during its operational lifespan – they need to be taken into account in offshore design.
The nature of the soil
Following are some to the features characterizing the soil in an offshore environment:
The soil is made up of sediments, which are generally assumed to be in a saturated state – saline water fills in the pore space.
Marine sediments are composed of detrital material as well as remains of marine organisms, the latter making up calcareous soils.
Total sediment thickness varies on a regional scale – it is normally higher near the coastline than it is away from it, where it is also finer grained.
In places, the seabed can be devoid of sediment, due to strong bottom currents.
The consolidation state of the soil is either normally consolidated (due to slow sediment deposition), overconsolidated (in places, a relic of glaciation) or underconsolidated (due to high sediment input).
Metocean aspects
Wave forces induce motion of floating structures in all six degrees of freedom – they are a major design criterion for offshore structures. When a wave's orbital motion reaches the seabed, it induces sediment transport. This only occurs to a water depth of about 200 m, which is the commonly adopted boundary between shallow water and deep water. The reason is that the orbital motion only extends to a water depth that is half the wavelength, and the maximum possible wavelength is generally considered to be 400 m. In shallow water, waves may generate pore pressure build-up in the soil, which may lead to flow slide, and repeated impact on a platform may cause liquefaction and loss of support.
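The half-wavelength rule can be turned into a quick estimate of how deep a given swell disturbs the seabed, using the deep-water dispersion relation L = gT²/(2π); the wave periods below are arbitrary examples:

import math

g = 9.81                                # gravitational acceleration, m/s^2

def seabed_disturbance_depth(period_s):
    """Depth (m) to which wave orbital motion extends: half the
    deep-water wavelength L = g*T^2 / (2*pi)."""
    wavelength = g * period_s ** 2 / (2.0 * math.pi)
    return wavelength / 2.0

for T in [8.0, 12.0, 16.0]:             # typical storm-wave periods, s
    print(f"T = {T:4.1f} s -> orbital motion felt to ~{seabed_disturbance_depth(T):5.1f} m")

A 16 s storm swell corresponds to a roughly 400 m wavelength and thus disturbs the seabed down to about 200 m, consistent with the shallow/deep water boundary quoted above.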
Currents are a source of horizontal loading for offshore structures. Because of the Bernoulli effect, they may also exert upward or downward forces on structural surfaces and can induce the vibration of wire lines and pipelines. Currents are responsible for eddies around a structure, which cause scouring and erosion of the soil. There are various types of currents: oceanic circulation, geostrophic, tidal, wind-driven, and density currents.
Geohazards
Geohazards are associated with geological activity, geotechnical features and environmental conditions. Shallow geohazards are those occurring at shallow depths below the seafloor. Information on the potential risks associated with these phenomena is acquired through studies of the geomorphology, geological setting and tectonic framework in the area of interest, as well as with geophysical and geotechnical surveys of the seafloor. Examples of potential threats include tsunamis, landslides, active faults, mud diapirs and the nature of the soil layering (presence of karst, gas hydrates, carbonates). In cold regions, gouging ice features are a threat to subsea installations, such as pipelines. The risk associated with a particular type of geohazard is a function of how exposed the structure is to the event, how severe this event is and how often it occurs (for episodic events). Any threat has to be monitored, and mitigated for or removed.
Site investigation
Offshore site investigations are not unlike those conducted onshore (see Geotechnical investigation). They may be divided into three phases:
A desk study, which includes data compilation.
Geophysical surveys, either shallow and deep seabed penetration.
Geotechnical surveys, which includes sampling/drilling and in situ testing.
Desk study
In this phase, which may take place over a period of several months (depending on project size), information is gathered from various sources, including reports, scientific literature (journal articles, conference proceedings) and databases, with the purpose of evaluating risks, assessing design options and planning the subsequent phases. Bathymetry, regional geology, potential geohazards, seabed obstacles and metocean data are some of the information that are sought after during that phase.
Geophysical surveys
Geophysical surveys can be used for various purposes. One is to study the bathymetry in the location of interest and to produce an image of the seafloor (irregularities, objects on the seabed, lateral variability, ice gouges, ...). Seismic refraction surveys can be done to obtain information on shallow seabed stratigraphy – it can also be used to locate material such as sand, sand deposit and gravel for use in the construction of artificial islands. Geophysical surveys are conducted from a research vessel equipped with sonar devices and related equipment, such as single-beam and multibeam echosounders, side-scan sonars, ‘towfish’ and remotely operated vehicles (ROVs). For the sub-bottom stratigraphy, the tools used include boomers, sparkers, pingers and chirp. Geophysical surveys are normally required before conducting the geotechnical surveys; in larger projects, these phases may be interwoven.
Geotechnical surveys
Geotechnical surveys involve a combination of sampling, drilling, in situ testing as well as laboratory soil testing that is conducted offshore and, with samples, onshore. They serve to ground truth the results of the geophysical investigations; they also provide a detailed account of the seabed stratigraphy and soil engineering properties. Depending on water depth and metocean conditions, geotechnical surveys may be conducted from a dedicated geotechnical drillship, a semi-submersible, a jackup rig, a large hovercraft or other means. They are done at a series of specific locations, while the vessel maintains a constant position. Dynamic positioning and mooring with four-point anchoring systems are used for that purpose.
Shallow penetration geotechnical surveys may include soil sampling of the seabed surface or in situ mechanical testing. They are used to generate information on the physical and mechanical properties of the seabed. They extend to the first few meters below the mudline. Surveys done to these depths, which may be conducted at the same time as the shallow geophysical survey, may suffice if the structure to be deployed at that location is relatively light. These surveys are also useful for planning subsea pipeline routes.
The purpose of deep penetration geotechnical surveys is to collect information on the seabed stratigraphy to depths extending up to a few hundred meters below the mudline. These surveys are done when larger structures are planned at these locations. Deep drill holes require a few days, during which the drilling unit has to remain in exactly the same position (see dynamic positioning).
Sampling and drilling
Seabed surface sampling can be done with a grab sampler and with a box corer. The latter provides undisturbed specimens, on which testing can be conducted, for instance, to determine the soil's relative density, water content and mechanical properties. Sampling can also be achieved with a tube corer, either gravity-driven, or that can be pushed into the seabed by a piston or by means of a vibration system (a device called a vibrocorer).
Drilling is another means of sampling the seabed. It is used to obtain a record of the seabed stratigraphy or the rock formations below it. The set-up used to sample an offshore structure's foundation is similar to that used by the oil industry to reach and delineate hydrocarbon reservoirs, with some differences in the types of testing. The drill string consists of a series of pipe segments screwed end to end, with a drillbit assembly at the bottom. As the dragbit (teeth extending downward from the drillbit) cuts into the soil, soil cuttings are produced. Viscous drilling mud flowing down the drillpipe collects these cuttings and carries them up outside the drillpipe. As is the case for onshore geotechnical surveys, different tools can be used for sampling the soil from a drill hole, notably "Shelby tubes", "piston samplers" and "split spoon samplers".
In situ soil testing
Information on the mechanical strength of the soil can be obtained in situ (from the seabed itself as opposed to in a laboratory from a soil sample). The advantage of this approach is that the data are obtained from soil that has not suffered any disturbance as a result of its relocation. Two of the most commonly used instruments used for that purpose are the cone penetrometer (CPT) and the shear vane.
The CPT is a rod-shaped tool whose end has the shape of a cone with a known apex angle (e.g. 60 degrees). As it is pushed into the soil, the resistance to penetration is measured, thereby providing an indication of soil strength. A sleeve behind the cone allows the independent determination of the frictional resistance. Some cones are also able to measure pore water pressure. The shear vane test is used to determine the undrained shear strength of soft to medium cohesive soils. This instrument usually consists of four plates welded at 90 degrees from each other at the end of a rod. The rod is then inserted into the soil and a torque is applied to it so as to achieve a constant rotation rate. The torque resistance is measured and an equation is then used to determine the undrained shear strength (and the residual strength), which takes into account the vane's size and geometry.
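The torque-to-strength conversion for the vane test follows from integrating the shear resistance over the vane's cylindrical failure surface. For a rectangular four-bladed vane of diameter D and height H shearing on the side surface and both ends, a commonly used relation is T = π·s_u·(D²H/2 + D³/6). A sketch (the vane dimensions and torque reading are arbitrary examples):

import math

def undrained_shear_strength(torque_Nm, D_m, H_m):
    """Invert T = pi * s_u * (D^2*H/2 + D^3/6) for the vane shear test.

    Assumes uniform strength mobilised on the cylindrical side surface
    and on both circular end surfaces of the sheared soil cylinder."""
    geometry = math.pi * (D_m**2 * H_m / 2.0 + D_m**3 / 6.0)
    return torque_Nm / geometry

# Example: a 65 mm x 130 mm field vane reading a peak torque of 30 N*m.
s_u = undrained_shear_strength(30.0, 0.065, 0.130)
print(f"undrained shear strength ~ {s_u / 1000.0:.1f} kPa")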
Offshore structures and geotechnical considerations
Offshore structures are mainly represented by platforms, notably jackup rigs, steel jacket structures and gravity-based structures. The nature of the seabed has to be taken into account when planning these developments. For instance, a gravity-based structure typically has a very large footprint and is relatively buoyant (because it encloses a large open volume). Under these circumstances, vertical loading of the foundation may not be as significant as the horizontal loads exerted by wave actions and transferred to the seabed. In that scenario, sliding could be the dominant mode of failure. A more specific example is that of the Woodside "North Rankin A" steel jacket structure offshore Australia. The shaft capacity for the piles making up each of the structure's legs was estimated on the basis of conventional design methods, notably when driven into siliceous sands. But the soil at that site was a lower capacity calcareous sand. Costly remediation measures were required to correct this oversight.
Proper seabed characterization is also required for mooring systems. For instance, the design and installation of suction piles has to take into account the soil properties, notably its undrained shear strength. The same is true for the installation and capacity assessment of plate anchors.
Submarine pipelines
Submarine pipelines are another common type of man-made structure in the offshore environment. These structures either rest on the seabed or are placed inside a trench to protect them from fishing trawlers, dragging anchors, or fatigue due to current-induced oscillations. Trenching is also used to protect pipelines from gouging by ice keels. In both cases, planning of the pipeline involves geotechnical considerations. Pipelines resting on the seabed require geotechnical data along the proposed pipeline route to evaluate potential stability issues, such as passive failure of the soil below the pipeline (the pipeline drops) due to insufficient bearing capacity, or sliding failure (the pipeline shifts sideways) due to low sliding resistance. The process of trenching, when required, needs to take into account soil properties and how they would affect ploughing duration. Buckling potential induced by the axial and transverse response of the buried pipeline during its operational lifespan needs to be assessed at the planning phase, and this will depend on the resistance of the enclosing soil.
Offshore embedded anchors
Offshore embedded anchors are anchors that derive their capacity from the frictional and/or bearing resistance of the surrounding soil. This is in contrast to gravity anchors, which derive their capacity from their weight. As offshore developments move into deeper waters, gravity-based structures become less economical due to their large required size and the cost of transportation, which makes embedded anchors an attractive alternative.
See also
Civil engineering
Earth materials
Floating wind turbine
Geohazard
Geotechnical engineering
Geotechnical investigation
Geotechnics
Ocean
Offshore construction
Offshore drilling
Offshore (hydrocarbons)
Oil platform
Seabed
Seabed gouging by ice
Sediment
Soil
Soil mechanics
Submarine pipeline
Subsea
Notes
References
Bibliography
Bai Y. and Bai Q. (2010) Subsea Engineering Handbook. Gulf Professional Publishing, New York, 919 pp.
Bransby M.F., Yun G.J. Morrow D.R. and Brunning P. (2005) The performance of pipeline ploughs in layered soils. In: S.C.M. Gourvenec (Editor), Frontiers in Offshore Geotechnics, Taylor & Francis, Perth, Australia, pp. 597–605.
Cathie D.N., Jaeck C., Ballard J.-C. and Wintgens J.-F. (2005) Pipeline geotechnics – state-of-the-art. In: S.C.M. Gourvenec (Editor), Frontiers in Offshore Geotechnics. Taylor & Francis, Perth, Australia, pp. 95–114.
Das B.M. (2010) Principles of geotechnical engineering, Cengage Learning, Stamfort, U.S.A., 666 p.
Dean E.T.R. (2010) Offshore Geotechnical Engineering – Principles and Practice, Thomas Telford, Reston, VA, U.S.A., 520 p.
Gerwick B.C., (2000) Construction of marine and offshore structures, CRC Press, Boca Raton, U.S.A., 657 p.
Hogan P., Lane A., Hooper J., Broughton A. and Romans B. (2008) Geohazard challenges of the Woodside OceanWay Secure Energy LNG development, offshore Southern California, Proceedings of the 40th Offshore Technology Conference (OTC), Paper OTC19563, Houston.
Kolk H.J. and Wegerif J. (2005) Offshore site investigations: new frontiers. In: S.C.M. Gourvenec (Editor), Frontiers in Offshore Geotechnics, Taylor & Francis, Perth, Australia, pp. 145–161.
Newson T.A., Bransby M.F., Brunning P. and Morrow D.R. (2004) Determination of undrained shear strength parameters for buried pipeline stability in deltaic soft clays, Proceedings of the 14th International Offshore and Polar Engineering Conference, The International Society of Offshore and Polar Engineers (ISOPE), Toulon, pp. 38–48.
Palmer, A.C. and Been K. (2011) Pipeline geohazards for Arctic conditions. In: W.O. McCarron (Editor), Deepwater Foundations and Pipeline Geomechanics, J. Ross Publishing, Fort Lauderdale, Florida, pp. 171–188.
Peuchen L.J. and Raap C., (2007) Logging, sampling and testing for offshore geohazards, Proceedings of the 39th Offshore Technology Conference (OTC), Paper 18664, Houston.
Ramakrishnan T.V. (2008). Offshore Engineering, Gene-Tech Books, New Delhi, India, 347 p.
Randolph M. and Gourvenec S. (2011) Offshore geotechnical engineering, Spon Press, N.Y., 550 p.
Younes A.I., Gibson J.L. and Shipp R.C. (2005) Geohazard assessment of the deepwater Princess field in the Northeastern Gulf of Mexico: Example of evaluating complex faulting in a subsea development, Proceedings of the 37th Offshore Technology Conference (OTC), Paper 17577, Houston.
Zhang J. and Erbrich C.T. (2005) Stability design of untrenched pipelines – geotechnical aspects. In: S.C.M. Gourvenec (Editor), Frontiers in Offshore Geotechnics, Taylor & Francis, Perth, Australia, pp. 623–628.
Geotechnical engineering
Oceanography
Articles containing video clips
Engineering education | Offshore geotechnical engineering | [
"Physics",
"Engineering",
"Environmental_science"
] | 3,821 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Offshore engineering",
"Oceanography",
"Geotechnical engineering",
"Construction",
"Civil engineering"
] |
39,644,075 | https://en.wikipedia.org/wiki/COSMO-RS | COSMO-RS (short for COnductor like Screening MOdel for Real Solvents) is a quantum chemistry based equilibrium thermodynamics method with the purpose of predicting chemical potentials μ in liquids.
It processes the screening charge density σ on the surface of molecules to calculate the chemical potential μ of each species in solution. As an initial step a quantum chemical COSMO calculation is performed for all molecules, and the results (e.g. the screening charge density) are stored in a database. In a separate step COSMO-RS uses the stored COSMO results to calculate the chemical potential of the molecules in a liquid solvent or mixture. The resulting chemical potentials are the basis for other thermodynamic equilibrium properties such as activity coefficients, solubility, partition coefficients, vapor pressure and free energy of solvation. The method was developed to provide a general prediction method with no need for system-specific adjustment.
Due to the use of σ from COSMO calculations, COSMO-RS does not require functional group parameters. Quantum chemical effects like group-group interactions, mesomeric effects and inductive effects also are incorporated into COSMO-RS by this approach.
The COSMO-RS method was first published in 1995 by A. Klamt. A refined version of COSMO-RS was published in 1998 and is the basis for newer developments and reimplementations.
Basic principles
The below description is a simplified overview of the COSMO-RS version published in 1998.
Assumptions
The liquid state is incompressible
All parts of the molecular surfaces can be in contact with each other
Only pairwise interactions of molecular surface patches are allowed
As long as the above assumptions hold, the chemical potential μ in solution can be calculated from the interaction energies of pairwise surface contacts.
COSMO-RS equations
Within the basic formulation of COSMO-RS, interaction terms depend on the screening charge density σ. Each molecule and mixture can be represented by the histogram p(σ), the so-called σ-profile. The σ-profile of a mixture is the weighted sum of the profiles of all its components.
Using the interaction energy E_int(σ,σ′) and the σ-profile of the solvent p_S(σ′), the chemical potential μ_S(σ) of a surface piece with screening charge σ is determined as

\mu_S(\sigma) = -\frac{RT}{a_{eff}} \ln\left[\int p_S(\sigma')\,\exp\!\left(\frac{a_{eff}}{RT}\big(\mu_S(\sigma') - E_{int}(\sigma,\sigma')\big)\right) d\sigma'\right]

Because μ_S(σ) is present on both sides of the equation, it needs to be solved iteratively.
By combining the above equation with the σ-profile p^X(σ) of a solute X, and adding the σ-independent combinatorial and dispersive contributions, the chemical potential of solute X in solvent S results in

\mu_S^X = \mu_{S,comb}^X + \mu_{disp}^X + \int p^X(\sigma)\,\mu_S(\sigma)\,d\sigma

In analogy to activity coefficient models used in chemical engineering, such as NRTL, UNIQUAC or UNIFAC, the final chemical potential can be split into a combinatorial and a residual (non-ideal) contribution. The interaction energies E_int(σ,σ′) of two surface pieces are the crucial part for the final performance of the method, and different formulations are used in the various implementations. In addition to the liquid-phase terms, a chemical potential estimate for the ideal gas phase, μ_gas, has been added to COSMO-RS to enable the prediction of vapor pressure, free energy of solvation and related quantities.
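The self-consistent σ-potential equation lends itself to simple fixed-point iteration. The sketch below is a toy implementation on a discretised σ-grid; the Gaussian σ-profile, the misfit-only interaction energy, the reduced units and all prefactors are assumptions chosen for illustration, not a validated COSMO-RS parameterisation:

import numpy as np

# Reduced units: energies in kT; the effective contact area is absorbed
# into the prefactor (toy model).
sigma = np.linspace(-1.0, 1.0, 101)        # dimensionless screening-charge grid
d_s = sigma[1] - sigma[0]
p = np.exp(-(sigma / 0.4) ** 2)            # assumed Gaussian sigma-profile
p /= p.sum() * d_s                         # normalise to unit area

alpha = 2.0                                # misfit prefactor in kT (assumed)
E = 0.5 * alpha * (sigma[:, None] + sigma[None, :]) ** 2   # E_int(s, s') / kT

mu = np.zeros_like(sigma)                  # initial guess for mu_S(sigma) / kT
for it in range(500):                      # damped fixed-point iteration
    integrand = p[None, :] * np.exp(mu[None, :] - E)
    mu_new = -np.log(integrand.sum(axis=1) * d_s)
    if np.max(np.abs(mu_new - mu)) < 1e-12:
        break
    mu = 0.5 * (mu + mu_new)

print(f"converged after {it} iterations; mu(sigma=0) = {mu[len(sigma) // 2]:.4f} kT")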
Interaction energy (Residual)
The residual part is the sum of three different contributions, where Emisfit and Ehb are part of Eint and Edisp is added directly to the chemical potential.
Electrostatic interaction
The misfit energy is

E_{misfit}(\sigma,\sigma') = \frac{\alpha}{2}(\sigma + \sigma')^2

where α is an adjustable parameter and σ and σ′ refer to the screening charge densities of the two surface patches in contact. This term has been labeled "misfit" energy, because it results from the mismatch of the charged surface pieces in contact.
It represents the Coulomb interaction relative to the state in a perfect conductor. A molecule in a perfect conductor (COSMO state) is perfectly shielded electronically; each charge on the molecular surface is shielded by a charge of the same size but of opposite sign. If the conductor is replaced by surface pieces of contacting molecules the screening of the surface will not be perfect any more. Hence an interaction energy from this misfit of σ on the surface patches will arise.
Hydrogen bonding energy
The hydrogen bonding energy is

E_{hb} = c_{hb}\,\min\!\left[0,\;\min(0,\,\sigma_{don} + \sigma_{hb})\cdot\max(0,\,\sigma_{acc} - \sigma_{hb})\right]

where σ_acc and σ_don are the screening charge densities of the hydrogen bond acceptor and donor, respectively. The hydrogen bonding threshold σ_hb and the prefactor c_hb are adjustable parameters. The max[] and min[] construction ensures that a hydrogen-bond contribution arises only when the screening charge densities of the acceptor and donor exceed the threshold for hydrogen bonding.
Dispersion (van der Waals energy)
The dispersion energy is

E_{disp} = \sum_k \gamma_k A_k

The COSMO-RS dispersion energy of a solute thus depends on an element-specific prefactor γ_k and the amount of exposed surface A_k of each element k. It is not part of the interaction energy but enters the chemical potential directly.
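These three contributions are easy to express directly in code. The functions below transcribe the expressions above; all parameter values are placeholders that would have to be fitted, as discussed in the next section:

def e_misfit(s1, s2, alpha=8000.0):
    """Electrostatic misfit energy of two contacting surface patches."""
    return 0.5 * alpha * (s1 + s2) ** 2

def e_hb(s_acc, s_don, c_hb=3000.0, s_hb=0.0085):
    """Hydrogen-bond energy; nonzero only beyond the threshold s_hb."""
    return c_hb * min(0.0, min(0.0, s_don + s_hb) * max(0.0, s_acc - s_hb))

def e_disp(exposed_area_by_element, gamma_by_element):
    """Dispersion term: element-specific prefactor times exposed area."""
    return sum(gamma_by_element[el] * a for el, a in exposed_area_by_element.items())

print(e_misfit(0.01, -0.002))
print(e_hb(s_acc=0.012, s_don=-0.011))
print(e_disp({"C": 120.0, "O": 25.0}, {"C": -0.04, "O": -0.06}))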
Parameters
Though the use of quantum chemistry reduces the need for adjustable parameters, some fitting to experimental data is inevitable. The basic parameters are α, chb, σhb as used in the interaction energies, and one general parameter for the effective contact area. In addition, one adjustable van der Waals parameter γ per element is required.
All parameters either are general or element specific, which is a distinctive feature of COSMO-RS as compared to group contribution methods like UNIFAC.
Implementations
The original streamline of COSMO-RS was continuously developed and extended by A. Klamt in his company COSMOlogic (now part of BIOVIA), and the most advanced software for COSMO-RS is the COSMOtherm software, now available from BIOVIA. They also offer a huge database (COSMObase) with more than 12000 COSMO files. COSMOtherm proved its prediction accuracy by delivering the most accurate physicochemical property predictions in the recent SAMPL5 and SAMPL6 challenges.
LVPP maintains an open sigma-profile database with COSMO-SAC ("Segment Activity Coefficient") parameterizations.
Gaussian (software) cannot compute σ-profiles, but can produce .cosmo input files for COSMO-RS/Cosmotherm via the keyword scrf=COSMORS.
SCM licenses a commercial COSMO-RS implementation in the Amsterdam Modeling Suite, which also includes COSMO-SAC, UNIFAC and QSPR models.
See also
UNIFAC
UNIQUAC
MOSCED
NRTL
References
Thermodynamic models
Computational chemistry
Articles containing video clips | COSMO-RS | [
"Physics",
"Chemistry"
] | 1,354 | [
"Thermodynamic models",
"Theoretical chemistry",
"Computational chemistry",
"Thermodynamics"
] |
39,645,198 | https://en.wikipedia.org/wiki/Liquid%20color%20measurement | The color measurement of a liquid is the evaluation of that liquid's color properties. This is usually done through visual means, but can also be done by through automated means. The former provides approximate data, while the latter can provide objective data on the color properties of any given liquid.
Measurement
Visual color measurement is the conventional and usual form of liquid color measurement. In this case the sample is held up to a series of color standards in order to see which standard the sample most closely resembles. This measurement is only approximate, but it is the less expensive method, as the only expense is the set of color standards to which the sample is matched. Its low cost makes it by far the most commonly used method.
Automated color measurement is a newer method of liquid color measurement. In this case the sample is contained in a test tube; the tube is inserted into the instrument, and the color properties of the liquid are read out on a screen. This method can provide objective measurements, but the instrument is far more expensive than a set of color standards, so it is used less frequently. There is also an automated method which compares a sample to its standard, likewise providing objective measurement.
References
Measuring instruments
Painting materials | Liquid color measurement | [
"Technology",
"Engineering"
] | 245 | [
"Measuring instruments"
] |
39,645,539 | https://en.wikipedia.org/wiki/Mycorestoration | Mycorestoration is the use of fungi to restore degraded environments. It is a multi-method approach to restore damaged habitats such as oil spill sites and logging roads, while also restoring the health of targeted forest sites that have been compromised in development. Mycorestoration is also used to control insect populations. It generally uses a four-tier approach of mycofiltration, mycoforestry, mycoremediation, and mycopesticides. The mycelia of a number of different gilled fungi are used in some of these applications.
References
Bioremediation
Mycology | Mycorestoration | [
"Chemistry",
"Biology",
"Environmental_science"
] | 121 | [
"Mycology",
"Biodegradation",
"Ecological techniques",
"Bioremediation",
"Environmental soil science"
] |
36,771,515 | https://en.wikipedia.org/wiki/Fluorescence%20intermittency%20in%20colloidal%20nanocrystals | Blinking colloidal nanocrystals is a phenomenon observed during studies of single colloidal nanocrystals that show that they randomly turn their photoluminescence on and off even under continuous light illumination.
This has also been described as luminescence intermittency.
Similar behavior has been observed in crystals made of other materials. For example, porous silicon also exhibits this effect.
Colloidal nanocrystals
Colloidal nanocrystals are a new class of optical materials that essentially constitute a new form of matter and can be considered "artificial atoms." Like atoms, they have discrete optical energy spectra that are tunable over a wide range of wavelengths. Their emission behavior correlates directly with their size: to change the emitted wavelength, the crystal is grown larger or smaller. Their electronic and optical properties can be controlled by this method. For example, to shift the emission from one visible wavelength to another, one simply uses a larger or smaller crystal. Such size tuning is not available in conventional bulk semiconductors such as gallium arsenide.
The nanocrystal size controls a widely tunable absorption band resulting in widely tunable emission spectra. This tunability combined with the optical stability of nanocrystals and the great chemical flexibility in the nanocrystal growth have resulted in the widespread nanocrystal applications in use today. Practical device applications range from low-threshold lasers to solar cells and biological imaging and tracking.
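The size dependence of the emission can be estimated with the Brus effective-mass model, in which quantum confinement adds a 1/R² term to the bulk band gap. The sketch below uses commonly quoted CdSe material constants (bulk gap, effective masses, dielectric constant), which should be treated as assumed illustrative values:

import math

h = 6.626e-34      # Planck constant, J s
e = 1.602e-19      # elementary charge, C
m0 = 9.109e-31     # electron rest mass, kg
c = 2.998e8        # speed of light, m/s
eps0 = 8.854e-12   # vacuum permittivity, F/m

def brus_gap_eV(radius_nm, Eg_bulk=1.74, me=0.13, mh=0.45, eps_r=10.6):
    """Brus effective-mass estimate of a spherical nanocrystal's band gap (eV)."""
    R = radius_nm * 1e-9
    confinement = h**2 / (8.0 * R**2) * (1.0 / (me * m0) + 1.0 / (mh * m0)) / e
    coulomb = 1.8 * e / (4.0 * math.pi * eps_r * eps0 * R)
    return Eg_bulk + confinement - coulomb

for r_nm in [1.5, 2.0, 3.0, 4.0]:
    gap = brus_gap_eV(r_nm)
    wavelength_nm = h * c / (gap * e) * 1e9
    print(f"R = {r_nm:.1f} nm -> gap {gap:.2f} eV, emission ~{wavelength_nm:.0f} nm")

With these inputs the predicted emission sweeps across the visible spectrum as the radius grows from roughly 2 to 4 nm, which is the size-tunability exploited in the applications above.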
Random behavior
Studies of single colloidal nanocrystals show that they randomly turn their photoluminescence on and off even under continuous light illumination.
This tends to hinder progress for engineers and scientists who study single colloidal nanocrystals and try to use their fluorescent properties for biological imaging or lasing.
The blinking in nanocrystals was first reported in 1996. The discovery was unexpected. The consensus is that blinking happens because illuminated nanocrystals can be charged (or ionized), and then neutralized. Under normal conditions when nanocrystal is neutral, a photon excites an electron-hole pair, which then recombines, emitting another photon and leading to photoluminescence. This process is called radiative recombination. If however, the nanocrystal is charged, the extra carrier triggers a process called non-radiative Auger recombination, where exciton energy is transferred to an extra electron or hole. Auger recombination occurs orders of magnitude faster than the radiative recombination. So photoluminescence is almost entirely suppressed in charged nanocrystals. Scientists still do not fully understand the origin of the charging and neutralization process. One of the photoexcited carriers (the electron or the hole) must be ejected from the nanocrystal. At some later time, the ejected charge returns to the nanocrystal (restoring charge neutrality and therefore radiative recombination). The details of how these processes occur still are not understood.
Solutions
Researchers are attempting to eliminate the problem of blinking nanocrystals. One common solution is to suppress nanocrystal ionization. This could be done, for example, by growing a very thick semiconductor shell around the nanocrystal core. However, blinking was reduced, not eliminated, because the fundamental process responsible for blinking – non-radiative Auger recombination – was still present.
Characterization
One method of study attempts to characterize the blinking behavior by studying single crystals or single quantum dots. A powerful microscope is employed along with video equipment. Another method uses ensembles or large quantities of quantum dots and develops statistical information.
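Single-dot blinking traces are often summarised by the statistics of their on and off dwell times, which experiments have found to follow heavy-tailed (power-law) distributions. The following toy simulation generates a two-state telegraph signal with Pareto-distributed dwell times purely for illustration; the exponent, time scales and seed are assumed values, not fitted data:

import random

def simulate_blinking(total_time=1000.0, t_min=0.01, alpha=1.5, seed=1):
    """Two-state (on/off) trace with power-law dwell times ~ t^-(1+alpha)."""
    rng = random.Random(seed)
    t, state, events = 0.0, True, []
    while t < total_time:
        dwell = t_min * rng.random() ** (-1.0 / alpha)   # Pareto sample
        events.append((t, state, dwell))
        t += dwell
        state = not state                                 # toggle on <-> off

    on_time = sum(d for _, s, d in events if s)
    return events, on_time / t

events, on_fraction = simulate_blinking()
print(f"{len(events)} switching events, on-fraction = {on_fraction:.2f}")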
References
External links
Nanoparticles
Condensed matter physics | Fluorescence intermittency in colloidal nanocrystals | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 776 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
36,772,327 | https://en.wikipedia.org/wiki/Search%20for%20the%20Higgs%20boson | The search for the Higgs boson was a 40-year effort by physicists to prove the existence or non-existence of the Higgs boson, first theorised in the 1960s. The Higgs boson was the last unobserved fundamental particle in the Standard Model of particle physics, and its discovery was described as being the "ultimate verification" of the Standard Model. In March 2013, the Higgs boson was officially confirmed to exist.
This confirmed answer proved the existence of the hypothetical Higgs field—a field of immense significance that is hypothesised as the source of electroweak symmetry breaking and the means by which elementary particles acquire mass. Symmetry breaking is considered proven but confirming exactly how this occurs in nature is a major unanswered question in physics. Proof of the Higgs field (by observing the associated particle) validates the final unconfirmed part of the Standard Model as essentially correct, avoiding the need for alternative sources for the Higgs mechanism. Evidence of its properties is likely to greatly affect human understanding of the universe and open up "new" physics beyond current theories.
Despite their importance, the search and the proof were extremely difficult and took decades, because direct production, detection and verification of the Higgs boson on the scale needed to confirm the discovery and learn its properties required a very large experimental project and huge computing resources. For this reason, most experiments until around 2011 aimed to exclude ranges of masses that the Higgs could not have. Ultimately the search led to the construction of the Large Hadron Collider (LHC) in Geneva, Switzerland, the largest particle accelerator in the world, designed especially for this and other high-energy tests of the Standard Model.
Background
The Higgs boson
The Higgs boson, sometimes called the Higgs particle, is an elementary particle in the Standard Model of particle physics produced by the quantum excitation of the Higgs field, one of the fields in particle physics theory. In the Standard Model, the Higgs particle is a massive scalar boson with zero spin, even (positive) parity, no electric charge, and no colour charge, that couples to (interacts with) mass. It is also very unstable, decaying into other particles almost immediately.
Experimental requirements
Like other massive particles (e.g. the top quark and W and Z bosons), Higgs bosons decay to other particles almost immediately, long before they can be observed directly. However, the Standard Model precisely predicts the possible modes of decay and their probabilities. This allows the creation and decay of a Higgs boson to be shown by careful examination of the decay products of collisions.
Therefore, although approaches to proving the Higgs were studied in early research from the 1960s, when the particle was proposed, large-scale experimental searches only commenced in the 1980s, with the opening of particle accelerators sufficiently powerful to provide evidence related to the Higgs boson.
Since the Higgs boson, if it existed, could have any mass in a very wide range, a number of very advanced facilities were eventually required for the search. These included very powerful particle accelerators and detectors (in order to create Higgs bosons and detect their decay, if possible), and the processing and analysis of vast amounts of data, requiring very large worldwide computing facilities. For example, over 300 trillion (3 × 10¹⁴) proton–proton collisions at the LHC were analysed in confirming the July 2012 particle's discovery, requiring the construction of the so-called LHC Computing Grid, the world's largest computing grid (as of 2012), comprising over 170 computing facilities in 36 countries. Experimental techniques included examination of a wide range of possible masses (often quoted in GeV) in order to gradually narrow down the search area and rule out masses where the Higgs was unlikely, statistical analysis, and the operation of multiple experiments and teams to see whether the results all agreed.
Experimental search and discovery of unknown boson
Early limits
During the early 1970s there were only a few constraints on the existence of the Higgs boson. The limits that did exist came from the absence of observed Higgs-related effects in nuclear physics, neutron stars, and neutron scattering experiments. This resulted in the conclusion that the Higgs—if it existed—was heavier than .
Early collider phenomenology
In the mid-1970s, the first studies exploring how the Higgs boson might show itself in particle collision experiments were published. However, the prospects of actually finding the particle were not very good; the authors of one of the first articles on Higgs phenomenology warned:
One of the problems was that at the time there was almost no clue to the mass of the Higgs boson. Theoretical considerations left open a very wide range somewhere between and with no real indication where to look.
Large Electron–Positron Collider
In the early planning studies for the Large Electron–Positron Collider (LEP) at CERN, the Higgs boson played no role. In fact, it does not appear to be mentioned in any of the reports until 1979. The first detailed study examining the possibilities of discovering the Higgs boson at LEP appeared in 1986. Thereafter the search for the Higgs boson became firmly established within the LEP program.
As its name implies, the Large Electron–Positron Collider collided electrons with positrons. The three most important ways in which such a collision could lead to the production of a Higgs boson (written schematically after the list) were:
The electron and the positron together produce a Z boson, which in turn decays to a Higgs boson and a pair of fermions.
The electron and the positron together produce a Z boson, which in turn radiates away a Higgs boson (Higgs-strahlung).
The electron and the positron exchange a W or Z boson, which along the way emits a Higgs boson.
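In conventional notation (a standard schematic, not quoted from the sources above), these three mechanisms are:

```latex
\begin{align}
  e^{+} e^{-} &\to Z \to H\, f\bar{f}                          && \text{(Z decay to a Higgs plus a fermion pair)} \\
  e^{+} e^{-} &\to Z^{*} \to Z H                                && \text{(Higgs-strahlung)} \\
  e^{+} e^{-} &\to \nu\bar{\nu}\, H \;\text{or}\; e^{+} e^{-} H && \text{($WW$/$ZZ$ fusion)}
\end{align}
```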
The fact that no decays of the Z boson to the Higgs were observed at LEP immediately implied that the Higgs boson, if it existed, must be heavier than the Z boson (~91 GeV). Subsequently, with each successive energy upgrade of the LEP, hope re-emerged that discovery of the Higgs was just around the corner. Just prior to the planned shutdown of LEP in 2000, a few events resembling a Higgs boson with a mass of ~115 GeV were observed. This led to the extension of the final LEP run by a few months. But in the end the data was inconclusive and insufficient to justify another run after the winter break, and the difficult decision was made to shut down and dismantle LEP in November 2000 to make room for the new Large Hadron Collider. The inconclusive results of the direct search for the Higgs boson at LEP resulted in a final lower bound on the Higgs mass of 114.4 GeV at the 95% confidence level.
In parallel to the direct search program, LEP made precision measurements of many observables of the weak interactions. These observables are sensitive to the value of the Higgs mass through contributions of processes containing loops of virtual Higgs bosons. This allowed, for the first time, an indirect estimate of the Higgs mass of about . This estimate, however, was subject to the condition that the Standard Model is all there is, with no physics beyond the Standard Model coming into play at these energy levels; new physical effects could potentially alter the estimate substantially.
Superconducting Super Collider
Planning for a new powerful collider to explore new physics at the >1 TeV scale had already started in 1983. The Superconducting Super Collider was to accelerate protons in an underground circular tunnel just outside Dallas, Texas to energies of each. One of the primary goals of this megaproject was finding the Higgs boson.
In preparation for this machine, extensive phenomenological studies were produced for the production of Higgs bosons in hadron colliders. The big downside of hadron colliders for the Higgs search is that they collide composite particles, and as a consequence produce many more background events and provide less information about the initial state of the collision. On the other hand, they provide a much higher centre-of-mass energy than lepton colliders (such as LEP) of a similar technological level. Hadron colliders also provide another way of producing a Higgs boson: through the collision of two gluons mediated by a triangle loop of heavy (top or bottom) quarks, as sketched below.
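In the same schematic notation (again a conventional sketch rather than a formula from the sources), the gluon-fusion channel reads:

```latex
\[
  g\, g \;\to\; H \qquad \text{(mediated by a top- or bottom-quark triangle loop)}
\]
```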
The Superconducting Super Collider project however was plagued by budget problems, and in 1993 Congress decided to pull the plug on the project, despite $2 billion having already been spent.
Tevatron
On 1 March 2001, the Tevatron proton–antiproton (pp̄) collider at Fermilab near Chicago commenced its Run 2. After Run 1 (1992–1996), in which the collider had discovered the top quark, the Tevatron had shut down for significant upgrades focused on improving the potential for finding the Higgs boson; the energies of the protons and antiprotons were increased to , and the number of collisions per second was increased by an order of magnitude (with further increases planned as the run continued). Even with the upgrades the Tevatron was not guaranteed to find the Higgs. If the Higgs were too heavy (>), then the collisions would not have enough energy to produce a Higgs boson. If it were too light (<), then the Higgs would predominantly decay to pairs of bottom quarks, a signal that would be swamped by background events, and the Tevatron would not produce enough collisions to achieve statistical significance. Nonetheless, the Tevatron was at the time the only operational particle collider sufficiently powerful to seek the Higgs particle.
Operation was planned to continue until the Tevatron could no longer keep up with the Large Hadron Collider. This point was reached on 30 September 2011, when the Tevatron was shut down. In their final analyses, the collaborations of the two detectors at the Tevatron (CDF and DØ) reported that, based on their data, they could exclude the possibility of a Higgs boson with a mass between and and between and at a 95% confidence level. In addition, they found an excess of events that could be from a Higgs boson in the range 115–. However, the statistical significance was deemed too low to base any conclusions on.
On 22 December 2011, the DØ collaboration also reported limitations on the Higgs boson within the Minimal Supersymmetric Standard Model (MSSM), an extension to the Standard Model. Proton–antiproton (pp̄) collisions with a centre-of-mass energy of 1.96 TeV had allowed them to set an upper limit for Higgs boson production within the MSSM ranging from 90 to 300 GeV, and to exclude tan β > 20–30 for masses of the Higgs boson below 180 GeV (tan β is the ratio of the two Higgs doublet vacuum expectation values).
Large Hadron Collider
Full operation at the LHC was delayed for 14 months from its initial successful tests, on 10 September 2008, until mid-November 2009, following a magnet quench event nine days after its inaugural tests that damaged over 50 superconducting magnets and contaminated the vacuum system. The quench was traced to a faulty electrical connection and repairs took several months; electrical fault detection and rapid quench-handling systems were also upgraded.
Data collection and analysis in search of the Higgs intensified from 30 March 2010, when the LHC began operating at 7 TeV. Preliminary results from the ATLAS and CMS experiments at the LHC as of July 2011 excluded a Standard Model Higgs boson in the mass ranges 155– and 149–, respectively, at 95% CL. All of the above confidence intervals were derived using the CLs method.
As of December 2011 the search had narrowed to the approximate region to 115–130 GeV, with a specific focus around 125 GeV, where both the ATLAS and CMS experiments had independently reported an excess of events, meaning that a higher than expected number of particle patterns compatible with the decay of a Higgs boson were detected in this energy range. The data was insufficient to show whether or not these excesses were due to background fluctuations (i.e. random chance or other causes), and its statistical significance was not large enough to draw conclusions yet or even formally to count as an "observation", but the fact that two independent experiments had both shown excesses at around the same mass led to considerable excitement in the particle physics community.
At the end of December 2011, it was therefore widely expected that the LHC would provide sufficient data to either exclude or confirm the existence of the Standard Model Higgs boson by the end of 2012, when their 2012 collision data (at energies of 8 TeV) had been examined.
Updates from the two LHC teams continued during the first part of 2012, with the tentative December 2011 data largely being confirmed and developed further. Updates were also available from the team analysing the final data from the Tevatron. All of these continued to highlight and narrow down the 125 GeV region as showing interesting features.
On 2 July 2012, the ATLAS collaboration published additional analyses of their 2011 data, excluding boson mass ranges of 111.4 GeV to 116.6 GeV, 119.4 GeV to 122.1 GeV, and 129.2 GeV to 541 GeV. They observed an excess of events corresponding to the Higgs boson mass hypotheses around 126 GeV with a local significance of 2.9 sigma. On the same date, the DØ and CDF collaborations announced further analysis that increased their confidence. The significance of the excesses at energies between 115 and 140 GeV was now quantified as 2.9 standard deviations, corresponding to a 1 in 550 probability of being due to a statistical fluctuation. However, this still fell short of the 5 sigma confidence level; the results of the LHC experiments were therefore necessary to establish a discovery. They excluded Higgs mass ranges at 100–103 and 147–180 GeV.
Discovery of new boson
On 22 June 2012 CERN announced an upcoming seminar covering tentative findings for 2012, and shortly afterwards rumours began to spread in the media that this would include a major announcement, but it was unclear whether this would be a stronger signal or a formal discovery. Speculation escalated to a "fevered" pitch when reports emerged that Peter Higgs, who proposed the particle, was to attend the seminar. On 4 July 2012 CMS announced the discovery of a previously unknown boson with mass 125.3 ± 0.6 GeV/c² and ATLAS of a boson with mass 126.5 GeV/c².
Using the combined analysis of two decay modes (known as 'channels'), both experiments reached a local significance of 5 sigma, or less than a one-in-a-million chance of a statistical fluctuation being that strong. When additional channels were taken into account, the CMS significance was 4.9 sigma.
The two teams had been working independently of each other, meaning they did not discuss their results with each other, providing additional certainty that any common finding was a genuine validation of a particle. This level of evidence, confirmed independently by two separate teams and experiments, meets the formal level of proof required to announce a confirmed discovery of a new particle. CERN was cautious, stating only that the new particle was "consistent with" the Higgs boson, and that scientists had not yet positively identified it as the Higgs boson, pending further data collection and analysis.
On 31 July, the ATLAS collaboration presented further data analysis, including a third channel. They improved the significance to 5.9 sigma and described it as an "observation of a new particle" with mass . CMS also improved its significance to 5 sigma, with the boson's mass at .
On 14 March 2013 CERN confirmed that:
"CMS and ATLAS have compared a number of options for the spin-parity of this particle, and these all prefer no spin and even parity [two fundamental criteria of a Higgs boson consistent with the Standard Model]. This, coupled with the measured interactions of the new particle with other particles, strongly indicates that it is a Higgs boson."
Events in 2012
2012 (post-discovery)
In 2012, observations were considered consistent with the observed particle being the Standard Model Higgs boson. The particle decays into at least some of the predicted channels. Moreover, the production rates and branching ratios for the observed channels match the predictions by the Standard Model within the experimental uncertainties. However, the experimental uncertainties still left room for alternative explanations. It was therefore considered too early to conclude that the found particle was indeed the Standard Model Higgs boson.
Further confirmation required more precise data on some of the characteristics of the new particle, including its other decay channels and various quantum numbers such as its parity. To allow for further data gathering, the LHC proton–proton collision run was extended by seven weeks, postponing the planned long shutdown for upgrades in 2013.
In November 2012, at a conference in Tokyo, researchers said evidence gathered since July was falling into line with the basic Standard Model more than with its alternatives, with a range of results for several interactions matching that theory's predictions. Physicist Matt Strassler highlighted "considerable" evidence that the new particle is not a pseudoscalar negative-parity particle (a required finding for a Higgs boson), "evaporation" or lack of increased significance for previous hints of non-Standard Model findings, expected Standard Model interactions with W and Z bosons, absence of "significant new implications" for or against supersymmetry, and in general no significant deviations to date from the results expected of a Standard Model Higgs boson. However, some kinds of extensions to the Standard Model would also show very similar results; based on other particles that are still being understood long after their discovery, it could take many years to know for sure, and decades to understand the particle that has been found.
Premature media reports of confirmation as a Higgs boson
In late 2012, Time, Forbes, Slate, NPR, and others announced incorrectly that the existence of the Higgs boson had been confirmed. Numerous statements by the discoverers at CERN and other experts since July 2012 had reiterated that a particle was discovered but it was not yet confirmed to be a Higgs boson. It was only in March 2013 that it was announced officially. This was followed by the making of a documentary film about the hunt.
Timeline of experimental evidence
All results refer to the Standard Model Higgs boson, unless otherwise stated.
2000–2004 – using data collected before 2000, in 2003–2004 Large Electron–Positron Collider experiments published papers which set a lower bound for the Higgs boson of 114.4 GeV at the 95% confidence level (CL), with a small number of events around 115 GeV.
July 2010 – data from CDF (Fermilab) and DØ (Tevatron) experiments exclude the Higgs boson in the range 158– at 95% CL.
24 April 2011 – media reports "rumors" of a find; these were debunked by May 2011. They had not been a hoax, but were based on unofficial, unreviewed results.
24 July 2011 – the LHC reported possible signs of the particle, the ATLAS Note concluding: "In the low mass range (c. 120–140 GeV) an excess of events with a significance of approximately 2.8 sigma above the background expectation is observed" and the BBC reporting that "interesting particle events at a mass of between 140 and 145 GeV" were found. These findings were repeated shortly thereafter by researchers at the Tevatron, with a spokesman stating that: "There are some intriguing things going on around a mass of 140 GeV." On 22 August 2011 it was reported that these anomalous results had become insignificant on the inclusion of more data from ATLAS and CMS and that the non-existence of the particle had been confirmed by LHC collisions to 95% certainty between 145 and 466 GeV (except for a few small islands around 250 GeV).
23–24 July 2011 – Preliminary LHC results exclude the ranges 155– (ATLAS) and 149– (CMS) at 95% CL.
27 July 2011 – preliminary CDF/DØ results extend the excluded range to 156– at 95% CL.
18 November 2011 – a combined analysis of ATLAS and CMS data further narrowed the window for the allowed values of the Higgs boson mass to 114–141 GeV.
13 December 2011 – experimental results were announced from the ATLAS and CMS experiments, indicating that if the Higgs boson exists, its mass is limited to the range 116–130 GeV (ATLAS) or 115–127 GeV (CMS), with other masses excluded at 95% CL. Observed excesses of events at around 124 GeV (CMS) and 125–126 GeV (ATLAS) are consistent with the presence of a Higgs boson signal, but also consistent with fluctuations in the background. The global statistical significances of the excesses are 1.9 sigma (CMS) and 2.6 sigma (ATLAS) after correction for the look elsewhere effect.
22 December 2011 – the DØ collaboration also sets limits on Higgs boson masses within the Minimal Supersymmetric Standard Model (an extension of the Standard Model), with an upper limit for production ranging from 90 to 300 GeV, and excluding tanβ>20–30 for Higgs boson masses below 180 GeV at 95% CL.
7 February 2012 – updating the December results, the ATLAS and CMS experiments constrain the Standard Model Higgs boson, if it exists, to the range 116–131 GeV and 115–127 GeV, respectively, with the same statistical significance as before.
7 March 2012 – the DØ and CDF collaborations announced that they found excesses that might be interpreted as coming from a Higgs boson with a mass in the region of 115 to in the full sample of data from Tevatron. The significance of the excesses is quantified as 2.2 standard deviations, corresponding to a 1 in 250 probability of being due to a statistical fluctuation. This is a lower significance, but consistent with and independent of the ATLAS and CMS data at the LHC. This new result also extends the range of Higgs-mass values excluded by the Tevatron experiments at 95% CL, which becomes 147-.
2 July 2012 – the ATLAS collaboration further analysed their 2011 data, excluding Higgs mass ranges of 111.4 GeV to 116.6 GeV, 119.4 GeV to 122.1 GeV, and 129.2 GeV to 541 GeV. An excess of events consistent with a Higgs boson near 126 GeV was observed with a local significance of 2.9 sigma. On the same day, the DØ and CDF collaborations also announced further analysis, increasing their confidence that the data between 115 and 140 GeV corresponds to a Higgs boson to 2.9 sigma, excluding mass ranges at 100–103 and 147–180 GeV.
4 July 2012 – the CMS collaboration announced the discovery of a boson with mass 125.3 ± 0.6 GeV/c² at a significance of 4.9 σ (up to 5 σ depending on the analysed channel), and the ATLAS collaboration a boson with mass of ~126.5 GeV/c².
31 July 2012 – the ATLAS collaboration further improved their analysis and announced the discovery of a boson with mass . CMS also improved the significance to 5 sigma, with the boson's mass at .
Statistical analysis
In 2012, the "5-sigma" criterion required by the scientists at the LHC, and its underlying frequentist interpretation of probability, triggered the interest of some statisticians, especially Bayesians: "five standard deviations, assuming normality, means a p-value of around 0.0000005 [...] Are the particle physics community completely wedded to frequentist analysis?". However, with the LHC research already far advanced, the discussion does not appear to have led to a Bayesian re-analysis of the data.
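As a hedged illustration of the conversion being debated, the one-sided tail p-value for a given significance under the normal approximation can be computed with SciPy; the sigma values below are those quoted earlier in this article, and the 2.9 sigma case reproduces the approximate "1 in 550" figure:

```python
# Convert a significance quoted in "sigma" into a one-sided tail p-value
# under the assumption of normality mentioned in the quotation above.
from scipy.stats import norm

for sigma in (2.9, 5.0, 5.9):
    p = norm.sf(sigma)  # one-sided upper-tail probability
    print(f"{sigma} sigma -> p = {p:.2e} (about 1 in {1 / p:,.0f})")
```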
Notes
References
Standard Model
History of physics | Search for the Higgs boson | [
"Physics"
] | 5,057 | [
"Standard Model",
"Particle physics"
] |
36,772,719 | https://en.wikipedia.org/wiki/Tengion | Tengion, Inc. was an American development-stage regenerative medicine company founded in 2003 with financing from J&J Development Corporation, HealthCap and Oak Investment Partners, headquartered in Winston-Salem, North Carolina. Its goals were discovering, developing, manufacturing and commercializing a range of replacement organs and tissues, or neo-organs and neo-tissues, to address unmet medical needs in urologic, renal, gastrointestinal, and vascular diseases and disorders. The company created these human neo-organs from a patient's own (autologous) cells, in conjunction with its Organ Regeneration Platform.
Tengion declared Chapter 7 bankruptcy in December 2014 and liquidated its assets. In March 2015 its assets, including tissue engineering samples, were bought back by its creditors and former executives in an expedited purchase. The new owners then formed the Winston-Salem-based RegenMedTX.
History
Founded in 2003 and formerly headquartered in East Norriton Township, Pennsylvania before moving to Winston-Salem, North Carolina in 2012, Tengion went public in 2010, after its stock had been approved for listing on the NASDAQ, through a $26 million IPO to help advance its research and development activities. Some of the groundbreaking regenerative medicine technologies of Dr. Anthony Atala, director of the Wake Forest Institute for Regenerative Medicine, formed the core from which those research and development activities grew.
On September 4, 2012, Tengion received a notice from NASDAQ stating that the company had not regained compliance with NASDAQ Listing Rule 5550(b)(1) and that its common stock would cease trading on the NASDAQ Capital Market effective on September 6, 2012, and would begin trading on the OTCQB tier of the OTC Marketplace. The company was bought by former executives and creditors after declaring bankruptcy in 2014.
Products
All of Tengion's regenerative medicine product candidates were investigational and would not become commercially available until the completion of clinical trials and the review and approval of associated marketing applications by the Food and Drug Administration.
Product candidates in clinical development
Its most advanced candidate was the Neo-Urinary Conduit. A Phase I clinical trial of the Tengion Neo-Urinary Conduit was completed at several health care institutions, in patients with bladder cancer requiring a total cystectomy. The trial ended in December 2014; however, information on the results has not been made publicly available.
References
Companies based in North Carolina
Biotechnology companies of the United States
Companies formerly listed on the Nasdaq
Transplantation medicine
Tissue engineering
Regenerative biomedicine
Stem cells
Biotechnology companies established in 2003
Companies traded over-the-counter in the United States
Life sciences industry
Companies that filed for Chapter 7 bankruptcy in 2014
2003 establishments in North Carolina
Biotechnology companies disestablished in 2015
2015 disestablishments in North Carolina
American companies established in 2003
American companies disestablished in 2015 | Tengion | [
"Chemistry",
"Engineering",
"Biology"
] | 611 | [
"Biological engineering",
"Life sciences industry",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
36,775,260 | https://en.wikipedia.org/wiki/Consoweld | Consoweld, or Consoweld Laminated Plastic, is a type of high-pressure laminate, similar to Formica decorative laminate. It was produced by the Consoweld Corporation (formerly Consolidated Papers, Inc.) of Wisconsin Rapids beginning in the 1950s. Consoweld laminate was originally developed during WWII for several different products, from gaskets to caskets. Starting in 1953, the company saw an opportunity in civilian applications, producing decorative laminate for partitions and countertops. Throughout the next several decades, Consoweld was a major producer of decorative laminate, on par with Formica, to the extent that Sears & Roebuck carried some of their products in its catalog. In 1996, the profitable company was sold to Libbey-Owens-Ford (subsequently Trinova Corporation, with a Sterling Engineered Products subsidiary).
Uses
Consoweld, in common with similar products like Formica, had a wide variety of uses. During WWII, it was used in aircraft construction, house building and electrical applications. Starting in the 1950s, the main use, however, was as a worktop, tabletop, and wall partition, both commercially and for residential homes (the decorative finishes and patterns being more colorful). Midcentury design often featured Consoweld Decorative Laminate throughout a residential home, from kitchen countertops & tabletops to wall panels and furniture surfaces. Businesses also used Consoweld Decorative Laminate for countertops in Diners and for office furniture surfaces.
Although Consoweld advertisements often emphasized commercial use, especially early on, they had a wide selection of colorful colors and patterns for the home. As with many of the laminate products in the 1950’s, their laminate fell out of favor in the 1990s and soon faded into history. Despite this event, Consoweld Decorative Laminate can still be found in use even to this day on countertops and furniture surfaces throughout the country, surviving the test of time.
In 2010, over 2,300 sq/ft of new old stock 1950s Consoweld and Formica laminate was unearthed in a warehouse and put up for sale, all or none for $10,000. A detailed article was written about this find by Retrorenovation, describing the fantastic find. Soon, this find also faded into history.
In 2022, with the resurgent popularity of retro laminate styles being reintroduced like WilsonArt’s Boomerang, the previously mentioned large quantity of vintage laminate was found again in Portland, Oregan. It was brought down to Redlands, Ca. by Redlands Salvage and is now being sold on eBay and Etsy.
References
External links
http://www.travelwisconsin.com/history-heritage/wisconsin-river-papermaking-museum-193114
Plastics
Plastics companies
Home improvement
Design | Consoweld | [
"Physics",
"Engineering"
] | 577 | [
"Amorphous solids",
"Design",
"Unsolved problems in physics",
"Plastics"
] |
52,555,162 | https://en.wikipedia.org/wiki/Metal%20vapor%20synthesis | In chemistry, metal vapor synthesis (MVS) is a method for preparing metal complexes by combining freshly produced metal atoms or small particles with ligands. In contrast to the high reactivity of such freshly produced metal atoms, bulk metals typically are unreactive toward neutral ligands. The method has been used to prepare compounds that cannot be prepared by traditional synthetic methods, e.g. Ti(η⁶-toluene)₂. The technique relies on a reactor that evaporates the metal, allowing the vapor to impinge on a cold reactor wall that is coated with the organic ligand. The metal evaporates upon being heated resistively or irradiated with an electron beam. The apparatus operates under high vacuum. In a common implementation, the metal vapor and the organic ligand are co-condensed at liquid nitrogen temperatures.
In several cases where compounds were first prepared by MVS, related preparations employing conventional routes were later developed. Thus, tris(butadiene)molybdenum was first prepared by co-condensation of butadiene and Mo vapor, but yields are higher for the reduction of molybdenum(V) chloride in the presence of the diene.
References
Chemical processes
Vacuum | Metal vapor synthesis | [
"Physics",
"Chemistry"
] | 246 | [
"Vacuum",
"Chemical processes",
"nan",
"Chemical process engineering",
"Matter"
] |
52,556,496 | https://en.wikipedia.org/wiki/Amyloid-related%20imaging%20abnormalities | Amyloid-related imaging abnormalities (ARIA) are abnormal differences seen in magnetic resonance imaging of the brain in patients with Alzheimer's disease. ARIA is associated with anti-amyloid drugs, particularly human monoclonal antibodies such as aducanumab. There are two types of ARIA: ARIA-E and ARIA-H. The phenomenon was first seen in trials of bapineuzumab.
ARIA-E
ARIA-E refers to cerebral edema, involving the breakdown of the tight endothelial junctions of the blood-brain barrier and subsequent accumulation of fluid. In a double-blind trial of the humanised monoclonal antibody solanezumab (n = 2042), sixteen patients (11 taking the drug, 5 taking a placebo), or 0.78%, developed ARIA-E. A further 7 patients developed ARIA-E during an open-label extension of the trial.
The effect of ARIA-E depends on the severity and location of the edema. Symptoms may include headache, changes in mental state, confusion, vomiting, nausea, tremor and gait disturbances.
ARIA-H
ARIA-H refers to cerebral microhaemorrhages (mH), small haemorrhages in the brain, often accompanied by hemosiderosis. mH usually appear as small, round, low-intensity lesions corresponding to haemosiderin deposits. Some studies define mH as being less than or equal to 10 mm across, while others use a cut-off of ≤ 5 mm. The prevalence of mH in healthy elderly people is approximately 6%, but this value increases to between 50% and 80% in elderly people with cerebrovascular disease.
Mechanism of action
Two non-exclusive mechanisms have been postulated. Firstly, in the context of aging and neurodegeneration, the integrity of the blood-brain barrier (BBB) can become compromised, resulting in increased permeability. Notably, amyloid plaques have been hypothesized to counteract this BBB leakage; upon the administration of antibodies, these plaques are targeted and eliminated, potentially unmasking micro-hemorrhages.
Secondly, an alternative perspective posits that the introduction of antibodies into the bloodstream triggers an immune-inflammatory response as part of the treatment regimen, and this immune reaction might itself precipitate micro-hemorrhages.
ARIA MRI Classification Criteria
References
External links
Anti-amyloid monoclonal antibodies
Neuroimaging
Magnetic resonance imaging | Amyloid-related imaging abnormalities | [
"Chemistry"
] | 521 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
52,558,881 | https://en.wikipedia.org/wiki/Tetens%20equation | The Tetens equation is an equation to calculate the saturation vapour pressure of water over liquid water and ice. It is named after its creator, O. Tetens, an early German meteorologist. He published his equation in 1930, and while the publication itself is rather obscure, the equation is widely known among meteorologists and climatologists because of its ease of use and relative accuracy at temperatures within the normal range of natural weather conditions.
The equation is structurally identical to the August-Roche-Magnus equation, but the coefficients differ.
Formula
Monteith and Unsworth (2008) provide Tetens' formula for temperatures above 0 °C:

$$e_s(T) = 0.61078\,\exp\!\left(\frac{17.27\,T}{T + 237.3}\right)$$

where the temperature T is in degrees Celsius (°C) and the saturation vapour pressure e_s is in kilopascals (kPa). According to Monteith and Unsworth, "Values of saturation vapour pressure from Tetens' formula are within 1 Pa of exact values up to 35 °C."
Murray (1967) provides Tetens' equation for temperatures below 0 °C:

$$e_s(T) = 0.61078\,\exp\!\left(\frac{21.875\,T}{T + 265.5}\right)$$
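A minimal implementation of both branches, assuming the standard published coefficients (0.61078 kPa with 17.27 and 237.3 for liquid water; 21.875 and 265.5 for ice), might look like this:

```python
import math

def tetens_es(t_celsius: float) -> float:
    """Saturation vapour pressure (kPa) from Tetens' formula."""
    if t_celsius >= 0.0:
        # Over liquid water (form given by Monteith and Unsworth, 2008)
        return 0.61078 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))
    # Over ice (coefficients as given by Murray, 1967)
    return 0.61078 * math.exp(21.875 * t_celsius / (t_celsius + 265.5))

print(tetens_es(20.0))   # ~2.34 kPa at 20 deg C
print(tetens_es(-10.0))  # ~0.26 kPa over ice at -10 deg C
```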
See also
Vapour pressure of water
Antoine equation
Arden Buck equation
Lee–Kesler method
Goff–Gratch equation
References
Meteorological concepts
Thermodynamic equations | Tetens equation | [
"Physics",
"Chemistry"
] | 241 | [
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics"
] |
52,559,072 | https://en.wikipedia.org/wiki/Natural%20Cycles | Natural Cycles is the company behind the Natural Cycles birth control app. The app was the first to be certified as a contraceptive in the European Union and in August 2018 the Food and Drug Administration approved U.S. marketing for the contraceptive app as a Class II medical device. It remains the only digital form of birth control on the market in the United States and has also received regulatory clearances from Canada (Health Canada), Australia (TGA), Singapore (HSA), and South Korea (MFDS).
The Stockholm-based company, which was co-founded by particle physicists Dr. Elina Berglund Scherwitzl and Dr. Raoul Scherwitzl, has received $100 million in funding.
History
Berglund was a physicist partly based at CERN, collaborating with the team who discovered the Higgs boson, before co-founding the company with her husband Scherwitzl. Because the couple was in search of an alternative natural contraceptive themselves, Berglund used data analysis to develop an algorithm designed to pinpoint her ovulation.
The couple then decided to create an app with the underlying algorithm, Natural Cycles. Following several medical trials, the app became the first tech-based device to be certified for use as contraception in the European Union in February 2017 by the European inspection and certification organisation TÜV SÜD. In November 2017 Natural Cycles received a $30M investment in series B round led by EQT Ventures fund, with participation from existing investors Sunstone, E-ventures and Bonnier Growth Media (the VC arm of privately held Swedish media group, the Bonnier Group).
While the app was initially only certified in the European Union, where its users were concentrated in the United Kingdom and the Scandinavian countries, it is available worldwide. Natural Cycles offers a subscription product, which had over 800,000 users across 160 countries as of June 2018. 75 percent use the app as a contraceptive, and the rest use it to try to become pregnant.
The app works by having users take their temperature each morning immediately after waking and log it in the app, using a basal thermometer. The app's algorithm is based on the observation that, post-ovulation, progesterone warms the female body by up to 0.45 °C. The algorithm then determines, from the temperature readings, whether the user is fertile on a given day. A red day means fertile (when one should abstain or use a condom); a green day means not fertile. For the app to remain effective, women need to follow its instructions correctly, and it does not protect users from sexually transmitted infections.
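Natural Cycles' actual algorithm is proprietary and unpublished; purely as an illustration of the temperature-shift idea described above, a toy classifier (every threshold and window length here is hypothetical) might look like:

```python
def classify_day(temps):
    """temps: daily basal body temperatures in deg C, oldest first.
    Returns 'green' (infertile) or 'red' (treat as fertile)."""
    if len(temps) < 9:
        return "red"                            # too little data: stay conservative
    baseline = sum(temps[-9:-3]) / 6.0          # mean of six pre-shift days
    recent = temps[-3:]                         # three most recent readings
    sustained_shift = all(t >= baseline + 0.2 for t in recent)
    return "green" if sustained_shift else "red"

# Example: a sustained ~0.4 deg C rise over the last three days -> "green"
print(classify_day([36.4, 36.5, 36.4, 36.5, 36.4, 36.5, 36.8, 36.9, 36.9]))
```

A real contraceptive algorithm must, of course, handle measurement noise, cycle irregularity, and uncertainty far more conservatively than this sketch does.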
In 2019, the company completed a pilot program in Sweden that tested a feature to help women trying to get pregnant determine if they should seek fertility help. A new mode also became available in 2019 that helps users monitor pregnancy.
Research
Studies carried out by the app's creators have found it to be as effective in preventing pregnancies as the contraceptive pill under typical use (under perfect use, Natural Cycles' effectiveness was lower than the contraceptive pill's). These studies, however, only considered women who were paying members and within the age range 20–35.
Criticism
In 2018, Södersjukhuset, a hospital in Stockholm, Sweden, filed a complaint with the Medical Products Agency of Sweden after 37 women who had been using Natural Cycles as their primary method of contraception sought an abortion at the hospital after becoming pregnant unintentionally. Natural Cycles responded by saying the number of pregnancies was within the reported effectiveness rates. In the UK, the app came under investigation by the Advertising Standards Authority over supposedly misleading claims in its marketing; the complaint was upheld by the ASA in August 2018, concluding that the app misled consumers regarding being "highly accurate" and a "clinically tested alternative to birth control". A number of users and healthcare professionals have expressed concerns over the efficacy of the app.
In August 2018, Lauren Streicher, professor of clinical obstetrics at Northwestern University's Feinberg School of Medicine expressed concerns over the Food and Drug Administration's approval of the app. Streicher has claimed that the app is "problematic" as it relies on users' self-reported temperatures which must be taken as soon as they wake up each morning in order to be accurate. In an interview with Vox, Streicher claimed "The minute you rely on action, the efficacy goes down."
Natural Cycles has also been criticised for its marketing strategy of paying social media influencers to promote the app. In July 2018 researchers at the London School of Hygiene and Tropical Medicine published a study which claimed "Natural Cycles' marketing materials ought to be entirely transparent, more clear than they currently are about the limitations of their app and pregnancy risks".
See also
Calendar-based contraceptive methods
Natural family planning
References
Birth control
Women's health
Fertility awareness
Medical software | Natural Cycles | [
"Biology"
] | 1,010 | [
"Medical software",
"Medical technology"
] |