2,123,965
https://en.wikipedia.org/wiki/Electrical%20wiring%20in%20Hong%20Kong
In Hong Kong, the main electrical wiring system used is BS 1363. In old buildings, the BS 546 system is also common. Due to its proximity to mainland China, electrical products from there are present in Hong Kong, especially as a result of cross-border purchases made by mainland Chinese immigrants. However, even if a product meets the safety requirements in China, it does not necessarily conform to the BS 1363 standard, and thus may have plug and socket compatibility issues when used in Hong Kong. Before the enforcement of the Electrical Products (Safety) Regulation, many types of plugs could be found in Hong Kong, and using this old equipment sometimes leads to plug and socket mating problems. Nowadays, virtually all plugs fitted to equipment sold in Hong Kong are BS 1363 compatible.

Case-specific analysis

Mainland China two-pin plugs: The two-pin plugs (with round or flat pins) used in mainland China are compatible with the British converter plugs sold in Hong Kong. For shavers and electric toothbrushes that draw less than 1 A of current, British shaver adaptors can also be used.

Mainland China three-pin plugs: Mainland China three-pin plugs cannot be converted with the British standard adaptors commonly sold in Hong Kong. One can replace the plug or the power cord as appropriate. The same applies to Australian three-pin plugs.

Schuko: Special attention must be paid to the earthing of Schuko plugs. British converter plugs for Schuko are occasionally found, but they are not common, so plug replacement is often necessary. This problem is common for special equipment in optical shops and among residents of European origin.

Old plugs: There are old electrical plugs that can be inserted into BS 1363 or BS 546 sockets but do not meet the British Standards. The markings "BS 1363" or "BS 546" are absent from such quasi-UK plugs. Some of them lack the BS 1362 plug fuse, or their pins lack insulating sleeves, among other problems.

Old sockets: BS 546 sockets are incompatible with BS 1363 plugs. Compliant adaptors are not available to convert a BS 546 5 A socket to a BS 1363 13 A one.

See also: BS 1363; Common electrical adaptors in Hong Kong and the United Kingdom; Domestic AC power plugs and sockets; Double insulated; Extension cable; Ground and neutral; Electrical wiring (UK); Technical standards in colonial Hong Kong

References

Mains power connectors; Electrical standards; Science and technology in Hong Kong; Electrical wiring; Standards of the People's Republic of China
Electrical wiring in Hong Kong
[ "Physics", "Engineering" ]
536
[ "Electrical standards", "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
2,124,555
https://en.wikipedia.org/wiki/Reversible%20inhibition%20of%20sperm%20under%20guidance
Reversible inhibition of sperm under guidance (RISUG), formerly referred to as the synthetic polymer styrene maleic anhydride (SMA), is the development name of a male contraceptive injection developed at IIT Kharagpur in India by the team of Dr. Sujoy K. Guha. RISUG has been patented in India, China, Bangladesh, and the United States. Phase III clinical trials in India were slowed by insufficient volunteers, and the results were published in 2023. Beginning in 2011, a contraceptive product based on RISUG, Vasalgel, was under development in the US by the Parsemus Foundation, who were unable to bring the product to market over the next decade. In 2023, the patent for Vasalgel was acquired by NEXT Life Sciences, which plans to bring the technology to market under the name Plan A for Men. Development Sujoy K. Guha developed RISUG after years of developing other inventions. He originally wanted to create an artificial heart that could pump blood using a strong electrical pulse. Using the 13-chamber model of a cockroach heart, he designed a softer pumping mechanism that would theoretically be safe to use in humans. As India's population grew throughout the 1970s, Guha modified his heart pump design to create a water pump that could work off of differences in ionic charges between salt water and fresh water in water treatment facilities. This filtration system did not require electricity and could potentially help large groups of people have access to clean water. India, however, decided that the population problem would be better served by developing more effective contraception. So Guha again modified his design to work safely inside the body, specifically inside the male genitalia. The non-toxic polymer of RISUG also uses differences in the charges of the semen to rupture the sperm as it flows through the vas deferens. Intellectual property rights to RISUG in the United States were acquired between 2010 and 2012 by the Parsemus Foundation, a not-for-profit organization, which has branded it as "Vasalgel". Vasalgel, which has a slightly different formulation than RISUG, underwent animal trials in the United States, but reversibility proved unsuccessful. Mechanisms RISUG works by an injection into the vas deferens, the vessel through which the sperm moves before ejaculation. RISUG is similar to vasectomy in that a local anesthetic is administered, an incision is made in the scrotum, and the vasa deferentia are injected with a polymer gel (rather than being cut and cauterized). In a matter of minutes, the injection coats the walls of the vasa with a clear gel made of 60 mg of the copolymer styrene/maleic anhydride (SMA) with 120 μL of the solvent dimethyl sulfoxide. The copolymer is made by irradiation of the two monomers with a dose of 0.2 to 0.24 megarad for every 40 g of copolymer and a dose rate of 30 to 40 rad/s. Dr Pradeep K. Jha, a senior scientist, worked on the effects of gamma dose rate and total dose interrelation on molecular designing and biological function of polymer. The source of irradiation is cobalt-60 gamma radiation. The effect the chemical has on sperm is not completely understood. Originally, researchers thought it lowered the pH of the environment enough to kill the sperm. Guha theorizes that the polymer surface has a negative and positive electric charge mosaic. Within an hour after placement the differential charge from the gel will rupture the sperm's cell membrane as it passes through the vas, deactivating it before it can exit from the body. 
Safety The thoroughness of carcinogenicity and toxicity testing in clinical trials had been questioned after phase I of clinical trials on the basis of presence of styrene and maleic anhydride in the formulation. In response, Guha argued that substances can be individually toxic in nature but harmless as compounds like pure chlorine, which can melt human flesh on its own, but, when combined with sodium, it becomes sodium chloride – the basic salt that people consume in their diets. When it did not persuade ICMR and the clinical trials did not resume by 1996, he went to the Supreme court and the next round of clinical trials resumed afterwards. In October 2002, India's Ministry of Health aborted the clinical trials due to reports of albumin in urine and scrotal swelling in phase III trial participants. Although the ICMR has reviewed and approved the toxicology data three times, WHO and Indian researchers say that the studies were not done according to recent international standards. Due to the lack of any evidence for adverse effects, trials were restarted in 2011. Guha says concerns over the safety and efficacy of the drug have mainly come from the NIH and WHO. Availability and marketing By November 2019, the ICMR had successfully completed clinical trials of the world's first injectable male contraceptive, which was then sent to the Drug Controller General of India (DCGI) for regulatory approval. The trials were over, including extended, phase III clinical trials, for which 303 candidates were recruited with 97.3% success rate and no reported side effects. In the developed world, the average time taken for a drug to go from concept to market is 10 to 15 years, whereas, it has been over four decades since Guha published his original paper on RISUG. RISUG aims to provide males with years-long fertility control, thereby overcoming compliance problems and avoiding ongoing costs associated with condoms and the female birth control pill, which must be taken daily. Pharmaceutical companies have expressed little interest in RISUG. One obstacle facing marketing of the product is that men generally perceive contraception as a woman's issue. Men may choose not to use alternative methods of contraception because there are fewer options for birth control for them than there are for women, or they may fear the side effects, or it may conflict with their cultural or religious beliefs. However, the same study published that in the year 2000, an international survey found that 83% of men were willing to use a male contraceptive. Despite this, pharmaceutical companies are reluctant to lose market share of a thriving global market for female contraceptives and condoms which bring billions of dollars of revenue each year. Initially, RISUG attracted some interest from pharmaceutical companies. However, considering that RISUG is an inexpensive, one-time procedure, manufacturers retracted. Smart RISUG Smart RISUG is a newer version of the male contraception that was published in 2009. The polymer adds iron oxide and copper particles to the original compound, giving it magnetic properties and the name "Smart RISUG". After injection the exact location of the polymer inside the vas deferens can be measured and visualized by X-ray and magnetic resonance imaging. The polymer location can also be externally controlled using a pulsed magnetic field. With this magnetic field, the polymer can change location inside the body to maximize sterility or can be removed to restore fertility. 
The polymer has magnetoelastic behavior that allows it to stretch and elongate to better line the vas deferens. The iron oxide component is necessary to prevent agglomeration. With the presence of iron particles, the polymer has lower protein binding and therefore prevents agglomeration. The copper particles in the compound allow the polymer to conduct heat. When an external microwave applies heat to the polymer, it can liquify the polymer again to be excreted to restore fertility. Smart RISUG is therefore a better choice for men who want to use RISUG as temporary birth control, since it does not require a second surgery to restore fertility. The addition of metal ions also increases the effectiveness of the spermicide. The low frequency electromagnetic field disintegrates the sperm cell membrane in the head region. This in turn causes both acrosin and hyaluronidase enzymes to leak out of the sperm, making the sperm infertile. The safety of Smart RISUG is uncertain and requires additional research. The spermicidal properties of the compound should not have negative effects on the lining of the vas deferens. Albino rats used to develop the new polymer did not have any adverse symptoms. The original compound had been tested for over 25 years in rats. References External links ICMR Website ICMR 2004 Annual Report Male genital procedures Experimental methods of birth control Contraception for males Biological engineering
Reversible inhibition of sperm under guidance
[ "Engineering", "Biology" ]
1,782
[ "Biological engineering" ]
2,125,904
https://en.wikipedia.org/wiki/Cadarache
Cadarache is the largest technological research and development centre for energy in Europe. It includes the CEA research activities and ITER. CEA Cadarache is one of the 10 research centres of the French Commission of Atomic and Alternative Energies. Established in the French département Bouches-du-Rhône, close to the village Saint-Paul-lès-Durance. CEA Cadarache, created in 1959, is located about 40 kilometres from Aix-en-Provence, approximately 60 kilometres (37 mi) north-east of the city of Marseille and stands near the borders of three other départements: the Alpes de Haute-Provence, the Var and the Vaucluse. It is one of the major sources of employment in the Provence-Alpes-Côte d'Azur region (PACA) and has one of the heaviest concentrations of specialised scientific staff. Cadarache began its research activities when President Charles de Gaulle launched France's atomic energy program in 1959. The centre is operated by the Commissariat à l'Énergie Atomique et aux énergies alternatives (CEA, en: Atomic Energy and Alternative Energy Commission). In 2005, Cadarache was selected to be the site of the International Thermonuclear Experimental Reactor (ITER), the world's largest nuclear fusion reactor. Construction of the ITER complex began in 2007, and it is projected to begin plasma-generating operations in the 2020s. Cadarache also plays host to a number of research reactors, such as the Jules Horowitz Reactor, which is expected to enter operation around 2030. Facilities The Cadarache center is the largest energy research site in Europe, hosting 19 Basic Nuclear Installations (BNI) and a secret BNI, including reactors, waste stockpiling and recycling facilities, bio-technology facilities and solar platforms. It employs over 5,000 people, and approximately 700 students and foreign collaborators carry out research in the facility's laboratories. ITER, the experimental nuclear fusion tokamak, is currently under construction at Cadarache and is expected to create its first plasma by 2025. When it becomes operational, ITER is hoped to be the first large-scale fusion reactor to produce more energy than is used to initiate its fusion reactions. Other nuclear installations at Cadarache include the Tore Supra tokamak – a predecessor to ITER – and the Jules Horowitz Reactor, a 100-megawatt research reactor which is planned to begin operation in 2020. Activities Numerous nuclear research activities are conducted at Cadarache, including mixed-oxide fuel (MOX) production, nuclear propulsion and fission reactor prototyping, nuclear fusion research and research into new forms of fission fuel. Nuclear waste is also treated and recycled at the site. Notable incidents A number of accidents, of varying severity, have occurred at Cadarache since its inception. Several incidents are listed below. 31 March 1994: A sodium explosion took place while the Rapsodie experimental reactor was being dismantled. The explosion was classified as a Class 2 incident by the ASN. 25 September 1998: A sodium fire occurred in a non-nuclear test facility, but caused no significant damage. 2 November 2004: A fire broke out, but caused no radioactive contamination. 6 November 2006: A fault in the equipment used to weigh MOX led to a grinder being loaded with more than the authorized amount of fissile material, presenting the possible threat of a spontaneous nuclear reaction. Initially classified as a low-priority Class-1 level incident, it was subsequently revised to Class 2 by the ASN. 
1 October 2008 : A fire broke out in a non-nuclear installation. 6 October 2009: A higher quantity of plutonium than authorized was uncovered in the Plutonium Technology Workshop. The ASN classified the incident as Class 2, and suspended dismantling work on the workshop. After investigation, it was revealed that the cause of the incident was the accumulation between 1966 and 2004 of fine plutonium dust in the 450 glove boxes in the workshop. Following the incident, further inspections revealed that another installation, STAR, also showed quantities of plutonium in excess of the authorized amounts. The incident was classified level 1 by the ASN. Seismological risk Cadarache is situated on the Aix-en-Provence-Durance seismological fault, and lies close to another fault, Trévaresse. The Aix-Durance fault caused France's worst recorded earthquake in 1909. In a 2000 report, the ASN mandated the closure of six installations at Cadarache that did not meet aseismic construction standards; a similar report was issued by a French nuclear safety organization in 1994. By 2010, three of these had been shut down, with the remaining three to be shut down by 2015. See also National Ignition Facility Nuclear energy in France List of satellite map images with missing or unclear data References External links Cadarache ITER website (in French) Buildings and structures in Bouches-du-Rhône Fusion power Nuclear technology in France Nuclear history of France Nuclear research institutes Radioactive waste repositories Research institutes in France Military nuclear reactors Tokamaks Nuclear reprocessing sites
Cadarache
[ "Physics", "Chemistry", "Engineering" ]
1,059
[ "Nuclear research institutes", "Nuclear organizations", "Plasma physics", "Fusion power", "Nuclear fusion" ]
2,127,679
https://en.wikipedia.org/wiki/Number%20density
The number density (symbol: n or ρN) is an intensive quantity used to describe the degree of concentration of countable objects (particles, molecules, phonons, cells, galaxies, etc.) in physical space: three-dimensional volumetric number density, two-dimensional areal number density, or one-dimensional linear number density. Population density is an example of areal number density. The term number concentration (symbol: lowercase n, or C, to avoid confusion with amount of substance indicated by uppercase N) is sometimes used in chemistry for the same quantity, particularly when comparing with other concentrations.

Definition

Volume number density is the number of specified objects per unit volume, $n = N/V$, where N is the total number of objects in a volume V. Here it is assumed that N is large enough that rounding of the count to the nearest integer does not introduce much of an error; however, V is chosen to be small enough that the resulting n does not depend much on the size or shape of the volume V because of large-scale features.

Area number density is the number of specified objects per unit area A: $\sigma = N/A$. Similarly, linear number density is the number of specified objects per unit length L: $\lambda = N/L$.

Column number density is a kind of areal density, the number or count of a substance per unit area, obtained by integrating the volumetric number density along a vertical path: $N_{\mathrm{col}} = \int n\,dz$. It is related to column mass density, with the volumetric number density replaced by the volume mass density.

Units

In SI units, number density is measured in m−3, although cm−3 is often used. However, these units are not quite practical when dealing with atoms or molecules of gases, liquids or solids at room temperature and atmospheric pressure, because the resulting numbers are extremely large (on the order of $10^{20}$). Using the number density of an ideal gas at 0 °C and 1 atm as a yardstick, $n_0 = 2.687\times10^{25}\ \mathrm{m^{-3}}$ is often introduced as a unit of number density, for any substance at any conditions (not necessarily limited to an ideal gas at 0 °C and 1 atm).

Usage

Using the number density as a function of spatial coordinates, the total number of objects N in the entire volume V can be calculated as $N = \int_V n(x,y,z)\,dV$, where dV = dx dy dz is a volume element. If each object possesses the same mass m0, the total mass m of all the objects in the volume V can be expressed as $m = m_0 \int_V n(x,y,z)\,dV$. Similar expressions are valid for electric charge or any other extensive quantity associated with countable objects. For example, replacing m with q (total charge) and m0 with q0 (charge of each object) in the above equation will lead to a correct expression for charge.

The number density of solute molecules in a solvent is sometimes called concentration, although usually concentration is expressed as a number of moles per unit volume (and thus called molar concentration).

Relation to other quantities

Molar concentration: For any substance, the number density can be expressed in terms of its amount concentration c (in mol/m3) as $n = N_{\mathrm A}\,c$, where $N_{\mathrm A}$ is the Avogadro constant. This is still true if the spatial dimension unit, metre, in both n and c is consistently replaced by any other spatial dimension unit, e.g. if n is in cm−3 and c is in mol/cm3, or if n is in L−1 and c is in mol/L, etc.

Mass density: For atoms or molecules of a well-defined molar mass M (in kg/mol), the number density can sometimes be expressed in terms of their mass density ρm (in kg/m3) as $n = N_{\mathrm A}\,\rho_m / M$. Note that the ratio M/NA is the mass of a single atom or molecule in kg.

Examples

The following table lists common examples of number densities at 0 °C and 1 atm, unless otherwise noted.
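As a concrete illustration of the relations $n = N_{\mathrm A} c$ and $n = N_{\mathrm A}\rho_m/M$ above, here is a minimal Python sketch; the water values used as inputs are rounded illustrative figures, not reference data.

```python
# Illustrative sketch of the number-density relations n = N_A * c and n = rho * N_A / M.
# The water values below are rounded, illustrative inputs, not authoritative data.

AVOGADRO = 6.022_140_76e23   # mol^-1

def number_density_from_molar(c_mol_per_m3: float) -> float:
    """Number density n (m^-3) from amount concentration c (mol/m^3)."""
    return AVOGADRO * c_mol_per_m3

def number_density_from_mass(rho_kg_per_m3: float, molar_mass_kg_per_mol: float) -> float:
    """Number density n (m^-3) from mass density rho (kg/m^3) and molar mass M (kg/mol)."""
    return rho_kg_per_m3 * AVOGADRO / molar_mass_kg_per_mol

if __name__ == "__main__":
    # Liquid water, roughly: rho ~ 1000 kg/m^3, M ~ 0.018 kg/mol
    n_water = number_density_from_mass(1000.0, 0.018)
    print(f"water molecules per m^3  ~ {n_water:.3e}")                     # ~3.3e28 m^-3
    # The same state expressed back as a molar concentration, c = n / N_A
    print(f"equivalent concentration ~ {n_water / AVOGADRO:.0f} mol/m^3")  # ~5.6e4 mol/m^3
```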
See also Columnar number density References and notes Density Population geography Scalar physical quantities Plasma parameters Concentration
Number density
[ "Physics", "Chemistry", "Mathematics" ]
774
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Mass", "Concentration", "Density", "Wikipedia categories named after physical quantities", "Matter" ]
2,127,941
https://en.wikipedia.org/wiki/IEC%2061850
IEC 61850 is an international standard defining communication protocols for intelligent electronic devices at electrical substations. It is a part of the International Electrotechnical Commission's (IEC) Technical Committee 57 reference architecture for electric power systems. The abstract data models defined in IEC 61850 can be mapped to a number of protocols. Current mappings in the standard are to Manufacturing Message Specification (MMS), GOOSE (Generic Object Oriented System Event) [see section 3, Terms and definitions, term 3.65 on page 14], SV (Sampled Values) or SMV (Sampled Measure Values), and soon to web services. In the previous version of the standard, GOOSE stood for "Generic Object Oriented Substation Event", but this old definition is still very common in IEC 61850 documentation. These protocols can run over TCP/IP networks or substation LANs using high speed switched Ethernet to obtain the necessary response times below four milliseconds for protective relaying. Standard documents IEC 61850 consists of the following parts: IEC TR 61850-1:2013 – Introduction and overview IEC TS 61850-2:2003 – Glossary IEC 61850-3:2013 – General requirements IEC 61850-4:2011 – System and project management IEC 61850-5:2013 – Communication requirements for functions and device models IEC 61850-6:2009 – Configuration language for communication in electrical substations related to IEDs IEC 61850-7-1:2011 – Basic communication structure – Principles and models IEC 61850-7-2:2010 – Basic communication structure – Abstract communication service interface (ACSI) IEC 61850-7-3:2010 – Basic communication structure – Common Data Classes IEC 61850-7-4:2010 – Basic communication structure – Compatible logical node classes and data classes IEC 61850-7-410:2012 – Basic communication structure – Hydroelectric power plants – Communication for monitoring and control IEC 61850-7-420:2009 – Basic communication structure – Distributed energy resources logical nodes IEC TR 61850-7-510:2012 – Basic communication structure – Hydroelectric power plants – Modelling concepts and guidelines IEC 61850-8-1:2011 – Specific communication service mapping (SCSM) – Mappings to MMS (ISO 9506-1 and ISO 9506-2) and to ISO/IEC 8802-3 IEC 61850-9-2:2011 – Specific communication service mapping (SCSM) – Sampled values over ISO/IEC 8802-3 IEC/IEEE 61850-9-3:2016 – Precision Time Protocol profile for power utility automation IEC 61850-10:2012 – Conformance testing IEC TS 61850-80-1:2016 – Guideline to exchanging information from a CDC-based data model using IEC 60870-5-101 or IEC 60870-5-104 IEC TR 61850-80-3:2015 – Mapping to web protocols – Requirements and technical choices IEC TS 61850-80-4:2016 – Translation from the COSEM object model (IEC 62056) to the IEC 61850 data model IEC TR 61850-90-1:2010 – Use of IEC 61850 for the communication between substations IEC TR 61850-90-2:2016 – Using IEC 61850 for communication between substations and control centres IEC TR 61850-90-3:2016 – Using IEC 61850 for condition monitoring diagnosis and analysis IEC TR 61850-90-4:2013 – Network engineering guidelines IEC TR 61850-90-5:2012 – Use of IEC 61850 to transmit synchrophasor information according to IEEE C37.118 IEC TR 61850-90-7:2013 – Object models for power converters in distributed energy resources (DER) systems IEC TR 61850-90-8:2016 – Object model for E-mobility IEC TR 61850-90-12:2015 – Wide area network engineering guidelines Features IEC 61850 features include: Data modelling – Primary process objects as well as protection and 
control functionality in the substation is modelled into different standard logical nodes which can be grouped under different logical devices. There are logical nodes for data/functions related to the logical device (LLN0) and physical device (LPHD). Reporting schemes – There are various reporting schemes (BRCB & URCB) for reporting data from server through a server-client relationship which can be triggered based on pre-defined trigger conditions. Fast transfer of events – Generic Substation Events (GSE) are defined for fast transfer of event data for a peer-to-peer communication mode. This is again subdivided into GOOSE & GSSE. Setting groups – The setting group control Blocks (SGCB) are defined to handle the setting groups so that user can switch to any active group according to the requirement. Sampled data transfer – Schemes are also defined to handle transfer of sampled values using Sampled Value Control blocks (SVCB) Commands – Various command types are also supported by IEC 61850 which include direct & select before operate (SBO) commands with normal and enhanced securities. Data storage – Substation Configuration Language (SCL) is defined for complete storage of configured data of the substation in a specific format. See also References External links Detailed Introduction to IEC 61850 IEC 61850 Technical Issues website UCA International Users Group IEC61850: A Protocol with Powerful Potential Smart High Voltage Substation Based on IEC 61850 Process Bus and IEEE 1588 Time Synchronization Test and evaluation system for multi-protocol sampled value protection schemes by Dr. David M.E. Ingram Electric power 61850 Smart grid
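The logical-device / logical-node structure described under Features can be pictured with a small Python sketch. This is a toy model for illustration only, not an IEC 61850 library or the standard's object model; apart from LLN0 and LPHD, which the text mentions, the names used here (Feeder1, Mod, PhyNam, IED-A) are assumed examples.

```python
# Toy illustration of the IEC 61850 modelling idea: data and functions are grouped
# into logical nodes, which are grouped under logical devices inside a physical IED.
# This is NOT an implementation of the standard; names other than LLN0/LPHD
# (which the text above mentions) are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class LogicalNode:
    name: str                      # e.g. "LLN0" (logical-device data) or "LPHD" (physical-device data)
    data_objects: dict = field(default_factory=dict)

@dataclass
class LogicalDevice:
    name: str
    nodes: list = field(default_factory=list)

    def node(self, name: str) -> LogicalNode:
        return next(n for n in self.nodes if n.name == name)

# A hypothetical logical device carrying the two "housekeeping" nodes named in the text.
ld = LogicalDevice(
    name="Feeder1",
    nodes=[
        LogicalNode("LLN0", {"Mod": "on"}),       # data/functions of the logical device
        LogicalNode("LPHD", {"PhyNam": "IED-A"}), # data of the physical device
    ],
)

print(ld.node("LPHD").data_objects["PhyNam"])   # -> IED-A
```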
IEC 61850
[ "Physics", "Technology", "Engineering" ]
1,184
[ "Physical quantities", "Computer standards", "IEC standards", "Power (physics)", "Electric power", "Electrical engineering" ]
2,128,068
https://en.wikipedia.org/wiki/Information%20flow%20%28information%20theory%29
Information flow in an information-theoretical context is the transfer of information from a variable x to a variable y in a given process. Not all flows are desirable; for example, a system should not leak any confidential information (in whole or in part) to public observers, since that violates privacy at the individual level and can cause major losses at the corporate level.

Introduction

Securing the data manipulated by computing systems has been a challenge for many years. Several methods to limit the disclosure of information exist today, such as access control lists, firewalls, and cryptography. However, although these methods do impose limits on the information that is released by a system, they provide no guarantees about information propagation. For example, access control lists of file systems prevent unauthorized file access, but they do not control how the data is used afterwards. Similarly, cryptography provides a means to exchange information privately across a non-secure channel, but no guarantees about the confidentiality of the data are given once it is decrypted.

In low-level information flow analysis, each variable is usually assigned a security level. The basic model comprises two distinct levels: low and high, meaning, respectively, publicly observable information and secret information. To ensure confidentiality, flowing information from high to low variables should not be allowed. On the other hand, to ensure integrity, flows to high variables should be restricted. More generally, the security levels can be viewed as a lattice with information flowing only upwards in the lattice. For example, considering two security levels low and high with low ⊑ high, flows from low to low, from high to high, and from low to high would be allowed, while flows from high to low would not. Throughout this article, the following notation is used: variable l (low) shall denote a publicly observable variable, and variable h (high) shall denote a secret variable, where low and high are the only two security levels in the lattice being considered.

Explicit flows and side channels

Information flows can be divided into two major categories. The simplest one is explicit flow, where some secret is explicitly leaked to a publicly observable variable. In the following example, the secret in the variable h flows into the publicly observable variable l.

var l, h
l := h

The other flows fall into the side channel category. For example, in the timing attack or in the power analysis attack, the system leaks information through, respectively, the time or power it takes to perform an action depending on a secret value. In the following example, the attacker can deduce whether the value of h is one or not by the time the program takes to finish:

var l, h
if h = 1 then
    (* do some time-consuming work *)
l := 0

Another side channel flow is the implicit information flow, which consists of leakage of information through the program control flow. The following program (implicitly) discloses the value of the secret variable h to the variable l. In this case, since the h variable is boolean, all the bits of the variable h are disclosed (at the end of the program, l will be 3 if h is true, and 42 otherwise).

var l, h
if h = true then
    l := 3
else
    l := 42

Non-interference

Non-interference is a policy that enforces that an attacker should not be able to distinguish two computations from their outputs if they only vary in their secret inputs. However, this policy is too strict to be usable in realistic programs.
The classic example is a password checker program that, in order to be useful, needs to disclose some secret information: whether the input password is correct or not (note that the information that an attacker learns in case the program rejects the password is that the attempted password is not the valid one). Information flow control A mechanism for information flow control is one that enforces information flow policies. Several methods to enforce information flow policies have been proposed. Run-time mechanisms that tag data with information flow labels have been employed at the operating system level and at the programming language level. Static program analyses have also been developed that ensure information flows within programs are in accordance with policies. Both static and dynamic analysis for current programming languages have been developed. However, dynamic analysis techniques cannot observe all execution paths, and therefore cannot be both sound and precise. In order to guarantee noninterference, they either terminate executions that might release sensitive information or they ignore updates that might leak information. A prominent way to enforce information flow policies in a program is through a security type system: that is, a type system that enforces security properties. In such a sound type system, if a program type-checks, it meets the flow policy and therefore contains no improper information flows. Security type system In a programming language augmented with a security type system every expression carries both a type (such as boolean, or integer) and a security label. Following is a simple security type system from that enforces non-interference. The notation means that the expression has type . Similarly, means that the command is typable in the security context . Well-typed commands include, for example, . Conversely, the program is ill-typed, as it will disclose the value of variable into . Note that the rule is a subsumption rule, which means that any command that is of security type can also be . For example, can be both and . This is called polymorphism in type theory. Similarly, the type of an expression that satisfies can be both and according to and respectively. Declassification As shown previously, non-interference policy is too strict for use in most real-world applications. Therefore, several approaches to allow controlled releases of information have been devised. Such approaches are called information declassification. Robust declassification requires that an active attacker may not manipulate the system in order to learn more secrets than what passive attackers already know. Information declassification constructs can be classified in four orthogonal dimensions: what information is released, who is authorized to access the information, where the information is released, and when the information is released. What A what declassification policy controls which information (partial or not) may be released to a publicly observable variable. The following code example shows a declassify construct from. In this code, the value of the variable h is explicitly allowed by the programmer to flow into the publicly observable variable l. var l, h if l = 1 then l := declassify(h) Who A who declassification policy controls which principals (i.e., who) can access a given piece of information. This kind of policy has been implemented in the Jif compiler. 
The following example allows Bob to share its secret contained in the variable b with Alice through the commonly accessible variable ab. var ab (* {Alice, Bob} *) var b (* {Bob} *) if ab = 1 then ab := declassify(b, {Alice, Bob}) (* {Alice, Bob} *) Where A where declassification policy regulates where the information can be released, for example, by controlling in which lines of the source code information can be released. The following example makes use of the flow construct proposed in. This construct takes a flow policy (in this case, variables in H are allowed to flow to variables in L) and a command, which is run under the given flow policy. var l, h flow H L in l := h When A when declassification policy regulates when the information can be released. Policies of this kind can be used to verify programs that implement, for example, controlled release of secret information after payment, or encrypted secrets which should not be released in a certain time given polynomial computational power. Declassification approaches for implicit flows An implicit flow occurs when code whose conditional execution is based on private information updates a public variable. This is especially problematic when multiple executions are considered since an attacker could leverage the public variable to infer private information by observing how its value changes over time or with the input. The naïve approach No sensitive upgrade "No sensitive upgrade" halts the program whenever a High variable affects the value of a Low variable. Since it simply looks for expressions where an information leakage might happen, without looking at the context, it may halt a program that, despite having potential information leakage, never actually leaks information. In the following example x is High and y is Low. var x, y y := false if x = true then y := true return true In this case the program would be halted since—syntactically speaking—it uses the value of a High variable to change a Low variable, despite the program never leaking information. Permissive upgrade Permissive-upgrade introduces an extra security class P which will identify information leaking variables. When a High variable affects the value of a Low variable, the latter is labeled P. If a P labeled variable affects a Low variable the program would be halted. To prevent the halting the Low and P variables should be converted to High using a privatization function to ensure no information leakage can occur. On subsequent instances the program will run without interruption. Privatization inference Privatization inference extends permissive upgrade to automatically apply the privatization function to any variable that might leak information. This method should be used during testing where it will convert most variables. Once the program moves into production the permissive-upgrade should be used to halt the program in case of an information leakage and the privatization functions can be updated to prevent subsequent leaks. Application in computer systems Beyond applications to programming language, information flow control theories have been applied to operating systems, distributed systems, and cloud computing. References Information theory
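As a rough sketch of the dynamic-monitoring ideas above, the following Python toy models a "no sensitive upgrade"-style check: variables carry "L"/"H" labels, a program-counter label tracks the control context, and an assignment to a Low variable under High influence halts execution. The Monitor class and its methods are invented for illustration and do not come from any particular system.

```python
# Toy dynamic information-flow monitor in the spirit of "no sensitive upgrade":
# assigning to a Low-labelled variable while the control context (or the value
# being assigned) is High halts the program. Illustrative model only.

class FlowError(Exception):
    pass

class Monitor:
    def __init__(self):
        self.labels = {}   # variable name -> "L" or "H"
        self.values = {}
        self.pc = "L"      # label of the current control context

    def declare(self, name, value, label):
        self.labels[name], self.values[name] = label, value

    def assign(self, name, value, value_label="L"):
        # No sensitive upgrade: refuse to write a Low variable under a High
        # context or with a High-labelled right-hand side.
        if self.labels[name] == "L" and "H" in (self.pc, value_label):
            raise FlowError(f"flow from High to Low via '{name}'")
        self.values[name] = value

    def branch_on(self, name):
        # Entering a branch whose condition depends on `name` raises the pc label.
        if self.labels[name] == "H":
            self.pc = "H"

m = Monitor()
m.declare("h", True, "H")
m.declare("l", False, "L")
m.branch_on("h")          # if h = true then ...
try:
    m.assign("l", True)   # ... l := true  -> halted, even if no leak actually occurs
except FlowError as e:
    print("halted:", e)
```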
Information flow (information theory)
[ "Mathematics", "Technology", "Engineering" ]
1,966
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
2,128,410
https://en.wikipedia.org/wiki/Delamination
Delamination is a mode of failure where a material fractures into layers. A variety of materials, including laminate composites and concrete, can fail by delamination. Processing can create layers in materials, such as steel formed by rolling and plastics and metals from 3D printing which can fail from layer separation. Also, surface coatings, such as paints and films, can delaminate from the coated substrate. In laminated composites, the adhesion between layers often fails first, causing the layers to separate. For example, in fiber-reinforced plastics, sheets of high strength reinforcement (e.g., carbon fiber, fiberglass) are bound together by a much weaker polymer matrix (e.g., epoxy). In particular, loads applied perpendicular to the high strength layers, and shear loads can cause the polymer matrix to fracture or the fiber reinforcement to debond from the polymer. Delamination also occurs in reinforced concrete when metal reinforcements near the surface corrode. The oxidized metal has a larger volume causing stresses when confined by the concrete. When the stresses exceed the strength of the concrete, cracks can form and spread to join with neighboring cracks caused by corroded rebar creating a fracture plane that runs parallel to the surface. Once the fracture plane has developed, the concrete at the surface can separate from the substrate. Processing can create layers in materials which can fail by delamination. In concrete, surfaces can flake off from improper finishing. If the surface is finished and densified by troweling while the underlying concrete is bleeding water and air, the dense top layer may separate from the water and air pushing upwards. In steels, rolling can create a microstructure when the microscopic grains are oriented in flat sheets which can fracture into layers. Also, certain 3D printing methods (e.g., Fused Deposition) builds parts in layers that can delaminate during printing or use. When printing thermoplastics with fused deposition, cooling a hot layer of plastic applied to a cold substrate layer can cause bending due to differential thermal contraction and layer separation. Inspection methods There are multiple nondestructive testing methods to detect delamination in structures including visual inspection, tap testing (i.e. sounding), ultrasound, radiography, and infrared imaging. Visual inspection is useful for detecting delaminations at the surface and edges of materials. However, a visual inspection may not detect delamination within a material without cutting the material open. Tap testing or sounding involves gently striking the material with a hammer or hard object to find delamination based on the resulting sound. In laminated composites, a clear ringing sound indicates a well bonded material whereas a duller sound indicates the presence of delamination due to the defect dampening the impact. Tap testing is well suited for finding large defects in flat panel composites with a honeycomb core whereas thin laminates may have small defects that are not discernible by sound. Using sound is also subjective and dependent on the inspector's quality of hearing as well as judgement. Any intentional variations in the part may also change the pitch of the produced sound, influencing the inspection. Some of these variations include ply overlaps, ply count change gores, core density change (if used), and geometry. In reinforced concretes intact regions will sound solid whereas delaminated areas will sound hollow. 
Tap testing large concrete structures is carried out either with a hammer or with a chain-dragging device for horizontal surfaces like bridge decks. Bridge decks in cold-climate countries which use de-icing salts and chemicals are commonly subject to delamination, and as such are typically scheduled for annual inspection by chain-dragging as well as subsequent patch repairs of the surface.

Delamination resistance testing methods

Coating delamination tests: ASTM provides standards for paint adhesion testing which provide qualitative measures of the resistance of paints and coatings to delamination from substrates. Tests include the cross-cut test, scrape adhesion, and pull-off test.

Interlaminar fracture toughness testing: Fracture toughness is a material property that describes resistance to fracture and delamination. It is denoted by the critical stress intensity factor $K_c$ or the critical strain energy release rate $G_c$. For unidirectional fiber-reinforced polymer laminate composites, ASTM provides standards for determining the mode I fracture toughness $G_{Ic}$ and mode II fracture toughness $G_{IIc}$ of the interlaminar matrix. During the tests, load and displacement are recorded for analysis to determine the strain energy release rate from the compliance method. $G$ in terms of compliance is given by

$G = \frac{P^2}{2b}\,\frac{\Delta C}{\Delta a}$,

where $\Delta C$ is the change in compliance (the ratio of displacement to load, $C = \delta/P$), $b$ is the width of the specimen, and $\Delta a$ is the change in crack length.

Mode I interlaminar fracture toughness: ASTM D5528 specifies the use of the double cantilever beam (DCB) specimen geometry for determining mode I interlaminar fracture toughness. A double cantilever beam specimen is created by placing a non-stick film between reinforcement layers in the center of the beam before curing the polymer matrix, to create an initial crack of length $a_0$. During the test the specimen is loaded in tension from the end of the beam on the initial-crack side, opening the crack. Using the compliance method, the critical strain energy release rate is given by

$G_{Ic} = \frac{3P\delta}{2ba}$,

where $P$ and $\delta$ are the maximum load and displacement, respectively, determined from the point at which the load–deflection curve becomes nonlinear or from a line drawn from the origin with a 5% increase in compliance. Typically, this equation overestimates the fracture toughness because the two cantilever arms of the DCB specimen undergo a finite rotation at the crack front. The finite rotation can be corrected for by calculating $G_{Ic}$ with a slightly longer crack of length $a + |\Delta|$, giving

$G_{Ic} = \frac{3P\delta}{2b(a + |\Delta|)}$.

The crack length correction $\Delta$ can be determined experimentally from a least-squares fit of the cube root of the compliance, $C^{1/3}$, versus crack length $a$; the correction is the absolute value of the x-intercept. Fracture toughness can also be corrected with the compliance calibration method, where $G_{Ic}$ is given by

$G_{Ic} = \frac{nP\delta}{2ba}$,

where $n$ is the slope of a least-squares fit of $\log C$ vs. $\log a$.

Mode II interlaminar fracture toughness: Mode II interlaminar fracture toughness can be determined by an end-notched flexure test specified by ASTM D7905. The specimen is prepared in a similar manner to the DCB specimen, introducing an initial crack of length $a_0$ before curing the polymer matrix. If the test is performed with the initial crack (the non-precracked method), the candidate fracture toughness is given by

$G_Q = \frac{3m P_{\max}^2 a_0^2}{2b}$,

where $b$ is the width of the specimen, $P_{\max}$ is the maximum load, and $m$ is a fitting parameter. $m$ is determined experimentally from a least-squares fit of compliance vs. the crack length cubed, of the form $C = A + m a^3$.
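A short Python sketch of the data reduction just described, assuming the standard compliance-method expressions given above ($G_{Ic} = 3P\delta/(2ba)$ with the crack-length correction, and the $C = A + m a^3$ fit for the mode II candidate toughness). All numerical inputs are placeholders rather than measured data.

```python
# Sketch of DCB (mode I) and end-notched flexure (mode II) data reduction.
# Formulas follow the compliance-method expressions described in the text;
# all numerical inputs are placeholders, not real test data.
import numpy as np

def g1_mbt(P, delta, b, a, crack_correction=0.0):
    """Mode I strain energy release rate, modified beam theory:
    G_I = 3*P*delta / (2*b*(a + |Delta|))."""
    return 3.0 * P * delta / (2.0 * b * (a + abs(crack_correction)))

def mode2_candidate(P_max, a0, b, a_fit, C_fit):
    """Mode II candidate toughness from the compliance fit C = A + m*a^3:
    G_Q = 3*m*P_max^2*a0^2 / (2*b)."""
    m = np.polyfit(a_fit**3, C_fit, 1)[0]        # slope of C vs a^3
    return 3.0 * m * P_max**2 * a0**2 / (2.0 * b)

# Placeholder DCB point: P = 60 N, delta = 4 mm, width b = 25 mm, crack a = 50 mm
print(f"G_I ~ {g1_mbt(60.0, 0.004, 0.025, 0.050):.0f} J/m^2")

# Placeholder ENF compliance calibration at three crack lengths
a_fit = np.array([0.020, 0.030, 0.040])          # m
C_fit = np.array([1.0e-6, 1.4e-6, 2.2e-6])       # m/N
print(f"G_Q ~ {mode2_candidate(750.0, 0.030, 0.025, a_fit, C_fit):.0f} J/m^2")
```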
The candidate fracture toughness equals the mode II fracture toughness if strain energy release rate falls within certain percentage of at different crack lengths specified by ASTM. Interlaminar shear strength testing Interlaminar shear strength is used as an additional measure of the strength of the fiber-matrix bond in fiber-reinforced composites. Shear-induced delamination is experienced in various loading conditions where the bending moment across the composite changes rapidly, such as in pipes with changes in thickness or bends. Multiple test architectures have been proposed for use in measuring interlaminar shear strength, including the short beam shear test, Iosipescu test, rail shear test, and asymmetrical four-point bending test. The goal of each of these tests is to maximize the ratio of shear stress to tensile stress exhibited in the sample, promoting failure via delamination of the fiber-matrix interface instead of through fiber tension or buckling. The orthotropic symmetry of fiber composite materials makes a state of pure shear stress difficult to obtain in sample testing; thin cylindrical specimens can be used but are costly to manufacture. Sample geometries are thus chosen for ease of machining and optimization of the stress state when loaded. In addition to manufactured composites such as glass fiber-reinforced polymers, interlaminar shear strength is an important property in natural materials such as wood. The long, thin shape of floorboards, for example, may promote deformation that leads to vibrations. Asymmetric four-point bending Asymmetric four-point bending (AFPB) may be chosen to measure interlaminar shear strength over other procedures for a variety of reasons, including specimen machinability, test reproducibility, and equipment availability. For example, short-beam shear samples are constrained to a specific length-thickness ratio to prevent bending failure, and the shear stress distribution across the specimen is non-uniform, both of which contribute to a lack of reproducibility. Rail shear testing also produces a non-homogeneous shear stress state, making it appropriate for determining shear modulus, but not shear strength. The Iosipescu test requires special equipment in addition to the roller setup already used for other three- and four-point flexural tests. ASTM C1469 outlines a standard for AFPB testing of advanced ceramic joints, and the method has been proposed to be adapted for use with continuous ceramic matrix composites. Rectangular samples can be used with or without notches machined at the center; the addition of notches helps to control the position of the failure along the length of the sample, but improper or nonsymmetrical machining can result in the addition of undesired normal stresses which reduce the measured strength. The sample is then loaded in compression in its test fixture, with loading applied directly to the sample from 4 loading pins arranged in a parallelogram-like configuration. The load applied from the test fixture is transferred unevenly to the top two pins; the ratio of the inner pin load and outer pin load is defined as the loading factor , such that , where and are the lengths from the inner pin to the applied point load and from the outer pin to the applied point load, respectively. 
The normal stress in the sample is maximized at the locations of the inner pins, and is equivalent to , where is the total applied load on the sample, is the sample length, is the sample width (into the page as seen in a 2D free-body diagram), and is the sample thickness. The shear stress in the sample is maximized in between the inner span of the pins and is given by . The ratio of normal to shear stress in the sample is given by . This ratio is dependent both on the loading factor of the sample and its length-thickness ratio; both of these quantities are important in determining the mode of failure of the sample in testing. References Composite materials Mechanical failure modes
Delamination
[ "Physics", "Materials_science", "Technology", "Engineering" ]
2,151
[ "Structural engineering", "Mechanical failure modes", "Technological failures", "Composite materials", "Materials", "Mechanical failure", "Matter" ]
2,129,346
https://en.wikipedia.org/wiki/TILLING%20%28molecular%20biology%29
TILLING (Targeting Induced Local Lesions in Genomes) is a method in molecular biology that allows directed identification of mutations in a specific gene. TILLING was introduced in 2000, using the model plant Arabidopsis thaliana, and expanded on into other uses and methodologies by a small group of scientists including Luca Comai. TILLING has since been used as a reverse genetics method in other organisms such as zebrafish, maize, wheat, rice, soybean, tomato and lettuce. Overview The method combines a standard and efficient technique of mutagenesis using a chemical mutagen such as ethyl methanesulfonate (EMS) with a sensitive DNA screening-technique that identifies single base mutations (also called point mutations) in a target gene. The TILLING method relies on the formation of DNA heteroduplexes that are formed when multiple alleles are amplified by PCR and are then heated and slowly cooled. A “bubble” forms at the mismatch of the two DNA strands, which is then cleaved by a single stranded nuclease. The products are then separated by size on several different platforms (see below). Mismatches may be due to induced mutation, heterozygosity within an individual, or natural variation between individuals. EcoTILLING is a method that uses TILLING techniques to look for natural mutations in individuals, usually for population genetics analysis. DEcoTILLING is a modification of TILLING and EcoTILLING which uses an inexpensive method to identify fragments. Since the advent of NGS sequencing technologies, TILLING-by-sequencing has been developed based on Illumina sequencing of target genes amplified from multidimensionally pooled templates to identify possible single-nucleotide changes. Single strand cleavage enzymes There are several sources for single strand nucleases. The first widely used enzyme was mung bean nuclease, but this nuclease has been shown to have high non-specific activity, and only works at low pH, which can degrade PCR products and dye-labeled primers. The original source for single strand nuclease was from CEL1, or CJE (celery juice extract), but other products have entered the market including Frontier Genomics’ SNiPerase enzymes, which have been optimized for use on platforms that use labeled and unlabeled PCR products (see next section). Transgenomic isolated the single strand nuclease protein and sells it as a recombinant form. The advantage of the recombinant form is that unlike the enzyme mixtures, it does not contain non-specific nuclease activity, which can degrade the dyes on the PCR primers. The disadvantage is a substantially higher cost. Separation of cleaved products The first paper describing TILLING used HPLC to identify mutations (McCallum et al., 2000a). The method was made more high throughput by using the restriction enzyme Cel-I combined with the LICOR gel based system to identify mutations (Colbert et al., 2001). Advantages to using this system are that mutation sites can be easily confirmed and differentiated from noise. This is because different colored dyes can be used for the forward and reverse primers. Once the cleavage products have been run on a gel, it can be viewed in separate channels, and much like an RFLP, the fragment sizes within a lane in each channel should add up to the full length product size. Advantages to the LICOR system are separation of large fragments (~ 2kb), high sample throughput (96 samples loaded on paper combs), and freeware to identify the mutations (GelBuddy). 
Drawbacks to the LICOR system is having to pour slab gels and long run times (~4 hours). TILLING and EcoTILLING methods are now being used on capillary systems from, Advanced Analytical Technologies, ABI and Beckman. Several systems can be used to separate PCR products that are not labeled with dyes. Simple agarose electrophoresis systems will separate cleavage products inexpensively and with standard lab equipment. This was used to discover SNPs in chum salmon and was referred to as DEcoTILLING. The disadvantage of this system is reduced resolution compared to polyacrylamide systems. Elchrom Scientific sells Spreadex gels which are precast, can be high throughput and are more sensitive than standard polyacrylamide gels. Advanced Analytical Technologies Inc sells the AdvanCE FS96 dsDNA Fluorescent System which is a 96 capillary electrophoresis system that has several advantages over traditional methods; including ability to separate large fragments (up to 40kb), no desalting or precipitation step required, short run times (~30 minutes), sensitivity to 5pg/ul and no need for fluorescent labeled primers. TILLING centers Several TILLING centers exist over the world that focus on agriculturally important species: Rice – UC Davis (USA) Maize – Purdue University (USA) Brassica napus – University of British Columbia (CA) Brassica rapa – John Innes Centre (UK) Arabidopsis – Fred Hutchinson Cancer Research Soybean – Southern Illinois University (USA) Lotus and Medicago – John Innes Centre (UK)] Wheat – UC Davis (USA) Pea, Tomato - INRA (France) Tomato - RTGR, University of Hyderabad (India) References Scientific literature External links The Arabidopsis Tilling project Introduction to TILLING Zebrafish TILLING project Sanger Institute Zebrafish Mutation Project Biochemistry detection methods Genetics techniques Molecular biology
TILLING (molecular biology)
[ "Chemistry", "Engineering", "Biology" ]
1,153
[ "Biochemistry methods", "Genetics techniques", "Genetic engineering", "Chemical tests", "Biochemistry detection methods", "Molecular biology", "Biochemistry" ]
2,129,591
https://en.wikipedia.org/wiki/Inverse%20dynamics
Inverse dynamics is an inverse problem. It commonly refers to either inverse rigid body dynamics or inverse structural dynamics. Inverse rigid-body dynamics is a method for computing forces and/or moments of force (torques) based on the kinematics (motion) of a body and the body's inertial properties (mass and moment of inertia). Typically it uses link-segment models to represent the mechanical behaviour of interconnected segments, such as the limbs of humans or animals or the joint extensions of robots, where, given the kinematics of the various parts, inverse dynamics derives the minimum forces and moments responsible for the individual movements. In practice, inverse dynamics computes these internal moments and forces from measurements of the motion of limbs and external forces such as ground reaction forces, under a special set of assumptions.

Applications

The fields of robotics and biomechanics constitute the major application areas for inverse dynamics. Within robotics, inverse dynamics algorithms are used to calculate the torques that a robot's motors must deliver to make the robot's end-point move in the way prescribed by its current task. The "inverse dynamics problem" for robotics was solved by Eduardo Bayo in 1987. This solution calculates how each of the numerous electric motors that control a robot arm must move to produce a particular action. Humans can perform very complicated and precise movements, such as controlling the tip of a fishing rod well enough to cast the bait accurately. Before the arm moves, the brain calculates the necessary movement of each muscle involved and tells the muscles what to do as the arm swings. In the case of a robot arm, the "muscles" are the electric motors which must turn by a given amount at a given moment. Each motor must be supplied with just the right amount of electric current, at just the right time. Researchers can predict the motion of a robot arm if they know how the motors will move; this is known as the forward dynamics problem. Until this discovery, they had not been able to work backwards to calculate the movements of the motors required to generate a particular complicated motion. Bayo's work began with the application of frequency-domain methods to the inverse dynamics of single-link flexible robots. This approach yielded non-causal exact solutions due to the right-half-plane zeros in the hub-torque-to-tip transfer functions. Extending this method to the nonlinear multi-flexible-link case was of particular importance to robotics. When combined with passive joint control in a collaborative effort with a control group, Bayo's inverse dynamics approach led to exponentially stable tip-tracking control for flexible multi-link robots.

Similarly, inverse dynamics in biomechanics computes the net turning effect of all the anatomical structures across a joint, in particular the muscles and ligaments, necessary to produce the observed motions of the joint. These moments of force may then be used to compute the amount of mechanical work performed by that moment of force. Each moment of force can perform positive work to increase the speed and/or height of the body or perform negative work to decrease the speed and/or height of the body. The equations of motion necessary for these computations are based on Newtonian mechanics, specifically the Newton–Euler equations: force equals mass times linear acceleration ($F = ma$), and moment equals mass moment of inertia times angular acceleration ($M = I\alpha$).
These equations mathematically model the behavior of a limb in terms of a knowledge domain-independent, link-segment model, such as idealized solids of revolution or a skeleton with fixed-length limbs and perfect pivot joints. From these equations, inverse dynamics derives the torque (moment) level at each joint based on the movement of the attached limb or limbs affected by the joint. This process used to derive the joint moments is known as inverse dynamics because it reverses the forward dynamics equations of motion, the set of differential equations which yield the position and angle trajectories of the idealized skeleton's limbs from the accelerations and forces applied. From joint moments, a biomechanist could infer muscle forces that would lead to those moments based on a model of bone and muscle attachments, etc., thereby estimating muscle activation from kinematic motion. Correctly computing force (or moment) values from inverse dynamics can be challenging because external forces (e.g., ground contact forces) affect motion but are not directly observable from the kinematic motion. In addition, co-activation of muscles can lead to a family of solutions which are not distinguishable from the kinematic motion's characteristics. Furthermore, closed kinematic chains, such as swinging a bat or shooting a hockey puck, require the measurement of internal forces (in the bat or stick) be made before shoulder, elbow or wrist moments and forces can be derived. See also Kinematics Inverse kinematics: a problem similar to Inverse dynamics but with different goals and starting assumptions. While inverse dynamics asks for torques that produce a certain time-trajectory of positions and velocities, inverse kinematics only asks for a static set of joint angles such that a certain point (or a set of points) of the character (or robot) is positioned at a certain designated location. It is used in synthesizing the appearance of human motion, particularly in the field of video game design. Another use is in robotics, where joint angles of an arm must be calculated from the desired position of the end effector. Body segment parameters References External links Inverse dynamics Chris Kirtley's research roundup and tutorials on biomechanical aspects of human gait. Robot control Motor control Inverse problems 1987 in robotics
Inverse dynamics
[ "Mathematics", "Engineering", "Biology" ]
1,149
[ "Behavior", "Robotics engineering", "Applied mathematics", "Motor control", "Robot control", "Inverse problems" ]
21,666,983
https://en.wikipedia.org/wiki/Synchronous%20frame
A synchronous frame is a reference frame in which the time coordinate defines proper time for all co-moving observers. It is built by choosing some constant time hypersurface as an origin, such that it has in every point a normal along the time line and a light cone with an apex in that point can be constructed; all interval elements on this hypersurface are space-like. A family of geodesics normal to this hypersurface is drawn and used to define the time coordinate, measured from the hypersurface. In terms of metric-tensor components, a synchronous frame is defined by the conditions g00 = 1 and g0α = 0, where α = 1, 2, 3 labels the space coordinates. Such a construct, and hence, choice of synchronous frame, is always possible though it is not unique. It allows any transformation of space coordinates that does not depend on time and, additionally, a transformation brought about by the arbitrary choice of hypersurface used for this geometric construct. Synchronization in an arbitrary frame of reference Synchronization of clocks located at different space points means that events happening at different places can be measured as simultaneous if those clocks show the same times. In special relativity, the space distance element dl is defined as the interval between two very close events that occur at the same moment of time. In general relativity this cannot be done, that is, one cannot define dl by just substituting dt ≡ dx0 = 0 in the metric. The reason for this is the different dependence between proper time and time coordinate x0 ≡ t in different points of space, i.e., the same dx0 corresponds to different proper-time intervals at different points. To find dl in this case, time can be synchronized over two infinitesimally neighboring points in the following way (Fig. 1): Bob sends a light signal from some space point B with coordinates xα + dxα to Alice who is at a very close point A with coordinates xα, and then Alice immediately reflects the signal back to Bob. The time necessary for this operation (measured by Bob), multiplied by c, is obviously twice the distance between Alice and Bob. The line element, with separated space and time coordinates, is ds² = gαβ dxα dxβ + 2g0α dx0 dxα + g00 (dx0)², where a repeated Greek index within a term means summation by values 1, 2, 3. The interval between the events of signal arrival and its immediate reflection back at point A is zero (two events, arrival and reflection, are happening at the same point in space and time). For light signals, the space-time interval is zero and thus, setting ds = 0 in the above equation, we can solve for dx0, obtaining two roots which correspond to the propagation of the signal in both directions between Alice and Bob. If x0 is the moment of arrival/reflection of the signal to/from Alice in Bob's clock, then the moments of signal departure from Bob and its arrival back to Bob correspond, respectively, to x0 + dx0 (1) and x0 + dx0 (2). The thick lines on Fig. 1 are the world lines of Alice and Bob with coordinates xα and xα + dxα, respectively, while the red lines are the world lines of the signals. Fig. 1 supposes that dx0 (2) is positive and dx0 (1) is negative, which, however, is not necessarily the case: dx0 (1) and dx0 (2) may have the same sign. The fact that in the latter case the value x0 (Alice) in the moment of signal arrival at Alice's position may be less than the value x0 (Bob) in the moment of signal departure from Bob does not contain a contradiction because clocks in different points of space are not supposed to be synchronized. 
It is clear that the full "time" interval between departure and arrival of the signal in Bob's place is The respective proper time interval is obtained from the above relationship by multiplication by , and the distance dl between the two points – by additional multiplication by c/2. As a result: This is the required relationship that defines distance through the space coordinate elements. It is obvious that such synchronization should be done by exchange of light signals between points. Consider again propagation of signals between infinitesimally close points A and B in Fig. 1. The clock reading in B which is simultaneous with the moment of reflection in A lies in the middle between the moments of sending and receiving the signal in B; in this moment if Alice's clock reads y0 and Bob's clock reads x0 then via Einstein Synchronization condition, Substitute here to find the difference in "time" x0 between two simultaneous events occurring in infinitesimally close points as This relationship allows clock synchronization in any infinitesimally small space volume. By continuing such synchronization further from point A, one can synchronize clocks, that is, determine simultaneity of events along any open line. The synchronization condition can be written in another form by multiplying by g00 and bringing terms to the left hand side or, the "covariant differential" dx0 between two infinitesimally close points should be zero. However, it is impossible, in general, to synchronize clocks along a closed contour: starting out along the contour and returning to the starting point one would obtain a Δx0 value different from zero. Thus, unambiguous synchronization of clocks over the whole space is impossible. An exception are reference frames in which all components g0α are zeros. The inability to synchronize all clocks is a property of the reference frame and not of the spacetime itself. It is always possible in infinitely many ways in any gravitational field to choose the reference frame so that the three g0α become zeros and thus enable a complete synchronization of clocks. To this class are assigned cases where g0α can be made zeros by a simple change in the time coordinate which does not involve a choice of a system of objects that define the space coordinates. In the special relativity theory, too, proper time elapses differently for clocks moving relatively to each other. In general relativity, proper time is different even in the same reference frame at different points of space. This means that the interval of proper time between two events occurring at some space point and the time interval between the events simultaneous with those at another space point are, in general, different. Example: Uniformly rotating frame Consider a rest (inertial) frame expressed in cylindrical coordinates and time . The interval in this frame is given by Transforming to a uniformly rotating coordinate system using the relation modifies the interval to Of course, the rotating frame is valid only for since the frame speed would exceed speed of light beyond this radial location. The non-zero components of the metric tensor are and Along any open curve, the relation can be used to synchronize clocks. However, along any closed curve, synchronization is impossible because For instance, when , we have where is the projected area of the closed curve on a plane perpendicular to the rotation axis (plus or minus sign corresponds to contour traversing in, or opposite to the rotation direction). 
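To make the closed-contour result concrete, here is a small numerical sketch (not part of the original article). It assumes the rotating-frame metric in the standard form ds² = (1 − Ω²r²/c²)c²dt² − 2Ωr² dφ dt − dr² − r²dφ² − dz², so that with x0 = ct one has g00 = 1 − Ω²r²/c² and g0φ = −Ωr²/c. It integrates the synchronization defect Δx0 = −∮(g0α/g00)dxα around a circle of radius r and compares the result with Δt ≈ 2ΩS/c², the standard small-velocity form of the projected-area result referred to above; the Earth-like rotation rate and radius are illustrative sample values.

```python
import numpy as np

c = 299792458.0      # speed of light, m/s
Omega = 7.292e-5     # rotation rate of the frame (Earth-like), rad/s
r = 6.378e6          # radius of the closed contour, m

def g00(r):
    # assumed metric component, with x0 = c*t
    return 1.0 - (Omega * r / c) ** 2

def g0phi(r):
    # assumed mixed metric component g_{0 phi}
    return -Omega * r ** 2 / c

# Synchronization defect around the circle: delta_x0 = -Integral (g0phi/g00) dphi
phi = np.linspace(0.0, 2.0 * np.pi, 100001)
integrand = -g0phi(r) / g00(r) * np.ones_like(phi)
delta_x0 = np.trapz(integrand, phi)
delta_t = delta_x0 / c

# Small-velocity approximation: 2*Omega*S/c^2, with S the enclosed (projected) area
S = np.pi * r ** 2
delta_t_approx = 2.0 * Omega * S / c ** 2

print("time discrepancy around the contour:", delta_t, "s")
print("2*Omega*S/c^2 approximation:        ", delta_t_approx, "s")
```

For these Earth-like values the discrepancy is of order 2×10⁻⁷ s, the familiar Sagnac-type correction; along any open curve the same relation synchronizes clocks without ambiguity.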
The proper time element in the rotating frame is given by indicating that time slows down as we move away from the axis. Similarly the spatial element can be calculated to find At a fixed value of and , the spatial element is which upon integration over a full circle shows that the ratio of circumference of a circle to its radius is given by which is greater than by . Space metric tensor can be rewritten in the form where is the three-dimensional metric tensor that determines the metric, that is, the geometrical properties of space. Equations give the relationships between the metric of the three-dimensional space and the metric of the four-dimensional spacetime . In general, however, depends on x0 so that changes with time. Therefore, it doesn't make sense to integrate dl: this integral depends on the choice of world line between the two points on which it is taken. It follows that in general relativity the distance between two bodies cannot be determined in general; this distance is determined only for infinitesimally close points. Distance can be determined for finite space regions only in such reference frames in which gik does not depend on time and therefore the integral along the space curve acquires some definite sense. The tensor is inverse to the contravariant 3-dimensional tensor . Indeed, writing equation in components, one has: Determining from the second equation and substituting it in the first proves that This result can be presented otherwise by saying that are components of a contravariant 3-dimensional tensor corresponding to metric : The determinants g and composed of elements and , respectively, are related to each other by the simple relationship: In many applications, it is convenient to define a 3-dimensional vector g with covariant components Considering g as a vector in space with metric , its contravariant components can be written as . Using and the second of , it is easy to see that From the third of , it follows Synchronous coordinates As concluded from , the condition that allows clock synchronization in different space points is that metric tensor components g0α are zeros. If, in addition, g00 = 1, then the time coordinate x0 = t is the proper time in each space point (with c = 1). A reference frame that satisfies the conditions is called synchronous frame. The interval element in this system is given by the expression with the spatial metric tensor components identical (with opposite sign) to the components gαβ: In synchronous frame time, time lines are normal to the hypersurfaces t = const. Indeed, the unit four-vector normal to such a hypersurface ni = ∂t/∂xi has covariant components nα = 0, n0 = 1. The respective contravariant components with the conditions are again nα = 0, n0 = 1. The components of the unit normal coincide with those of the four-vector u = dxi/ds which is tangent to the world line x1, x2, x3 = const. The u with components uα = 0, u0 = 1 automatically satisfies the geodesic equations: since, from the conditions , the Christoffel symbols and vanish identically. Therefore, in the synchronous frame the time lines are geodesics in the spacetime. These properties can be used to construct synchronous frame in any spacetime (Fig. 2). To this end, choose some spacelike hypersurface as an origin, such that has in every point a normal along the time line (lies inside the light cone with an apex in that point); all interval elements on this hypersurface are space-like. Then draw a family of geodesics normal to this hypersurface. 
Choose these lines as time coordinate lines and define the time coordinate t as the length s of the geodesic measured with a beginning at the hypersurface; the result is a synchronous frame. An analytic transformation to synchronous frame can be done with the use of the Hamilton–Jacobi equation. The principle of this method is based on the fact that particle trajectories in gravitational fields are geodesics. The Hamilton–Jacobi equation for a particle (whose mass is set equal to unity) in a gravitational field is where S is the action. Its complete integral has the form: Note that the complete integral contains as many arbitrary constants as the number of independent variables which in our case is . In the above equation, these correspond to the three parameters ξα and the fourth constant A being treated as an arbitrary function of the three ξα. With such a representation for S the equations for the trajectory of the particle can be obtained by equating the derivatives ∂S/∂ξα to zero, i.e. For each set of assigned values of the parameters ξα, the right sides of equations have definite constant values, and the world line determined by these equations is one of the possible trajectories of the particle. Choosing the quantities ξα, which are constant along the trajectory, as new space coordinates, and the quantity S as the new time coordinate, one obtains a synchronous frame; the transformation from the old coordinates to the new ones is given by equations . In fact, it is guaranteed that for such a transformation the time lines will be geodesics and will be normal to the hypersurfaces S = const. The latter point is obvious from the mechanical analogy: the four-vector ∂S/∂xi which is normal to the hypersurface coincides in mechanics with the four-momentum of the particle, and therefore coincides in direction with its four-velocity u i.e. with the four-vector tangent to the trajectory. Finally the condition g00 = 1 is obviously satisfied, since the derivative −dS/ds of the action along the trajectory is the mass of the particle, which was set equal to 1; therefore |dS/ds| = 1. The gauge conditions do not fix the coordinate system completely and therefore are not a fixed gauge, as the spacelike hypersurface at can be chosen arbitrarily. One still have the freedom of performing some coordinate transformations containing four arbitrary functions depending on the three spatial variables xα, which are easily worked out in infinitesimal form: Here, the collections of the four old coordinates (t, xα) and four new coordinates are denoted by the symbols x and , respectively. The functions together with their first derivatives are infinitesimally small quantities. After such a transformation, the four-dimensional interval takes the form: where In the last formula, the are the same functions gik(x) in which x should simply be replaced by . If one wishes to preserve the gauge also for the new metric tensor in the new coordinates , it is necessary to impose the following restrictions on the functions : The solutions of these equations are: where f0 and fα are four arbitrary functions depending only on the spatial coordinates . For a more elementary geometrical explanation, consider Fig. 2. First, the synchronous time line ξ0 = t can be chosen arbitrarily (Bob's, Carol's, Dana's or any of an infinitely many observers). This makes one arbitrarily chosen function: . Second, the initial hypersurface can be chosen in infinitely many ways. 
Each of these choices changes three functions: one function for each of the three spatial coordinates . Altogether, four (= 1 + 3) functions are arbitrary. When discussing general solutions gαβ of the field equations in synchronous gauges, it is necessary to keep in mind that the gravitational potentials gαβ contain, among all possible arbitrary functional parameters present in them, four arbitrary functions of 3-space just representing the gauge freedom and therefore of no direct physical significance. Another problem with the synchronous frame is that caustics can occur which cause the gauge choice to break down. These problems have caused some difficulties doing cosmological perturbation theory in synchronous frame, but the problems are now well understood. Synchronous coordinates are generally considered the most efficient reference system for doing calculations, and are used in many modern cosmology codes, such as CMBFAST. They are also useful for solving theoretical problems in which a spacelike hypersurface needs to be fixed, as with spacelike singularities. Einstein equations in synchronous frame Introduction of a synchronous frame allows one to separate the operations of space and time differentiation in the Einstein field equations. To make them more concise, the notation is introduced for the time derivatives of the three-dimensional metric tensor; these quantities also form a three-dimensional tensor. In the synchronous frame is proportional to the second fundamental form (shape tensor). All operations of shifting indices and covariant differentiation of the tensor are done in three-dimensional space with the metric γαβ. This does not apply to operations of shifting indices in the space components of the four-tensors Rik, Tik. Thus Tαβ must be understood to be gβγTγα + gβ0T0α, which reduces to gβγTγα and differs in sign from γβγTγα. The sum is the logarithmic derivative of the determinant γ ≡ |γαβ| = − g: Then for the complete set of Christoffel symbols one obtains: where are the three-dimensional Christoffel symbols constructed from γαβ: where the comma denotes partial derivative by the respective coordinate. With the Christoffel symbols , the components Rik = gilRlk of the Ricci tensor can be written in the form: Dots on top denote time differentiation, semicolons (";") denote covariant differentiation which in this case is performed with respect to the three-dimensional metric γαβ with three-dimensional Christoffel symbols , , and Pαβ is a three-dimensional Ricci tensor constructed from : It follows from that the Einstein equations (with the components of the energy–momentum tensor T00 = −T00, Tα0 = −T0α, Tαβ = γβγTγα) become in a synchronous frame: A characteristic feature of the synchronous frame is that it is not stationary: the gravitational field cannot be constant in such frame. In a constant field would become zero. But in the presence of matter the disappearance of all would contradict (which has a right side different from zero). In empty space from follows that all Pαβ, and with them all the components of the three-dimensional curvature tensor Pαβγδ (Riemann tensor) vanish, i.e. the field vanishes entirely (in a synchronous frame with a Euclidean spatial metric the space-time is flat). At the same time the matter filling the space cannot in general be at rest relative to the synchronous frame. 
This is obvious from the fact that particles of matter within which there are pressures generally move along lines that are not geodesics; the world line of a particle at rest is a time line, and thus is a geodesic in the synchronous frame. An exception is the case of dust (p = 0). Here the particles interacting with one another will move along geodesic lines; consequently, in this case the condition for a synchronous frame does not contradict the condition that it be comoving with the matter. Even in this case, in order to be able to choose a synchronously comoving frame, it is still necessary that the matter move without rotation. In the comoving frame the contravariant components of the velocity are u0 = 1, uα = 0. If the frame is also synchronous, the covariant components must satisfy u0 = 1, uα = 0, so that its four-dimensional curl must vanish: But this tensor equation must then also be valid in any other reference frame. Thus, in a synchronous but not comoving frame the condition curl v = 0 for the three-dimensional velocity v is additionally needed. For other equations of state a similar situation can occur only in special cases when the pressure gradient vanishes in all or in certain directions. Singularity in synchronous frame Use of the synchronous frame in cosmological problems requires thorough examination of its asymptotic behaviour. In particular, it must be known if the synchronous frame can be extended to infinite time and infinite space maintaining always the unambiguous labelling of every point in terms of coordinates in this frame. It was shown that unambiguous synchronization of clocks over the whole space is impossible because of the impossibility to synchronize clocks along a closed contour. As concerns synchronization over infinite time, let's first remind that the time lines of all observers are normal to the chosen hypersurface and in this sense are "parallel". Traditionally, the concept of parallelism is defined in Euclidean geometry to mean straight lines that are everywhere equidistant from each other but in arbitrary geometries this concept can be extended to mean lines that are geodesics. It was shown that time lines are geodesics in synchronous frame. Another, more convenient for the present purpose definition of parallel lines are those that have all or none of their points in common. Excluding the case of all points in common (obviously, the same line) one arrives to the definition of parallelism where no two time lines have a common point. Since the time lines in a synchronous frame are geodesics, these lines are straight (the path of light) for all observers in the generating hypersurface. The spatial metric is . The determinant of the metric tensor is the absolute value of the triple product of the row vectors in the matrix which is also the volume of the parallelepiped spanned by the vectors , , and (i.e., the parallelepiped whose adjacent sides are the vectors , , and ). If turns into zero then the volume of this parallelepiped is zero. This can happen when one of the vectors lies in the plane of the other two vectors so that the parallelepiped volume transforms to the area of the base (height becomes zero), or more formally, when two of the vectors are linearly dependent. But then multiple points (the points of intersection) can be labelled in the same way, that is, the metric has a singularity. 
The Landau group have found that the synchronous frame necessarily forms a time singularity, that is, the time lines intersect (and, respectively, the metric tensor determinant turns to zero) in a finite time. This is proven in the following way. The right-hand of the , containing the stress–energy tensors of matter and electromagnetic field, is a positive number because of the strong energy condition. This can be easily seen when written in components. for matter for electromagnetic field With the above in mind, the is then re-written as an inequality with the equality pertaining to empty space. Using the algebraic inequality becomes . Dividing both sides to and using the equality one arrives to the inequality Let, for example, at some moment of time. Because the derivative is positive, then the ratio decreases with decreasing time, always having a finite non-zero derivative and, therefore, it should become zero, coming from the positive side, during a finite time. In other words, becomes , and because , this means that the determinant becomes zero (according to not faster than ). If, on the other hand, initially, the same is true for increasing time. An idea about the space at the singularity can be obtained by considering the diagonalized metric tensor. Diagonalization makes the elements of the matrix everywhere zero except the main diagonal whose elements are the three eigenvalues and ; these are three real values when the discriminant of the characteristic polynomial is greater or equal to zero or one real and two complex conjugate values when the discriminant is less than zero. Then the determinant is just the product of the three eigenvalues. If only one of these eigenvalues becomes zero, then the whole determinant is zero. Let, for example, the real eigenvalue becomes zero (). Then the diagonalized matrix becomes a 2 × 2 matrix with the (generally complex conjugate) eigenvalues on the main diagonal. But this matrix is the diagonalized metric tensor of the space where ; therefore, the above suggests that at the singularity () the space is 2-dimensional when only one eigenvalue turns to zero. Geometrically, diagonalization is a rotation of the basis for the vectors comprising the matrix in such a way that the direction of basis vectors coincide with the direction of the eigenvectors. If is a real symmetric matrix, the eigenvectors form an orthonormal basis defining a rectangular parallelepiped whose length, width, and height are the magnitudes of the three eigenvalues. This example is especially demonstrative in that the determinant which is also the volume of the parallelepiped is equal to length × width × height, i.e., the product of the eigenvalues. Making the volume of the parallelepiped equal to zero, for example by equating the height to zero, leaves only one face of the parallelepiped, a 2-dimensional space, whose area is length × width. Continuing with the obliteration and equating the width to zero, one is left with a line of size length, a 1-dimensional space. Further equating the length to zero leaves only a point, a 0-dimensional space, which marks the place where the parallelepiped has been. An analogy from geometrical optics is comparison of the singularity with caustics, such as the bright pattern in Fig. 3, which shows caustics formed by a glass of water illuminated from the right side. The light rays are an analogue of the time lines of the free-falling observers localized on the synchronized hypersurface. 
Judging by the approximately parallel sides of the shadow contour cast by the glass, one can surmise that the light source is at a practically infinite distance from the glass (such as the sun) but this is not certain as the light source is not shown on the photo. So one can suppose that the light rays (time lines) are parallel without this being proven with certainty. The glass of water is an analogue of the Einstein equations or the agent(s) behind them that bend the time lines to form the caustics pattern (the singularity). The latter is not as simple as the face of a parallelepiped but is a complicated mix of various kinds of intersections. One can distinguish an overlap of two-, one-, or zero-dimensional spaces, i.e., intermingling of surfaces and lines, some converging to a point (cusp) such as the arrowhead formation in the centre of the caustics pattern. The conclusion that timelike geodesic vector fields must inevitably reach a singularity after a finite time has been reached independently by Raychaudhuri by another method that led to the Raychaudhuri equation, which is also called Landau–Raychaudhuri equation to honour both researchers. See also Normal coordinates Congruence (general relativity), for a derivation of the kinematical decomposition and of Raychaudhuri's equation. References Bibliography Physical Review Letters, 6, 311 (1961) General relativity Coordinate systems Frames of reference Coordinate charts in general relativity Physical cosmology
Synchronous frame
[ "Physics", "Astronomy", "Mathematics" ]
5,575
[ "Astronomical sub-disciplines", "Coordinate systems", "Frames of reference", "Theoretical physics", "Classical mechanics", "Astrophysics", "General relativity", "Theory of relativity", "Coordinate charts in general relativity", "Physical cosmology" ]
21,668,293
https://en.wikipedia.org/wiki/Gamma-Ray%20Burst%20Optical/Near-Infrared%20Detector
The Gamma-Ray Burst Optical/Near-Infrared Detector (GROND) is an imaging instrument used to investigate Gamma-Ray Burst afterglows and for doing follow-up observations on exoplanets using transit photometry. It is operated at the 2.2-metre MPG/ESO telescope at ESO's La Silla Observatory in the southern part of the Atacama desert, about 600 kilometres north of Santiago de Chile and at an altitude of 2,400 metres. Discoveries On 13 September 2008, Swift detected gamma-ray burst 080913. GROND and VLT subsequently placed the GRB at 12.8 Gly distant, making it the most-distant GRB observed, as well as the second-most-distant object to be spectroscopically confirmed. On 15 September 2008, NASA's Fermi Gamma-ray Space Telescope detected gamma-ray burst 080916C. On 19 February 2009, NASA announced that the GROND team's work shows that the GRB was the most energetic yet observed, and 12.2 Gly distant. See also Red shift observations in astronomy Photometry (astronomy) Max Planck Institute for Extraterrestrial Physics References External links GROND page at the Max Planck Institute for Extraterrestrial Physics Optical telescopes Gamma-ray bursts
Gamma-Ray Burst Optical/Near-Infrared Detector
[ "Physics", "Astronomy" ]
270
[ "Physical phenomena", "Stellar phenomena", "Astronomical events", "Gamma-ray bursts" ]
26,038,396
https://en.wikipedia.org/wiki/Collision%20response
In the context of classical mechanics simulations and physics engines employed within video games, collision response deals with models and algorithms for simulating the changes in the motion of two solid bodies following collision and other forms of contact. Rigid body contact Two rigid bodies in unconstrained motion, potentially under the action of forces, may be modelled by solving their equations of motion using numerical integration techniques. On collision, the kinetic properties of two such bodies seem to undergo an instantaneous change, typically resulting in the bodies rebounding away from each other, sliding, or settling into relative static contact, depending on the elasticity of the materials and the configuration of the collision. Contact forces The origin of the rebound phenomenon, or reaction, may be traced to the behaviour of real bodies that, unlike their perfectly rigid idealised counterparts, do undergo minor compression on collision, followed by expansion, prior to separation. The compression phase converts the kinetic energy of the bodies into potential energy and, to an extent, heat. The expansion phase converts the potential energy back to kinetic energy. During the compression and expansion phases of two colliding bodies, each body generates reactive forces on the other at the points of contact, such that the sum reaction forces of one body are equal in magnitude but opposite in direction to the forces of the other, as per the Newtonian principle of action and reaction. If the effects of friction are ignored, a collision is seen as affecting only the component of the velocities directed along the contact normal and as leaving the tangential components unaffected. Reaction The degree of relative kinetic energy retained after a collision, termed the restitution, is dependent on the elasticity of the bodies' materials. The coefficient of restitution between two given materials is modeled as the ratio of the relative post-collision speed of a point of contact along the contact normal, with respect to the relative pre-collision speed of the same point along the same normal. These coefficients are typically determined empirically for different material pairs, such as wood against concrete or rubber against wood. Values of the coefficient close to zero indicate inelastic collisions, such as a piece of soft clay hitting the floor, whereas values close to one represent highly elastic collisions, such as a rubber ball bouncing off a wall. The kinetic energy loss is relative to one body with respect to the other. Thus the total momentum of both bodies with respect to some common reference is unchanged after the collision, in line with the principle of conservation of momentum. Friction Another important contact phenomenon is surface-to-surface friction, a force that impedes the relative motion of two surfaces in contact, or that of a body in a fluid. In this section we discuss surface-to-surface friction of two bodies in relative static contact or sliding contact. In the real world, friction is due to the imperfect microstructure of surfaces whose protrusions interlock into each other, generating reactive forces tangential to the surfaces. To overcome the friction between two bodies in static contact, the surfaces must somehow lift away from each other. Once in motion, the degree of surface affinity is reduced and hence bodies in sliding motion tend to offer less resistance to motion. These two categories of friction are respectively termed static friction and dynamic friction. 
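The restitution and momentum-conservation statements above can be illustrated with a minimal one-dimensional sketch (not from the original article; the masses, speeds and coefficients are arbitrary sample values): two bodies collide head-on, the total momentum is preserved exactly, and the fraction of kinetic energy retained depends on the coefficient of restitution.

```python
def collide_1d(m1, v1, m2, v2, e):
    """Post-collision speeds of two bodies colliding along a line.

    Derived from conservation of momentum plus the restitution condition
    v1' - v2' = -e * (v1 - v2).
    """
    p = m1 * v1 + m2 * v2                      # total momentum (conserved)
    v1_post = (p - m2 * e * (v1 - v2)) / (m1 + m2)
    v2_post = (p + m1 * e * (v1 - v2)) / (m1 + m2)
    return v1_post, v2_post

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0           # kg, m/s
for e in (1.0, 0.5, 0.0):                      # elastic, partially elastic, perfectly inelastic
    v1p, v2p = collide_1d(m1, v1, m2, v2, e)
    ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
    ke_after = 0.5 * m1 * v1p**2 + 0.5 * m2 * v2p**2
    print(f"e={e}: v1'={v1p:+.3f}  v2'={v2p:+.3f}  "
          f"momentum after={m1*v1p + m2*v2p:+.3f}  KE retained={ke_after/ke_before:.2f}")
```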
Applied force It is a force which is applied to an object by another object or by a person. The direction of the applied force depends on how the force is applied. Normal force It is the support force exerted upon an object which is in contact with another stable object. Normal force is sometimes referred to as the pressing force since its action presses the surface together. Normal force is always directed towards the object and acts perpendicularly with the applied force. Frictional force It is the force exerted by a surface as an object moves across it or makes an effort to move across it. The friction force opposes the motion of the object. Friction results when two surfaces are pressed together closely, causing attractive intermolecular forces between the molecules of the two different surface. As such, friction depends upon the nature of the two surfaces and upon the degree to which they are pressed together. Friction always acts parallel to the surface in contact and opposite the direction of motion. The friction force can be calculated using the equation. Impulse-based contact model A force , dependent on time , acting on a body of assumed constant mass for a time interval generates a change in the body’s momentum , where is the resulting change in velocity. The change in momentum, termed an impulse and denoted by is thus computed as For fixed impulse , the equation suggests that , that is, a smaller time interval must be compensated by a stronger reaction force to achieve the same impulse. When modelling a collision between idealized rigid bodies, it is impractical to simulate the compression and expansion phases of the body geometry over the collision time interval. However, by assuming that a force can be found which is equal to everywhere except at , and such that the limit exists and is equal to , the notion of instantaneous impulses may be introduced to simulate an instantaneous change in velocity after a collision. Impulse-based reaction model The effect of the reaction force over the interval of collision may hence be represented by an instantaneous reaction impulse , computed as By deduction from the principle of action and reaction, if the collision impulse applied by the first body on the second body at a contact point is , the counter impulse applied by the second body on the first is . The decomposition into the impulse magnitude and direction along the contact normal and its negation allows for the derivation of a formula to compute the change in linear and angular velocities of the bodies resulting from the collision impulses. In the subsequent formulas, is always assumed to point away from body 1 and towards body 2 at the contact point. Assuming the collision impulse magnitude is given and using Newton's laws of motion the relation between the bodies' pre- and post- linear velocities are as follows where, for the th body, is the pre-collision linear velocity, is the post-collision linear velocity. Similarly for the angular velocities where, for the th body, is the angular pre-collision velocity, is the angular post-collision velocity, is the inertia tensor in the world frame of reference, and is offset of the shared contact point from the centre of mass. The velocities of the bodies at the point of contact may be computed in terms of the respective linear and angular velocities, using for . 
The coefficient of restitution relates the pre-collision relative velocity of the contact point to the post-collision relative velocity along the contact normal as follows Substituting equations (1a), (1b), (2a), (2b) and (3) into equation (4) and solving for the reaction impulse magnitude yields Computing impulse-based reaction Thus, the procedure for computing the post-collision linear velocities and angular velocities is as follows: Compute the reaction impulse magnitude in terms of , , , , , , , and using equation (5) Compute the reaction impulse vector in terms of its magnitude and contact normal using . Compute new linear velocities in terms of old velocities , masses and reaction impulse vector using equations (1a) and (1b) Compute new angular velocities in terms of old angular velocities , inertia tensors and reaction impulse using equations (2a) and (2b) Impulse-based friction model One of the most popular models for describing friction is the Coulomb friction model. This model defines coefficients of static friction and dynamic friction such that . These coefficients describe the two types of friction forces in terms of the reaction forces acting on the bodies. More specifically, the static and dynamic friction force magnitudes are computed in terms of the reaction force magnitude as follows The value defines a maximum magnitude for the friction force required to counter the tangential component of any external sum force applied on a relatively static body, such that it remains static. Thus, if the external force is large enough, static friction is unable to fully counter this force, at which point the body gains velocity and becomes subject to dynamic friction of magnitude acting against the sliding velocity. The Coulomb friction model effectively defines a friction cone within which the tangential component of a force exerted by one body on the surface of another in static contact, is countered by an equal and opposite force such that the static configuration is maintained. Conversely, if the force falls outside the cone, static friction gives way to dynamic friction. Given the contact normal and relative velocity of the contact point, a tangent vector , orthogonal to , may be defined such that where is the sum of all external forces on the body. The multi-case definition of is required for robustly computing the actual friction force for both the general and particular states of contact. Informally, the first case computes the tangent vector along the relative velocity component perpendicular to the contact normal . If this component is zero, the second case derives in terms of the tangent component of the external force . If there is no tangential velocity or external forces, then no friction is assumed, and may be set to the zero vector. Thus, is computed as Equations (6a), (6b), (7) and (8) describe the Coulomb friction model in terms of forces. By adapting the argument for instantaneous impulses, an impulse-based version of the Coulomb friction model may be derived, relating a frictional impulse , acting along the tangent , to the reaction impulse . Integrating (6a) and (6b) over the collision time interval yields where is the magnitude of the reaction impulse acting along contact normal . Similarly, by assuming constant throughout the time interval, the integration of (8) yields Equations (5) and (10) define an impulse-based contact model that is ideal for impulse-based simulations. 
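Below is a compact sketch of the impulse computation procedure just listed, written as generic rigid-body code rather than taken from any particular engine; the bodies, contact data and restitution value are invented sample inputs, the contact normal is taken to point from body 1 towards body 2 as stated above, and the relative contact-point velocity is taken as that of body 2 minus that of body 1.

```python
import numpy as np

def collision_impulse(m1, I1, m2, I2, r1, r2, n, v1, w1, v2, w2, e):
    """Reaction impulse magnitude and post-collision velocities for two rigid bodies.

    m*, I*  : masses and 3x3 world-frame inertia tensors
    r*      : offsets from each centre of mass to the shared contact point
    n       : unit contact normal, pointing from body 1 towards body 2
    v*, w*  : pre-collision linear and angular velocities
    e       : coefficient of restitution
    """
    I1_inv, I2_inv = np.linalg.inv(I1), np.linalg.inv(I2)

    # Relative velocity of the contact point (body 2 relative to body 1)
    u_rel = (v2 + np.cross(w2, r2)) - (v1 + np.cross(w1, r1))

    # Denominator: inverse-mass and rotational terms projected onto the normal
    ang1 = np.cross(I1_inv @ np.cross(r1, n), r1)
    ang2 = np.cross(I2_inv @ np.cross(r2, n), r2)
    k = 1.0 / m1 + 1.0 / m2 + np.dot(ang1 + ang2, n)

    # Impulse magnitude; u_rel . n is negative for approaching bodies, so j >= 0
    j = -(1.0 + e) * np.dot(u_rel, n) / k

    # Apply -j*n to body 1 and +j*n to body 2 (linear and angular updates)
    v1_post = v1 - (j / m1) * n
    v2_post = v2 + (j / m2) * n
    w1_post = w1 - j * (I1_inv @ np.cross(r1, n))
    w2_post = w2 + j * (I2_inv @ np.cross(r2, n))
    return j, v1_post, w1_post, v2_post, w2_post

# Sample data: a light sphere striking a heavier box, contact normal along +x
m1, m2 = 1.0, 5.0
I1, I2 = 0.4 * np.eye(3), 2.0 * np.eye(3)
r1, r2 = np.array([0.5, 0.0, 0.0]), np.array([-0.3, 0.2, 0.0])
n = np.array([1.0, 0.0, 0.0])
v1, w1 = np.array([2.0, 0.0, 0.0]), np.zeros(3)
v2, w2 = np.zeros(3), np.zeros(3)

j, v1p, w1p, v2p, w2p = collision_impulse(m1, I1, m2, I2, r1, r2, n, v1, w1, v2, w2, 0.6)
print("impulse magnitude:", j)
print("body 1 velocity after:", v1p, "body 2 velocity after:", v2p)
```

A tangential friction impulse, limited by the Coulomb coefficients as in the impulse-based friction model above, can be applied analogously along the tangent direction.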
When using this model, care must be taken in the choice of and as higher values may introduce additional kinetic energy into the system. See also Collision detection Notes References C. Vella, "Gravitas: An extensible physics engine framework using object-oriented and design pattern-driven software architecture principles," Master in Information Technology Thesis, University of Malta, Msida, 2008. Mechanics
Collision response
[ "Physics", "Engineering" ]
2,084
[ "Mechanics", "Mechanical engineering" ]
23,163,934
https://en.wikipedia.org/wiki/Laminar%E2%80%93turbulent%20transition
In fluid dynamics, the process of a laminar flow becoming turbulent is known as laminar–turbulent transition. The main parameter characterizing transition is the Reynolds number. Transition is often described as a process proceeding through a series of stages. Transitional flow can refer to transition in either direction, that is laminar–turbulent transitional or turbulent–laminar transitional flow. The process applies to any fluid flow, and is most often used in the context of boundary layers. History In 1883 Osborne Reynolds demonstrated the transition to turbulent flow in a classic experiment in which he examined the behaviour of water flow under different flow rates using a small jet of dyed water introduced into the centre of flow in a larger pipe. The larger pipe was glass, so the behaviour of the layer of dyed flow could be observed, and at the end of this pipe was a flow-control valve used to vary the water velocity inside the tube. When the velocity was low, the dyed layer remained distinct through the entire length of the large tube. When the velocity was increased, the layer broke up at a given point and diffused throughout the fluid's cross-section. The point at which this happened was the transition point from laminar to turbulent flow. Reynolds identified the governing parameter for the onset of this effect, which was a dimensionless constant later called the Reynolds number. Reynolds found that the transition occurred between Re = 2000 and 13000, depending on the smoothness of the entry conditions. When extreme care is taken, the transition can even happen with Re as high as 40000. On the other hand, Re = 2000 appears to be about the lowest value obtained at a rough entrance. Reynolds' publications in fluid dynamics began in the early 1870s. His final theoretical model published in the mid-1890s is still the standard mathematical framework used today. Examples of titles from his more groundbreaking reports are: Improvements in Apparatus for Obtaining Motive Power from Fluids and also for Raising or Forcing Fluids (1875) An experimental investigation of the circumstances which determine whether the motion of water in parallel channels shall be direct or sinuous and of the law of resistance in parallel channels (1883) On the dynamical theory of incompressible viscous fluids and the determination of the criterion (1895) Transition stages in a boundary layer A boundary layer can transition to turbulence through a number of paths. Which path is realized physically depends on the initial conditions such as initial disturbance amplitude and surface roughness. The level of understanding of each phase varies greatly, from near complete understanding of primary mode growth to a near-complete lack of understanding of bypass mechanisms. Receptivity The initial stage of the natural transition process is known as the Receptivity phase and consists of the transformation of environmental disturbances – both acoustic (sound) and vortical (turbulence) – into small perturbations within the boundary layer. The mechanisms by which these disturbances arise are varied and include freestream sound and/or turbulence interacting with surface curvature, shape discontinuities and surface roughness. These initial conditions are small, often unmeasurable perturbations to the basic state flow. From here, the growth (or decay) of these disturbances depends on the nature of the disturbance and the nature of the basic state. 
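As a quick illustration of the criterion described in the history section (a sketch, not part of the original article; the fluid properties are ordinary room-temperature water values and the thresholds are the pipe-flow figures quoted above), the Reynolds number Re = ρvD/μ can be computed and compared against Reynolds' observed transition range.

```python
def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * v * D / mu for flow in a circular pipe."""
    return density * velocity * diameter / viscosity

# Room-temperature water in a 25 mm pipe (typical textbook property values)
rho, mu, D = 998.0, 1.0e-3, 0.025   # kg/m^3, Pa*s, m

for v in (0.05, 0.10, 0.50, 2.00):  # mean flow speeds, m/s
    Re = reynolds_number(rho, v, D, mu)
    if Re < 2000:
        regime = "laminar (below the lower transition value reported by Reynolds)"
    elif Re < 13000:
        regime = "transitional (within Reynolds' observed 2000-13000 range)"
    else:
        regime = "turbulent for ordinary entry conditions"
    print(f"v = {v:4.2f} m/s  ->  Re = {Re:8.0f}  ({regime})")
```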
Acoustic disturbances tend to excite two-dimensional instabilities such as Tollmien–Schlichting waves (T-S waves), while vortical disturbances tend to lead to the growth of three-dimensional phenomena such as the crossflow instability. Numerous experiments in recent decades have revealed that the extent of the amplification region, and hence the location of the transition point on the body surface, is strongly dependent not only upon the amplitude and/or the spectrum of external disturbances but also on their physical nature. Some of the disturbances easily penetrate into the boundary layer whilst others do not. Consequently, the concept of boundary layer transition is a complex one and still lacks a complete theoretical exposition. Primary mode growth If the initial, environmentally-generated disturbance is small enough, the next stage of the transition process is that of primary mode growth. In this stage, the initial disturbances grow (or decay) in a manner described by linear stability theory. The specific instabilities that are exhibited in reality depend on the geometry of the problem and the nature and amplitude of initial disturbances. Across a range of Reynolds numbers in a given flow configuration, the most amplified modes can and often do vary. There are several major types of instability which commonly occur in boundary layers. In subsonic and early supersonic flows, the dominant two-dimensional instabilities are T-S waves. For flows in which a three-dimensional boundary layer develops such as a swept wing, the crossflow instability becomes important. For flows navigating concave surface curvature, Görtler vortices may become the dominant instability. Each instability has its own physical origins and its own set of control strategies - some of which are contraindicated by other instabilities – adding to the difficulty in controlling laminar-turbulent transition. Simple harmonic boundary layer sound in the physics of transition to turbulence Simple harmonic sound as a precipitating factor in the sudden transition from laminar to turbulent flow might be attributed to Elizabeth Barrett Browning. Her poem, Aurora Leigh (1856), revealed how musical notes (the pealing of a particular church bell), triggered wavering turbulence in the previously steady laminar-flow flames of street gaslights (“...gaslights tremble in the streets and squares”). Her instantly acclaimed poem might have alerted scientists (e.g., Leconte 1859) to the influence of simple harmonic (SH) sound as a cause of turbulence. A contemporary flurry of scientific interest in this effect culminated in Sir John Tyndall (1867) deducing that specific SH sounds, directed perpendicular to the flow had waves that blended with similar SH waves created by friction along the boundaries of tubes, amplifying them and triggering the phenomenon of high-resistance turbulent flow. His interpretation re-surfaced over 100 years later (Hamilton 2015). Walter Tollmien (1931) and Hermann Schlichting (1929) proposed that friction (viscosity) along a smooth flat boundary, created SH boundary layer (BL) oscillations that gradually increased in amplitude until turbulence erupted. Although contemporary wind tunnels failed to confirm the theory, Schubauer and Skramstad (1943) created a refined wind tunnel that deadened the vibrations and sounds that might impinge on the wind tunnel flat plate flow studies. They confirmed the development of SH long-crested BL oscillations, the dynamic shear waves of transition to turbulence. 
They showed that specific SH fluttering vibrations induced electromagnetically into a BL ferromagnetic ribbon could amplify similar flow-induced SH BL flutter (BLF) waves, precipitating turbulence at much lower flow rates. Furthermore, certain other specific frequencies interfered with the development of the SH BLF waves, preserving laminar flow to higher flow rates. An oscillation of a mass in a fluid is a vibration that creates a sound wave. SH BLF oscillations in boundary layer fluid along a flat plate must produce SH sound that reflects off the boundary perpendicular to the fluid laminae. In late transition, Schubauer and Skramstad found foci of amplification of BL oscillations, associated with bursts of noise (“turbulent spots”). Focal amplification of the transverse sound in late transition was associated with BL vortex formation. The focal amplified sound of turbulent spots along a flat plate with high energy oscillation of molecules perpendicularly through the laminae, might suddenly cause localized freezing of laminar slip. The sudden braking of “frozen” spots of fluid would transfer resistance to the high resistance at the boundary, and might explain the head-over-heels BL vortices of late transition. Osborne Reynolds described similar turbulent spots during transition in water flow in cylinders ("flashes of turbulence"). When many random vortices erupt as turbulence onsets, the generalized freezing of laminar slip (laminar interlocking) is associated with noise and a dramatic increase in resistance to flow. This might also explain the parabolic isovelocity profile of laminar flow abruptly changing to the flattened profile of turbulent flow – as laminar slip is replaced by laminar interlocking as turbulence erupts (Hamilton 2015). Secondary instabilities The primary modes themselves don't actually lead directly to breakdown, but instead lead to the formation of secondary instability mechanisms. As the primary modes grow and distort the mean flow, they begin to exhibit nonlinearities and linear theory no longer applies. Complicating the matter is the growing distortion of the mean flow, which can lead to inflection points in the velocity profile a situation shown by Lord Rayleigh to indicate absolute instability in a boundary layer. These secondary instabilities lead rapidly to breakdown. These secondary instabilities are often much higher in frequency than their linear precursors. See also Transition modeling References Boundary layers Aerodynamics Chaos theory Turbulence Transport phenomena Fluid dynamics
Laminar–turbulent transition
[ "Physics", "Chemistry", "Engineering" ]
1,868
[ "Transport phenomena", "Physical phenomena", "Turbulence", "Chemical engineering", "Aerodynamics", "Boundary layers", "Aerospace engineering", "Piping", "Fluid dynamics" ]
23,164,211
https://en.wikipedia.org/wiki/Shading%20coefficient
Shading coefficient (SC) is a measure of thermal performance of a glass unit (panel or window) in a building. It is the ratio of solar gain (due to direct sunlight) passing through a glass unit to the solar energy which passes through 3 mm clear float glass. It is an indicator of how well the glass is thermally insulating (shading) the interior when there is direct sunlight on the panel or window. The shading coefficient depends on the color of the glass and its degree of reflectivity. For reflective glass it also depends on the type of reflective metal oxides used. Sputter-coated reflective and/or sputter-coated low-emissivity glasses tend to have a lower SC than comparable pyrolytically coated reflective and/or low-emissivity glass. The value ranges from 1.00 to 0.00, but experiments show that the value of the SC is typically between 0.98 and 0.10. The lower the rating, the less solar heat is transmitted through the glass, and the greater its shading ability. Solar properties play a significant role in the selection of glass, especially in regions or cardinal directions with high solar exposure. They become less significant in situations where direct sunlight is not a major factor (e.g., windows completely shaded by overhangs). Window design methods have moved away from the shading coefficient to the Solar Heat Gain Coefficient (SHGC), which is defined as the fraction of incident solar radiation that actually enters a building through the entire window assembly as heat gain (not just the glass portion). Though the shading coefficient is still mentioned in manufacturer product literature and some industry computer software, it is no longer mentioned as an option in the handbook widely used by building energy engineers or in model building codes. Industry technical experts recognized the limitations of SC and pushed towards SHGC before the early 1990s. A conversion from SC to SHGC is not necessarily straightforward, as they each take into account different heat transfer mechanisms and paths (window assembly vs. glass-only). To perform an approximate conversion from SC to SHGC, multiply the SC value by 0.87. References Glass engineering and science Glass architecture Glass physics Shading
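A short sketch of the approximate conversion mentioned above; the factor 0.87 is the rule of thumb stated in the text, and the sample SC values are invented for illustration.

```python
def shgc_from_sc(sc):
    """Approximate Solar Heat Gain Coefficient from Shading Coefficient (SHGC ~ 0.87 * SC)."""
    if not 0.0 <= sc <= 1.0:
        raise ValueError("shading coefficient is expected to lie between 0.00 and 1.00")
    return 0.87 * sc

for sc in (0.95, 0.70, 0.35):
    print(f"SC = {sc:.2f}  ->  SHGC ~ {shgc_from_sc(sc):.2f}")
```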
Shading coefficient
[ "Physics", "Materials_science", "Engineering" ]
448
[ "Glass engineering and science", "Materials science", "Glass architecture", "Glass physics", "Condensed matter physics" ]
23,167,524
https://en.wikipedia.org/wiki/Kreft%27s%20dichromaticity%20index
Kreft's dichromaticity index (DI) is a measure for quantification of dichromatism. It is defined as the difference in hue angle (Δhab) between the color of the sample at the dilution where the chroma (color saturation) is maximal, and the color of a four times more diluted (or thinner) and a four times more concentrated (or thicker) sample. The two hue angle differences are called the dichromaticity index towards lighter (Kreft's DIL) and the dichromaticity index towards darker (Kreft's DID), respectively. Kreft's dichromaticity indexes DIL and DID for pumpkin seed oil, which is one of the most dichromatic substances, are −9 and −44, respectively. This means that pumpkin seed oil changes its color from green-yellow to orange-red (by 44 degrees in Lab color space) when the thickness of the observed layer is increased from about 0.5 mm to 2 mm, and it changes slightly towards green (by 9 degrees) if its thickness is reduced four-fold. The accompanying figure presents the color of pumpkin oil at increasing thickness or concentration in the CIELAB colorspace diagram. Straight lines are vectors showing hue (angle) and chroma (length) of the color at maximal chroma (toward the square mark), and the colors of four-fold less or more diluted or thick pumpkin oil (DIL and DID). Note that DID is −44.1 degrees and DIL corresponds to −8.97 degrees. Dichromaticity (DIL and DID) of selected substances can be calculated from their VIS absorption spectra by the computer algorithm "Dichromaticity index calculator". In the associated table: Maximal chroma: chroma at the concentration (thickness) where the color of the substance has maximal chroma (saturation). Angle at maximal chroma: the hue, represented by the angle of the vector to the color with maximal chroma in the CIELAB colorspace diagram. References External links Free Dichromaticity Index Calculator software Optics Color Index numbers Scales Spectroscopy
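The hue-angle comparison underlying the index can be sketched as follows; this is not from Kreft's papers, the CIELAB readings are invented to mimic a pumpkin-seed-oil-like shift from yellow-green towards red, and the sign handling of the difference is illustrative only.

```python
import math

def hue_angle_deg(a, b):
    """CIELAB hue angle h_ab in degrees (0-360), from the a* and b* coordinates."""
    return math.degrees(math.atan2(b, a)) % 360.0

def chroma(a, b):
    """CIELAB chroma C*_ab."""
    return math.hypot(a, b)

# Invented (a*, b*) readings: at 4x thinner, at maximal chroma, and at 4x thicker layers
samples = {"4x thinner": (-15.0, 90.0), "max chroma": (-5.0, 95.0), "4x thicker": (40.0, 50.0)}

h_ref = hue_angle_deg(*samples["max chroma"])
for name, (a, b) in samples.items():
    dh = hue_angle_deg(a, b) - h_ref
    print(f"{name:>10}: C* = {chroma(a, b):5.1f}, h_ab = {hue_angle_deg(a, b):6.1f} deg, "
          f"delta h vs max chroma = {dh:+6.1f} deg")
```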
Kreft's dichromaticity index
[ "Physics", "Chemistry", "Mathematics" ]
447
[ "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Optics", "Molecular physics", "Instrumental analysis", "Mathematical objects", " molecular", "Atomic", "Index numbers", "Spectroscopy", "Numbers", " and optical physics" ]
5,324,019
https://en.wikipedia.org/wiki/Grand%20potential
The grand potential or Landau potential or Landau free energy is a quantity used in statistical mechanics, especially for irreversible processes in open systems. The grand potential is the characteristic state function for the grand canonical ensemble. Definition The grand potential is defined by ΦG = U − TS − μN, where U is the internal energy, T is the temperature of the system, S is the entropy, μ is the chemical potential, and N is the number of particles in the system. The change in the grand potential is given by dΦG = −S dT − P dV − N dμ, where P is pressure and V is volume, using the fundamental thermodynamic relation dU = T dS − P dV + μ dN (the combined first and second thermodynamic laws). When the system is in thermodynamic equilibrium, ΦG is a minimum. This can be seen by considering that dΦG is zero if the volume is fixed and the temperature and chemical potential have stopped evolving. Landau free energy Some authors refer to the grand potential as the Landau free energy or Landau potential and write its definition as Ω ≡ F − μN = U − TS − μN (with F the Helmholtz free energy), named after Russian physicist Lev Landau; depending on system stipulations it may be a synonym for the grand potential. For homogeneous systems, one obtains Ω = −PV. Homogeneous systems (vs. inhomogeneous systems) In the case of a scale-invariant type of system (where a system of volume λV has exactly the same set of microstates as λ systems of volume V), then when the system expands new particles and energy will flow in from the reservoir to fill the new volume with a homogeneous extension of the original system. The pressure, then, must be constant with respect to changes in volume, and all extensive quantities (particle number, energy, entropy, potentials, ...) must grow linearly with volume, e.g. N ∝ V. In this case we simply have ΦG = −PV, as well as the familiar relationship G = μN for the Gibbs free energy. The value of ΦG can be understood as the work that can be extracted from the system by shrinking it down to nothing (putting all the particles and energy back into the reservoir). The fact that ΦG is negative implies that the extraction of particles from the system to the reservoir requires energy input. Such homogeneous scaling does not exist in many systems. For example, when analyzing the ensemble of electrons in a single molecule or even a piece of metal floating in space, doubling the volume of the space does not double the number of electrons in the material. The problem here is that, although electrons and energy are exchanged with a reservoir, the material host is not allowed to change. Generally in small systems, or systems with long range interactions (those outside the thermodynamic limit), ΦG ≠ −PV. See also Gibbs energy Helmholtz energy References External links Grand Potential (Manchester University) Thermodynamics Lev Landau
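As a numerical check of the homogeneous-system relation discussed above (a sketch, not from the article; it uses the monatomic ideal gas, whose entropy is given by the Sackur–Tetrode equation, with invented argon-like parameters chosen to stay in the classical regime), one can verify that U − TS − μN and −PV agree.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
h   = 6.62607015e-34    # Planck constant, J*s

# Argon-like monatomic ideal gas in the classical regime (n * lambda^3 << 1)
m = 39.948 * 1.66053907e-27   # atomic mass, kg
T = 300.0                     # temperature, K
N = 1.0e23                    # number of particles
V = 1.0e-3                    # volume, m^3 (one litre)

lam = h / math.sqrt(2.0 * math.pi * m * k_B * T)    # thermal de Broglie wavelength
n = N / V                                           # number density

U  = 1.5 * N * k_B * T                              # internal energy
S  = N * k_B * (math.log(1.0 / (n * lam**3)) + 2.5) # Sackur-Tetrode entropy
mu = k_B * T * math.log(n * lam**3)                 # chemical potential
P  = N * k_B * T / V                                # ideal-gas pressure

phi_from_definition = U - T * S - mu * N
phi_homogeneous     = -P * V

print("Phi_G = U - T*S - mu*N :", phi_from_definition, "J")
print("-P*V                   :", phi_homogeneous, "J")
```

Both expressions evaluate to −NkT (about −414 J for these sample values), as expected for a homogeneous system.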
Grand potential
[ "Physics", "Chemistry", "Mathematics" ]
544
[ "Thermodynamics", "Dynamical systems" ]
5,324,495
https://en.wikipedia.org/wiki/Mixed%20acid%20fermentation
In biochemistry, mixed acid fermentation is the metabolic process by which a six-carbon sugar (e.g. glucose, C6H12O6) is converted into a complex and variable mixture of acids. It is an anaerobic (non-oxygen-requiring) fermentation reaction that is common in bacteria. It is characteristic of members of the Enterobacteriaceae, a large family of Gram-negative bacteria that includes E. coli. The mixture of end products produced by mixed acid fermentation includes lactate, acetate, succinate, formate, ethanol and the gases H2 and CO2. The formation of these end products depends on the presence of certain key enzymes in the bacterium. The proportion in which they are formed varies between different bacterial species. The mixed acid fermentation pathway differs from other fermentation pathways, which produce fewer end products in fixed amounts. The end products of mixed acid fermentation can have many useful applications in biotechnology and industry. For instance, ethanol is widely used as a biofuel. Therefore, multiple bacterial strains have been metabolically engineered in the laboratory to increase the individual yields of certain end products. This research has been carried out primarily in E. coli and is ongoing. Variations of mixed acid fermentation occur in a number of bacterial species, including bacterial pathogens such as Haemophilus influenzae where mostly acetate and succinate are produced and lactate can serve as a growth substrate. Mixed acid fermentation in E. coli E. coli use fermentation pathways as a final option for energy metabolism, as they produce very little energy in comparison to respiration. Mixed acid fermentation in E. coli occurs in two stages. These stages are outlined by the biological database for E. coli, EcoCyc. The first of these two stages is a glycolysis reaction. Under anaerobic conditions, a glycolysis reaction takes place where glucose is converted into pyruvate:       glucose → 2 pyruvate There is a net production of 2 ATP and 2 NADH molecules per molecule of glucose converted. ATP is generated by substrate-level phosphorylation. NADH is formed by the reduction of NAD+. In the second stage, pyruvate produced by glycolysis is converted to one or more end products via the following reactions. In each case, both of the NADH molecules generated by glycolysis are reoxidized to NAD+. Each alternative pathway requires a different key enzyme in E. coli. After the variable amounts of different end products are formed by these pathways, they are secreted from the cell. Lactate formation Pyruvate produced by glycolysis is converted to lactate. This reaction is catalysed by the enzyme lactate dehydrogenase (LDHA).       pyruvate + NADH + H+ → lactate + NAD+ Acetate formation Pyruvate is converted into acetyl-coenzyme A (acetyl-CoA) by the enzyme pyruvate dehydrogenase. This acetyl-CoA is then converted into acetate in E. coli, whilst producing ATP by substrate-level phosphorylation. Acetate formation requires two enzymes: phosphate acetyltransferase and acetate kinase.       acetyl-CoA + phosphate → acetyl-phosphate + CoA       acetyl-phosphate + ADP → acetate + ATP Ethanol formation Ethanol is formed in E. coli by the reduction of acetyl coenzyme A using NADH. This two-step reaction requires the enzyme alcohol dehydrogenase (ADHE).       acetyl-CoA + NADH + H+ → acetaldehyde + NAD+ + CoA       acetaldehyde + NADH + H+ → ethanol + NAD+ Formate formation Formate is produced by the cleavage of pyruvate. 
This reaction is catalysed by the enzyme pyruvate-formate lyase (PFL), which plays an important role in regulating anaerobic fermentation in E. coli.       pyruvate + CoA → acetyl-CoA + formate Succinate formation Succinate is formed in E. coli in several steps. Phosphoenolpyruvate (PEP), a glycolysis pathway intermediate, is carboxylated by the enzyme PEP carboxylase to form oxaloacetate. This is followed by the conversion of oxaloacetate to malate by the enzyme malate dehydrogenase. Fumarate hydratase then catalyses the dehydration of malate to produce fumarate.       phosphoenolpyruvate + HCO3 → oxaloacetate + phosphate       oxaloacetate + NADH + H+ → malate + NAD+       malate → fumarate + H2O The final reaction in the formation of succinate is the reduction of fumarate. It is catalysed by the enzyme fumarate reductase.       fumarate + NADH + H+ → succinate + NAD+ This reduction is an anaerobic respiration reaction in E. coli, as it uses electrons associated with NADH dehydrogenase and the electron transport chain. ATP is generated by using an electrochemical gradient and ATP synthase. This is the only case in the mixed acid fermentation pathway where ATP is not produced via substrate-level phosphorylation. Vitamin K2, also known as menaquinone, is very important for electron transport to fumarate in E. coli. Hydrogen and carbon dioxide formation Formate can be converted to hydrogen gas and carbon dioxide in E. coli. This reaction requires the enzyme formate-hydrogen lyase. It can be used to prevent the conditions inside the cell becoming too acidic.       formate → H2 + CO2 Methyl red test The methyl red (MR) test can detect whether the mixed acid fermentation pathway occurs in microbes when given glucose. A pH indicator is used that turns the test solution red if the pH drops below 4.4. If the fermentation pathway has taken place, the mixture of acids it has produced will make the solution very acidic and cause a red colour change. The methyl red test belongs to a group known as the IMViC tests. Metabolic engineering Multiple bacterial strains have been metabolically engineered to increase the individual yields of end products formed by mixed acid fermentation. For instance, strains for the increased production of ethanol, lactate, succinate and acetate have been developed due to the usefulness of these products in biotechnology. The major limiting factor for this engineering is the need to maintain a redox balance in the mixture of acids produced by the fermentation pathway. For ethanol production Ethanol is the most commonly used biofuel and can be produced on a large scale via fermentation. The maximum theoretical yield for the production of ethanol was achieved around 20 years ago. A plasmid that carried the pyruvate decarboxylase and alcohol dehydrogenase genes from the bacterium Z. mobilis was used by scientists. This was inserted into E. coli and resulted in an increased yield of ethanol. The genome of this E. coli strain, KO11, has more recently been sequenced and mapped. For acetate production The E. coli strain W3110 was genetically engineered to generate 2 moles of acetate for every 1 mole of glucose that undergoes fermentation. This is known as a homoacetate pathway. For lactate production Lactate can be used to produce a bioplastic called polylactic acid (PLA). The properties of PLA depend on the ratio of the two optical isomers of lactate (D-lactate and L-lactate). D-lactate is produced by mixed acid fermentation in E. coli. Early experiments engineered the E. 
coli strain RR1 to produce either one of the two optical isomers of lactate. Later experiments modified the E. coli strain KO11, originally developed to enhance ethanol production. Scientists were able to increase the yield of D-lactate from fermentation by performing several deletions. For succinate production Increasing the yield of succinate from mixed acid fermentation was first done by overexpressing the enzyme PEP carboxylase. This produced a succinate yield that was approximately 3 times greater than normal. Several experiments using a similar approach have followed. Alternative approaches have altered the redox and ATP balance to optimize the succinate yield. Related fermentation pathways There are a number of other fermentation pathways that occur in microbes. All these pathways begin by converting pyruvate, but their end products and the key enzymes they require are different. These pathways include: Ethanol fermentation Lactic acid fermentation Propionic acid fermentation Butanol fermentation Butanediol fermentation External links Mixed acid fermentation EcoCyc Summary of Fermentation References Anaerobic digestion Fermentation
Mixed acid fermentation
[ "Chemistry", "Engineering", "Biology" ]
1,875
[ "Cellular respiration", "Biochemistry", "Anaerobic digestion", "Environmental engineering", "Water technology", "Fermentation" ]
5,326,179
https://en.wikipedia.org/wiki/Polhode
The details of a spinning body may impose restrictions on the motion of its angular velocity vector, . The curve produced by the angular velocity vector on the inertia ellipsoid, is known as the polhode, coined from Greek meaning "path of the pole". The surface created by the angular velocity vector is termed the body cone. History The concept of polhode motion dates back to the 17th century, and Corollary 21 to Proposition 66 in Section 11, Book 1, of Isaac Newton's Principia Mathematica. Later Leonhard Euler derived a set of equations that described the dynamics of rigid bodies in torque-free motion. In particular, Euler and his contemporaries Jean d’Alembert, Louis Lagrange, and others noticed small variations in latitude due to wobbling of the Earth around its polar spin axis. A portion of this wobble (later to be called the Earth’s polhode motion) was due to the natural, torque-free behavior of the rotating Earth. Assuming that the Earth was a completely rigid body, they calculated the period of Earth’s polhode wobble to be about 9–10 months. During the mid 19th century, Louis Poinsot developed a geometric interpretation of the physics of rotating bodies that provided a visual counterpart to Euler’s algebraic equations. Poinsot was a contemporary of Léon Foucault, who invented the gyroscope and whose pendulum experiments provided incontrovertible evidence that the Earth rotates. In the fashion of the day, Poinsot coined the terms polhode and its counterpart, herpolhode, to describe this wobble in the motion of rotating rigid bodies. Poinsot derived these terms from the ancient Greek pólos (pivot or end of an axis) + hodós (path or way)—thus, polhode is the path of the pole. Poinsot’s geometric interpretation of Earth’s polhode motion was still based on the assumption that the Earth was a completely rigid rotating body. It was not until 1891 that the American astronomer, Seth Carlo Chandler, made measurements showing that there was a periodic motion of 14 months in the Earth’s wobble and suggesting that this was the polhode motion. Initially, Chandler’s measurement, now referred to as the “Chandler wobble”, was dismissed because it was significantly greater than the long-accepted 9–10 month period calculated by Euler, Poinsot, et al. and because Chandler was unable convincingly to explain this discrepancy. However, within months, another American astronomer, Simon Newcomb, realized that Chandler was correct and provided a plausible reason for Chandler’s measurements. Newcomb realized that the Earth’s mass is partly rigid and partly elastic, and that the elastic component has no effect on the Earth’s polhode period, because the elastic part of the Earth’s mass stretches so that it is always symmetrical about the Earth’s spin axis. The rigid part of the Earth’s mass is not symmetrically distributed, and this is what causes the Chandler Wobble, or more precisely, the Earth’s polhode path. Description Every solid body inherently has three principal axes through its center of mass, and each of these axes has a corresponding moment of inertia. The moment of inertia about an axis is a measurement of how difficult it is to accelerate the body about that axis. The closer the concentration of mass to the axis, the smaller the torque required to get it spinning at the same rate about that axis. The moment of inertia of a body depends on the mass distribution of the body and on the arbitrarily selected axis about which the moment of inertia is defined. 
The moments of inertia about two of the principal axes are the maximum and minimum moments of inertia of the body about any axis. The third is perpendicular to the other two and has a moment of inertia somewhere between the maximum and minimum. If energy is dissipated while an object is rotating, this will cause the polhode motion about the axis of maximum inertia (also called the major principal axis) to damp out or stabilize, with the polhode path becoming a smaller and smaller ellipse or circle, closing in on the axis. A body is never stable when spinning about the intermediate principal axis, and dissipated energy will cause the polhode to start migrating to the object’s axis of maximum inertia. The transition point between two stable axes of rotation is called the separatrix along which the angular velocity passes through the axis of intermediate inertia. Rotation about the axis of minimum inertia (also called the minor principal axis) is also stable, but given enough time, any perturbations due to energy dissipation or torques would cause the polhode path to expand, in larger and larger ellipses or circles, and eventually migrate through the separatrix and its axis of intermediate inertia to its axis of maximum inertia. It is important to note that these changes in the orientation of the body as it spins may not be due to external torques, but rather result from energy dissipated internally as the body is spinning. Even if angular momentum is conserved (no external torques), internal energy can be dissipated during rotation if the body is not perfectly rigid, and any rotating body will continue to change its orientation until it has stabilized around its axis of maximum inertia, where the amount of energy corresponding to its angular momentum is least. See also Gravity Probe B Herpolhode Poinsot's construction References Rigid bodies Mechanics
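The stability statements above can be checked numerically by integrating the torque-free Euler equations for a rigid body. The sketch below is a minimal illustration under assumed principal moments of inertia; it covers only the perfectly rigid, non-dissipative case, so it shows the stability of spin about the major and minor axes and the tumbling about the intermediate axis, not the slow migration caused by internal energy dissipation.

```python
import numpy as np

# Torque-free Euler equations for a rigid body with principal moments of inertia I1 < I2 < I3.
# The numerical values are illustrative only; any asymmetric body behaves the same way qualitatively.
I = np.array([1.0, 2.0, 3.0])

def euler_rhs(w):
    return np.array([
        (I[1] - I[2]) * w[1] * w[2] / I[0],
        (I[2] - I[0]) * w[2] * w[0] / I[1],
        (I[0] - I[1]) * w[0] * w[1] / I[2],
    ])

def integrate(w0, dt=1e-3, steps=20000):
    w = np.array(w0, dtype=float)
    for _ in range(steps):              # classical 4th-order Runge-Kutta
        k1 = euler_rhs(w)
        k2 = euler_rhs(w + 0.5 * dt * k1)
        k3 = euler_rhs(w + 0.5 * dt * k2)
        k4 = euler_rhs(w + dt * k3)
        w += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return w

# Spin about the maximum- and minimum-inertia axes with a tiny perturbation: the wobble stays small.
print(integrate([0.01, 0.0, 1.0]))   # remains close to the I3 (major) axis
print(integrate([1.0, 0.01, 0.0]))   # remains close to the I1 (minor) axis
# Spin about the intermediate axis: the small perturbation grows and the body tumbles.
print(integrate([0.01, 1.0, 0.0]))
```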
Polhode
[ "Physics", "Engineering" ]
1,165
[ "Mechanics", "Mechanical engineering" ]
5,329,800
https://en.wikipedia.org/wiki/Circadian%20rhythm%20sleep%20disorder
Circadian rhythm sleep disorders (CRSD), also known as circadian rhythm sleep–wake disorders (CRSWD), are a family of sleep disorders that affect the timing of sleep. CRSDs cause a persistent pattern of sleep/wake disturbances that arise either by dysfunction in one's biological clock system, or by misalignment between one's endogenous oscillator and externally imposed cues. As a result of this misalignment, those affected by circadian rhythm sleep disorders can fall asleep at unconventional time points in the day, or experience excessive daytime sleepiness if they resist. These occurrences often lead to recurring instances of disrupted rest and wakefulness, where individuals affected by the disorder are unable to go to sleep and awaken at "normal" times for work, school, and other social obligations. Delayed sleep phase disorder, advanced sleep phase disorder, non-24-hour sleep–wake disorder and irregular sleep–wake rhythm disorder represent the four main types of CRSD. Overview Humans, like most living organisms, have various biological rhythms. These biological clocks control processes that fluctuate daily (e.g., body temperature, alertness, hormone secretion), generating circadian rhythms. Among these physiological characteristics, the sleep–wake propensity can also be considered one of the daily rhythms regulated by the biological clock system. Humans' sleeping cycles are tightly regulated by a series of circadian processes working in tandem, allowing for the experience of moments of consolidated sleep during the night and a long wakeful moment during the day. ipRGCs (intrinsically photosensitive Retinal Ganglion Cells), for example, are involved in modulating the circadian rhythm because of the expression of melanopsin, which absorbs light in the blue part (around 480 nm). Conversely, disruptions to these processes and the communication pathways between them can lead to problems in sleeping patterns, which are collectively referred to as circadian rhythm sleep disorders. Normal rhythm A circadian rhythm is an entrainable, endogenous, biological activity that has a period of roughly twenty-four hours. This internal time-keeping mechanism is centralized in the suprachiasmatic nucleus (SCN) of humans, and allows for the internal physiological mechanisms underlying sleep and alertness to become synchronized to external environmental cues, like the light-dark cycle. The SCN also sends signals to peripheral clocks in other organs, like the liver, to control processes such as glucose metabolism. Although these rhythms will persist in constant light or dark conditions, different zeitgebers (time givers such as the light-dark cycle) give context to the clock and allow it to entrain and regulate expression of physiological processes to adjust to the changing environment. Genes that help control light-induced entrainment include positive regulators BMAL1 and CLOCK and negative regulators PER1 and CRY. A full circadian cycle can be described as a twenty-four hour circadian day, where circadian time zero (CT 0) marks the beginning of a subjective day for an organism and CT 12 marks the start of subjective night. Humans with regular circadian function have been shown to maintain regular sleep schedules, regulate daily rhythms in hormone secretion, and sustain oscillations in core body temperature. Even in the absence of zeitgebers, humans will continue to maintain a roughly 24-hour rhythm in these biological activities. 
Regarding sleep, normal circadian function allows people to maintain a balance of rest and wakefulness, so that they can work and stay alert during the day's activities and rest at night. Some misconceptions regarding circadian rhythms and sleep commonly mislabel irregular sleep as a circadian rhythm sleep disorder (CRSD). In order to be diagnosed with CRSD, there must be either a misalignment between the timing of the circadian oscillator and the surrounding environment, or failure in the clock entrainment pathway. Among people with typical circadian clock function, there is variation in chronotypes, or preferred wake and sleep times, between individuals. Although chronotype varies from individual to individual, as determined by rhythmic expression of clock genes, people with typical circadian clock function will be able to entrain to environmental cues. For example, if a person wishes to shift the onset of a biological activity, like waking time, light exposure during the late subjective night or early subjective morning can help advance one's circadian cycle earlier in the day, leading to an earlier wake time. Diagnosis The International Classification of Sleep Disorders classifies Circadian Rhythm Sleep Disorder as a type of sleep dyssomnia. Although studies suggest that 3% of the adult population has a CRSD, many people are often misdiagnosed with insomnia instead of a CRSD. Of adults diagnosed with sleep disorders, an estimated 10% have a CRSD and of adolescents with sleep disorders, an estimated 16% may have a CRSD. Patients diagnosed with circadian rhythm sleep disorders typically express a pattern of disturbed sleep, whether that be excessive sleep that intrudes on working schedules and daily functions, or insomnia at desired times of sleep. Note that having a preference for extreme early or late wake times is not related to a circadian rhythm sleep disorder diagnosis. There must be distinct impairment of biological rhythms that affects the person's desired work and daily behavior. For a CRSD diagnosis, a sleep specialist gathers the history of a patient's sleep and wake habits, body temperature patterns, and dim-light melatonin onset (DLMO). Gathering this data gives insight into the patient's current schedule, as well as the physiological phase markers of the patient's biological clock. The start of the CRSD diagnostic process is a thorough sleep history assessment. A standard questionnaire is used to record the sleep habits of the patient, including typical bedtime, sleep duration, sleep latency, and instances of waking up. The professional will further inquire about other external factors that may impact sleep. Prescription drugs that treat mood disorders like tricyclic antidepressants, selective serotonin reuptake inhibitors and other antidepressants are associated with abnormal sleep behaviors. Other daily habits like work schedule and timing of exercise are also recorded, because they may impact an individual's sleep and wake patterns. To measure sleep variables objectively, patients wear actigraphy watches that record sleep onset, wake time, and many other physiological variables. Patients are similarly asked to self-report their sleep habits with a week-long sleep diary to document when they go to bed, when they wake up, etc. to supplement the actigraphy data. Collecting this data allows sleep professionals to carefully document and measure patients' sleep habits and confirm patterns described in their sleep history. 
Other additional ways to classify the nature of a patient's sleep and biological clock are the morningness-eveningness questionnaire (MEQ) and the Munich ChronoType Questionnaire, both of which have fairly strong correlations with accurately reporting phase advanced or delayed sleep. Questionnaires like the Pittsburgh Sleep Quality Index (PSQI) and the Insomnia Severity Index (ISI) help gauge the severity of sleep disruption. Specifically, these questionnaires can help the professional assess the patient's problems with sleep latency, undesired early-morning wakefulness, and problems with falling or staying asleep. Tayside children's sleep questionnaire is a ten-item questionnaire for sleep disorders in children aged between one and five years old. Types Currently, the International Classification of Sleep Disorders (ICSD-3) lists 6 disorders under the category of circadian rhythm sleep disorders. CRSDs can be categorized into two groups based on their underlying mechanisms: The first category is composed of disorders where the endogenous oscillator has been altered, known as intrinsic type disorders. The second category consists of disorders in which the external environment and the endogenous circadian clock are misaligned, called extrinsic type CRSDs. Intrinsic Delayed sleep phase disorder (DSPD): Individuals who have been diagnosed with delayed sleep phase disorder have sleep–wake times that are delayed when compared to normal functioning individuals. People with DSPD typically have very long periods of sleep latency when they attempt to go to sleep during conventional sleeping times. Similarly, they also have trouble waking up at conventional times. Advanced sleep phase disorder (ASPD): People with advanced sleep phase disorder exhibit characteristics opposite to those with delayed sleep phase disorder. These individuals have advanced sleep–wake times, so they tend to go to bed and wake up much earlier as compared to normal individuals. ASPD is less common than DSPD, and is most prevalent within older populations. Familial Advanced Sleep Phase Syndrome (FASPS) is linked to an autosomal dominant mode of inheritance. It is associated with a missense mutation in human PER2 that replaces serine for glycine at position 662 (S662G). Irregular sleep–wake rhythm disorder (ISWRD) is characterized by a normal 24 h sleeping period. However, individuals with this disorder experience fragmented and highly disorganized sleep that can manifest in the form of waking frequently during the night and taking naps during the day, yet still maintaining sufficient total time asleep. People with ISWRD often experience a range of symptoms from insomnia to excessive daytime sleepiness. Non-24-hour sleep–wake disorder (N24SWD): Most common in individuals that are blind and unable to detect light, is characterized by chronic patterns of sleep/wake cycles that are not entrained to the 24 h light–dark environmental cycle. As a result of this, individuals with this disorder will usually experience a gradual yet predictable delay of sleep onset and waking times. Patients with DSPD may develop this disorder if their condition is untreated. Extrinsic Shift work sleep disorder (SWSD): Approximately 9% of Americans who work night or irregular work shifts are believed to experience shift work sleep disorder. Night shift work directly opposes the environmental cues that entrain our biological clock, so this disorder arises when an individual's clock is unable to adjust to the socially imposed work schedule. 
Shift work sleep disorder can lead to severe cases of insomnia as well as excessive daytime sleepiness. Jet lag: Jet lag is best characterized by difficulty falling asleep or staying asleep as a result of misalignment between one's internal circadian system and external, or environmental cues. It is typically associated with rapid travel across multiple time zones. Alzheimer's disease CRSD has been frequently associated with excessive daytime sleepiness and nighttime insomnia in patients diagnosed with Alzheimer's disease (AD), representing a common characteristic among AD patients as well as a risk factor of progressive functional impairments. On one hand, it has been stated that people with AD have melatonin alteration and high irregularity in their circadian rhythm that lead to a disrupted sleep–wake cycle, probably due to damage on hypothalamic SCN regions typically observed in AD. On the other hand, disturbed sleep and wakefulness states have been related to worsening of an AD patient's cognitive abilities, emotional state and quality of life. Moreover, the abnormal behavioural symptoms of the disease negatively contribute to overwhelming patients' relatives and caregivers as well. However, the impact of sleep–wake disturbances on the subjective experience of a person with AD is not yet fully understood. Therefore, further studies exploring this field have been highly recommended, mainly considering the increasing life expectancy and significance of neurodegenerative diseases in clinical practices. Treatment Possible treatments for circadian rhythm sleep disorders include: Chronotherapy has been shown to effectively treat delayed sleep phase disorder; it acts by systematically delaying an individual's bedtime until their sleep–wake times coincide with the conventional 24 h day. Light therapy utilizes bright light exposure to induce phase advances and delays in sleep and wake times. This therapy requires 30–60 minutes of exposure to a bright () white, blue, or natural light at a set time until the circadian clock is aligned with the desired schedule. Treatment is initially administered either upon awakening or before sleeping, and if successful may be continued indefinitely or performed less frequently. Though proven very effective in the treatment of individuals with DSPD and ASPD, the benefits of light therapy on N24SWD, shift work disorder, and jet lag have not been studied as extensively. Hypnotics have also been used clinically alongside bright light exposure therapy and pharmacotherapy for the treatment of CRSDs such as Advanced Sleep Phase Disorder. Additionally, in conjunction with cognitive behavioral therapy, short-acting hypnotics also present an avenue for treating co-morbid insomnia in patients with circadian sleep disorders. Melatonin, a naturally occurring biological hormone with circadian rhythmicity, has been shown to promote sleep and entrainment to external cues when administered in drug form (0.5–5.0 mg). Melatonin administered in the evening causes phase advances in sleep–wake times while maintaining duration and quality of sleep. Similarly, when administered in the early morning, melatonin can cause phase delays. It has been shown most effective in cases of shift work sleep disorder and delayed phase sleep disorder, but has not been proven particularly useful in cases of jet lag. 
Dark therapy, for example, the use of blue-blocking goggles, is used to block blue and blue-green wavelength light from reaching the eye during evening hours so as not to hinder melatonin production. See also Chronobiology Familial sleep traits Light effects on circadian rhythm Phase response curve Sleep diary Sleep medicine References External links Circadian Sleep Disorders Network An American Academy of Sleep Medicine Review: Circadian Rhythm Sleep Disorders: Part I, Basic Principles, Shift Work and Jet Lag Disorders. PDF, 24 pages. November 2007. An American Academy of Sleep Medicine Review: Circadian Rhythm Sleep Disorders: Part II, Advanced Sleep Phase Disorder, Delayed Sleep Phase Disorder, Free-Running Disorder, and Irregular Sleep–Wake Rhythm. PDF, 18 pages. November 2007. An American Academy of Sleep Medicine Report: Practice Parameters for the Clinical Evaluation and Treatment of Circadian Rhythm Sleep Disorders, November 1, 2007 NASA Sleep–Wake Actigraphy and Light Exposure During Spaceflight-Long Experiment Sleep disorders Circadian rhythm Neurophysiology Sleep physiology Circadian rhythm sleep–wake disorders
Circadian rhythm sleep disorder
[ "Biology" ]
2,955
[ "Behavior", "Sleep physiology", "Circadian rhythm", "Sleep disorders", "Sleep" ]
30,759,885
https://en.wikipedia.org/wiki/TeraChem
TeraChem is a computational chemistry software program designed for CUDA-enabled Nvidia GPUs. The initial development started at the University of Illinois at Urbana-Champaign and was subsequently commercialized. It is currently distributed by PetaChem, LLC, located in Silicon Valley. As of 2020, the software package is still under active development. Core features TeraChem is capable of fast ab initio molecular dynamics and can utilize density functional theory (DFT) methods for nanoscale biomolecular systems with hundreds of atoms. All the methods used are based on Gaussian orbitals, in order to improve performance on contemporary (2010s) computer hardware. Press coverage Chemical and Engineering News (C&EN) magazine of the American Chemical Society first mentioned the development of TeraChem in Fall 2008. Recently, C&EN magazine has a feature article covering molecular modeling on GPU and TeraChem. According to the 2010 post at the Nvidia blog, TeraChem has been tested to deliver 8-50 times better performance than General Atomic and Molecular Structure System (GAMESS). In that benchmark, TeraChem was executed on a desktop machine with four (4) Tesla GPUs and GAMESS was running on a cluster of 256 quad core CPUs. TeraChem is available for free via GPU Test Drive. Major release history 2017 TeraChem version 1.93P Support for Maxwell and Pascal GPUs (e.g. Titan X-Pascal, P100) Use of multiple basis sets for different elements $multibasis Use of polarizable continuum methods for ground and excited states 2016 TeraChem version 1.9 Support for Maxwell cards (e.g., GTX 980, Titan X) Effective core potentials (and gradients) Time-dependent density functional theory Continuum solvation models (COSMO) 2012 TeraChem version 1.5 Full support of polarization functions: energy, gradients, ab initio dynamics and range-corrected DFT functionals (CAMB3LYP, wPBE, wB97x) 2011 TeraChem version 1.5a (pre-release) Alpha version with the full support of d-functions: energy, gradients, ab initio dynamics TeraChem version 1.43b-1.45b Beta version with polarization functions for energy calculation (HF/DFT levels) as well as other improvements. TeraChem version 1.42 This version was first deployed at National Center for Supercomputing Applications' (NCSA) Lincoln supercomputer for National Science Foundation (NSF) TeraGrid users as announced in NCSA press release. 2010 TeraChem version 1.0 TeraChem version 1.0b The very first initial beta release was reportedly downloaded more than 4,000 times. Publication list Charge Transfer and Polarization in Solvated Proteins from Ab Initio Molecular Dynamics I. S. Ufimtsev, N. Luehr and T. J. Martinez Journal of Physical Chemistry Letters, Vol. 2, 1789-1793 (2011) Excited-State Electronic Structure with Configuration Interaction Singles and Tamm-Dancoff Time-Dependent Density Functional Theory on Graphical Processing Units C. M. Isborn, N. Luehr, I. S. Ufimtsev and T. J. Martinez Journal of Chemical Theory and Computation, Vol. 7, 1814-1823 (2011) Dynamic Precision for Electron Repulsion Integral Evaluation on Graphical Processing Units (GPUs) N. Luehr, I. S. Ufimtsev, and T. J. Martinez Journal of Chemical Theory and Computation, Vol. 7, 949-954 (2011) Quantum Chemistry on Graphical Processing Units. 3. Analytical Energy Gradients and First Principles Molecular Dynamics I. S. Ufimtsev and T. J. Martinez Journal of Chemical Theory and Computation, Vol. 5, 2619-2628 (2009) Quantum Chemistry on Graphical Processing Units. 2. Direct Self-Consistent Field Implementation I. S. 
Ufimtsev and T. J. Martinez Journal of Chemical Theory and Computation, Vol. 5, 1004-1015 (2009) Quantum Chemistry on Graphical Processing Units. 1. Strategies for Two-Electron Integral Evaluation I. S. Ufimtsev and T. J. Martinez Journal of Chemical Theory and Computation, Vol. 4, 222-231 (2008) Graphical Processing Units for Quantum Chemistry I. S. Ufimtsev and T. J. Martinez Computing in Science and Engineering, Vol. 10, 26-34 (2008) Preparation and characterization of stable aqueous higher-order fullerenes Nirupam Aich, Joseph R V Flora and Navid B Saleh Nanotechnology, Vol. 23, 055705 (2012) Filled Pentagons and Electron Counting Rule for Boron Fullerenes Kregg D. Quarles, Cherno B. Kah, Rosi N. Gunasinghe, Ryza N. Musin, and Xiao-Qian Wang Journal of Chemical Theory Computation, Vol. 7, 2017–2020 (2011) Sensitivity Analysis of Cluster Models for Calculating Adsorption Energies for Organic Molecules on Mineral Surfaces M. P. Andersson and S. L. S. Stipp Journal of Physical Chemistry C, Vol. 115, 10044–10055 (2011) Dispersion corrections in the boron buckyball and nanotubes Rosi N. Gunasinghe, Cherno B. Kah, Kregg D. Quarles, and Xiao-Qian Wang Applied Physics Letters 98, 261906 (2011) * Structural and electronic stability of a volleyball-shaped B80 fullerene Xiao-Qian Wang Physical Review B 82, 153409 (2010) Ab Initio Molecular Dynamics Simulations of Ketocyanine Dyes in Organic Solvents Andrzej Eilmes Lecture Notes in Computer Science, 7136/2012, 276-284 (2012) State Equation of a Model Methane Clathrate Cage Ruben Santamaria, Juan-Antonio Mondragon-Sanchez and Xim Bokhimi J. Phys. Chem. A, ASAP (2012) See also Quantum chemistry computer programs Molecular design software Molecule editor Comparison of software for molecular mechanics modeling List of software for Monte Carlo molecular modeling References Molecular modelling Computational chemistry Computational chemistry software Electronic structure methods
TeraChem
[ "Physics", "Chemistry" ]
1,322
[ "Quantum chemistry", "Computational chemistry software", "Chemistry software", "Molecular physics", "Quantum mechanics", "Computational physics", "Theoretical chemistry", "Electronic structure methods", "Computational chemistry", "Molecular modelling" ]
29,248,521
https://en.wikipedia.org/wiki/C14H16O10
{{DISPLAYTITLE:C14H16O10}} The molecular formula C14H16O10 (molar mass: 344.27 g/mol, exact mass: 344.0743 u) may refer to: Anthocyanone A, a degradation product of malvidin found in wine Theogallin, a phenolic compound found in tea Molecular formulas
C14H16O10
[ "Physics", "Chemistry" ]
82
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
29,250,676
https://en.wikipedia.org/wiki/Streptococcal%20pyrogenic%20exotoxin
Streptococcal pyrogenic exotoxins also known as erythrogenic toxins, are exotoxins secreted by strains of the bacterial species Streptococcus pyogenes. SpeA and speC are superantigens, which induce inflammation by nonspecifically activating T cells and stimulating the production of inflammatory cytokines. SpeB, the most abundant streptococcal extracellular protein, is a cysteine protease. Pyrogenic exotoxins are implicated as the causative agent of scarlet fever and streptococcal toxic shock syndrome. There is no consensus on the exact number of pyrogenic exotoxins. Serotypes A, B, and C are the most extensively studied and recognized by all sources, but others note up to thirteen distinct types, categorizing speF through speM as additional superantigens. Erythrogenic toxins are known to damage the plasma membranes of blood capillaries under the skin and produce a red skin rash (characteristic of scarlet fever). Past studies have shown that multiple variants of erythrogenic toxins may be produced, depending on the strain of S. pyogenes in question. Some strains may not produce a detectable toxin at all. Bacteriophage T12 infection of S. pyogenes enables the production of speA, and increases virulence. History Discovery and nomenclature SpeB was identified in 1919 as an ectoenzyme secreted by certain strains of streptococci. It was originally studied as two separate toxins, streptococcal pyrogenic exotoxin B and streptococcal cysteine proteinase, until it was shown that both proteins were encoded by the speB gene and that the attributed pyrogenic activities were due to contamination by SpeA and SpeC. Pyrogenic, in streptococcal pyrogenic exotoxin, means "causes fever." Erythrogenic refers to the typical red rash of scarlet fever. In older literature, these toxins are also referred to as scarlatina toxins or scarlet fever toxins due to their role as the causative agents of the disease. SpeB is known as streptococcal pyrogenic exotoxin B, streptopain and streptococcal cysteine proteinase as a result of its original misidentification as two separate toxins, and is neither an exotoxin nor pyrogenic. Structure Location of genes The speB and speJ genes are located in the core bacterial chromosome of all strains of S. pyogenes. However, despite its presence and high levels of conservation in the nucleotide sequence, 25–40% of these strains do not express the SpeB toxin in significant amounts. In contrast, speA, speC and speH-M are encoded by bacteriophages. There is a lack of consensus over the location of the speG gene, which has been attributed to both the core chromosome and lysogenic phages. Protein structure SpeB is a 28 kDa protein with three major forms, mSpeB1, mSpeB2 and mSpeB3, which are categorized by variations the primary amino acid sequence. Three amino acids, C192, H340, and W357, are vital for enzymatic activity in all variants. The toxin contains a canonical papain-like domain, and mSpeB2 has an additional human integrin binding domain. All superantigenic streptococcal pyrogenic exotoxins contain two major conserved protein domains that are linked by an α-helix, which consist of an amino-terminal oligosaccharide/oligonucleotide binding fold and a carboxy-terminal β-grasp domain, as well as a dodecapeptide binding region. SpeA also has a cystine loop, a low-affinity α-chain MHC II binding site, and the Vβ-TCR binding site. 
SpeC, SpeG, SpeH and SpeJ contain a Zn2+-dependent high-affinity β-chain MHC II binding site in addition to the low-affinity site present in SpeA, and lack the cystine loop. SpeH also has an additional α3-β8 loop that mediates the specificity of the toxin's Vβ-TCR binding site. Processing and regulation The speB gene encodes for an amino acid sequence that becomes the 40 kDa zymogen, known as SpeBz, after cleavage of the signal sequence. SpeBz undergoes autocatalysis through at least eight intermediates to create the 28 kDa SpeBm. Finally, cysteine-192 and histidine-340 form a catalytic dyad. Each step is tightly regulated by multiple factors, allowing sophisticated temporal expression of the mature proteinase. Mechanisms of action SpeA and speC SpeA and SpeC bind to MHC Class II molecules, are presented to T cells, and bind to the variable region of the beta chain of T-cell receptors (TCRs). Once activated, the T cells release pro-inflammatory cytokines and chemokines. The interactions with TCRs are characterized by low affinities and fast dissociation, allowing the toxin to activate multiple T cells in succession. The lack of specificity allows the activation of up to 50% of the T cells in the body. SpeB SpeB cleaves and degrades multiple proteins through hydrolysis, including cytokines, extracellular matrix proteins and immunoglobulin. It requires three amino acids before the cleavage site, known as P1, P2 and P3. Of these, SpeB has a preference for hydrophobic P2 and positively charged P1 residues, with greater importance of the P2 amino acid. Roles in virulence, pathogenesis and infection SpeB Streptococcal cysteine proteinase has roles in immune evasion and apoptosis, as well as potential influence on bacterial internalization. There is contradictory evidence regarding the effect of SpeB on virulence. Some studies have reported increased protease levels in strains that cause scarlet fever in comparison to those associated with streptococcal toxic shock syndrome, while others show decreased expression in more virulent strains. SpeB degrades immunoglobulins and cytokines and, through cleavage of C3b, inhibits the recruitment of phagocytic cells and the complement activation pathway. This results in decreased inflammation and neutrophil levels around the site of infection, preventing clearance through phagocytosis and promoting the survival of S. pyogenes. The toxin also induces apoptosis in host cells after GAS internalization. Evidence suggests that this may take place through extrinsic and intrinsic caspase pathways. The receptor-binding pathway and Fas-mediated apoptotic signaling pathway have been implicated in this process. The induction of apoptosis results in necrotizing fasciitis. References External links Todar's Online Textbook of Bacteriology Streptococcal Pyrogenic Exotoxin A1 Bacterial toxins Scarlet fever Proteins
Streptococcal pyrogenic exotoxin
[ "Chemistry" ]
1,520
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
29,252,190
https://en.wikipedia.org/wiki/Rescue%20fusion%20hybridization
Rescue fusion hybridization is a process used to manufacture some therapeutic cancer vaccines in which individual tumor cells obtained through biopsy are fused with an antibody-secreting cell to form a heterohybridoma. This cell then secretes the unique idiotype, or immunoglobulin antigen characteristic of the individual tumor, which is purified for use as the vaccine. It is used to produce the BiovaxID vaccine for follicular lymphoma. References Vaccination Oncology Chemical processes
Rescue fusion hybridization
[ "Chemistry", "Biology" ]
106
[ "Biochemistry", "Biotechnology stubs", "Chemical processes", "Biochemistry stubs", "nan", "Chemical process engineering", "Vaccination" ]
29,260,402
https://en.wikipedia.org/wiki/Toric%20code
The toric code is a topological quantum error correcting code, and an example of a stabilizer code, defined on a two-dimensional spin lattice. It is the simplest and most well studied of the quantum double models. It is also the simplest example of topological order—Z2 topological order (first studied in the context of Z2 spin liquid in 1991). The toric code can also be considered to be a Z2 lattice gauge theory in a particular limit. It was introduced by Alexei Kitaev. The toric code gets its name from its periodic boundary conditions, giving it the shape of a torus. These conditions give the model translational invariance, which is useful for analytic study. However, some experimental realizations require open boundary conditions, allowing the system to be embedded on a 2D surface. The resulting code is typically known as the planar code. This has identical behaviour to the toric code in most, but not all, cases. Error correction and computation The toric code is defined on a two-dimensional lattice, usually chosen to be the square lattice, with a spin-½ degree of freedom located on each edge. The boundary conditions are chosen to be periodic. Stabilizer operators are defined on the spins around each vertex v and plaquette p (or face, i.e. a vertex of the dual lattice) of the lattice as follows: A_v = ∏_{i∈v} σ^x_i and B_p = ∏_{i∈p} σ^z_i, where here we use i ∈ v to denote the edges i touching the vertex v, and i ∈ p to denote the edges i surrounding the plaquette p. The stabilizer space of the code is that for which all stabilizers act trivially, hence for any state |ψ⟩ in this space it holds that A_v|ψ⟩ = |ψ⟩ and B_p|ψ⟩ = |ψ⟩ for all v and p. For the toric code, this space is four-dimensional, and so can be used to store two qubits of quantum information. This can be proven by considering the number of independent stabilizer operators. The occurrence of errors will move the state out of the stabilizer space, resulting in vertices and plaquettes for which the above condition does not hold. The positions of these violations constitute the syndrome of the code, which can be used for error correction. The unique nature of the topological codes, such as the toric code, is that stabilizer violations can be interpreted as quasiparticles. Specifically, if the code is in a state |ψ⟩ such that A_v|ψ⟩ = −|ψ⟩ for some vertex v, a quasiparticle known as an e anyon can be said to exist on the vertex v. Similarly, violations of the B_p are associated with so called m anyons on the plaquettes. The stabilizer space therefore corresponds to the anyonic vacuum. Single spin errors cause pairs of anyons to be created and transported around the lattice. When errors create an anyon pair and move the anyons, one can imagine a path connecting the two composed of all links acted upon. If the anyons then meet and are annihilated, this path describes a loop. If the loop is topologically trivial, it has no effect on the stored information. The annihilation of the anyons, in this case, corrects all of the errors involved in their creation and transport. However, if the loop is topologically non-trivial, though re-annihilation of the anyons returns the state to the stabilizer space, it also implements a logical operation on the stored information. The errors, in this case, are therefore not corrected but consolidated. Consider the noise model for which bit and phase errors occur independently on each spin, both with probability p. When p is low, this will create sparsely distributed pairs of anyons which have not moved far from their point of creation. 
Correction can be achieved by identifying the pairs that the anyons were created in (up to an equivalence class), and then re-annihilating them to remove the errors. As p increases, however, it becomes more ambiguous as to how the anyons may be paired without risking the formation of topologically non-trivial loops. This gives a threshold probability, under which the error correction will almost certainly succeed. Through a mapping to the random-bond Ising model, this critical probability has been found to be around 11%. Other error models may also be considered, and thresholds found. In all cases studied so far, the code has been found to saturate the Hashing bound. For some error models, such as biased errors where bit errors occur more often than phase errors or vice versa, lattices other than the square lattice must be used to achieve the optimal thresholds. These thresholds are upper limits and are useless unless efficient algorithms are found to achieve them. The most well-used algorithm is minimum weight perfect matching. When applied to the noise model with independent bit and phase errors, a threshold of around 10.5% is achieved. This falls only a little short of the 11% maximum. However, matching does not work so well when there are correlations between the bit and phase errors, such as with depolarizing noise. The means to perform quantum computation on logical information stored within the toric code has been considered, with the properties of the code providing fault-tolerance. It has been shown that extending the stabilizer space using 'holes', vertices or plaquettes on which stabilizers are not enforced, allows many qubits to be encoded into the code. However, a universal set of unitary gates cannot be fault-tolerantly implemented by unitary operations and so additional techniques are required to achieve quantum computing. For example, universal quantum computing can be achieved by preparing magic states, which are used to teleport the required additional gates into the encoded qubits. Furthermore, preparation of magic states must be fault tolerant, which can be achieved by magic state distillation on noisy magic states. A measurement based scheme for quantum computation based upon this principle has been found, whose error threshold is the highest known for a two-dimensional architecture. Hamiltonian and self-correction Since the stabilizer operators of the toric code are quasilocal, acting only on spins located near each other on a two-dimensional lattice, it is not unrealistic to define the following Hamiltonian, H = −J (Σ_v A_v + Σ_p B_p). The ground state space of this Hamiltonian is the stabilizer space of the code. Excited states correspond to those of anyons, with the energy proportional to their number. Local errors are therefore energetically suppressed by the gap, which has been shown to be stable against local perturbations. However, the dynamic effects of such perturbations can still cause problems for the code. The gap also gives the code a certain resilience against thermal errors, allowing it to be correctable almost surely for a certain critical time. This time increases with J, but since arbitrary increases of this coupling are unrealistic, the protection given by the Hamiltonian still has its limits. The means to make the toric code, or the planar code, into a fully self-correcting quantum memory is often considered. 
Self-correction means that the Hamiltonian will naturally suppress errors indefinitely, leading to a lifetime that diverges in the thermodynamic limit. It has been found that this is possible in the toric code only if long range interactions are present between anyons. Proposals have been made for realization of these in the lab. Another approach is the generalization of the model to higher dimensions, with self-correction possible in 4D with only quasi-local interactions. Anyon model As mentioned above, so called e and m quasiparticles are associated with the vertices and plaquettes of the model, respectively. These quasiparticles can be described as anyons, due to the non-trivial effect of their braiding. Specifically, though both species of anyons are bosonic with respect to themselves, the braiding of two e's or two m's having no effect, a full monodromy of an e and an m will yield a phase of −1. Such a result is not consistent with either bosonic or fermionic statistics, and hence is anyonic. The anyonic mutual statistics of the quasiparticles demonstrate the logical operations performed by topologically non-trivial loops. Consider the creation of a pair of e anyons followed by the transport of one around a topologically nontrivial loop, such as that shown on the torus in blue on the figure above, before the pair are reannihilated. The state is returned to the stabilizer space, but the loop implements a logical operation on one of the stored qubits. If m anyons are similarly moved through the red loop above, a logical operation will also result. The phase of −1 resulting when braiding the anyons shows that these operations do not commute, but rather anticommute. They may therefore be interpreted as logical Z and X Pauli operators on one of the stored qubits. The corresponding logical Pauli's on the other qubit correspond to an m anyon following the blue loop and an e anyon following the red. No braiding occurs when an e and an m pass through parallel paths, the phase of −1 therefore does not arise and the corresponding logical operations commute. This is as should be expected since these form operations acting on different qubits. Due to the fact that both e and m anyons can be created in pairs, it is clear that both these quasiparticles are their own antiparticles. A composite particle composed of two e anyons is therefore equivalent to the vacuum, since the vacuum can yield such a pair and such a pair will annihilate to the vacuum. Accordingly, these composites have bosonic statistics, since their braiding is always completely trivial. A composite of two m anyons is similarly equivalent to the vacuum. The creation of such composites is known as the fusion of anyons, and the results can be written in terms of fusion rules. In this case, these take the form e × e = 1 and m × m = 1, where 1 denotes the vacuum. A composite of an e and an m is not trivial. This therefore constitutes another quasiparticle in the model, sometimes denoted ε, with fusion rule e × m = ε. From the braiding statistics of the anyons we see that, since any single exchange of two ε's will involve a full monodromy of a constituent e and m, a phase of −1 will result. This implies fermionic self-statistics for the ε's. Generalizations The use of a torus is not required to form an error correcting code. Other surfaces may also be used, with their topological properties determining the degeneracy of the stabilizer space. In general, quantum error correcting codes defined on two-dimensional spin lattices according to the principles above are known as surface codes. 
It is also possible to define similar codes using higher-dimensional spins. These are the quantum double models and string-net models, which allow a greater richness in the behaviour of anyons, and so may be used for more advanced quantum computation and error correction proposals. These not only include models with Abelian anyons, but also those with non-Abelian statistics. Experimental progress The most explicit demonstration of the properties of the toric code has been in state based approaches. Rather than attempting to realize the Hamiltonian, these simply prepare the code in the stabilizer space. Using this technique, experiments have been able to demonstrate the creation, transport and statistics of the anyons and measurement of the topological entanglement entropy. More recent experiments have also been able to demonstrate the error correction properties of the code. For realizations of the toric code and its generalizations with a Hamiltonian, much progress has been made using Josephson junctions. The theory of how the Hamiltonians may be implemented has been developed for a wide class of topological codes. An experiment has also been performed, realizing the toric code Hamiltonian for a small lattice, and demonstrating the quantum memory provided by its degenerate ground state. Other theoretical and experimental works towards realizations are based on cold atoms. A toolkit of methods that may be used to realize topological codes with optical lattices has been explored, as have experiments concerning minimal instances of topological order. Such minimal instances of the toric code has been realized experimentally within isolated square plaquettes. Progress is also being made into simulations of the toric model with Rydberg atoms, in which the Hamiltonian and the effects of dissipative noise can be demonstrated. Experiments in Rydberg atom arrays have also successfully realized the toric code with periodic boundary conditions in two dimensions by coherently transporting arrays of entangled atoms. References External links https://skepsisfera.blogspot.com/2010/04/kitaevs-toric-code.html Quantum information science Fault-tolerant computer systems Quantum phases Condensed matter physics
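The counting argument above (independent stabilizer operators versus physical spins) can be checked directly in code. The following Python sketch builds the vertex and plaquette operators of an L × L toric lattice in a binary-symplectic representation, verifies that they all commute, and recovers the two encoded logical qubits; the lattice size and the edge-indexing scheme are arbitrary choices made for this illustration, not conventions taken from the text.

```python
import numpy as np

L = 4                      # linear lattice size; any L >= 2 gives the same logical count
n = 2 * L * L              # one spin on every edge of the L x L torus

def h(i, j):               # index of the horizontal edge leaving vertex (i, j)
    return 2 * ((i % L) * L + (j % L))

def v(i, j):               # index of the vertical edge leaving vertex (i, j)
    return h(i, j) + 1

# Each stabilizer is stored as a binary symplectic vector (x-part | z-part) of length 2n.
stabilizers = []
for i in range(L):
    for j in range(L):
        star = np.zeros(2 * n, dtype=np.uint8)       # vertex operator A_v: sigma-x on 4 edges
        for e in (h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)):
            star[e] = 1
        plaq = np.zeros(2 * n, dtype=np.uint8)       # plaquette operator B_p: sigma-z on 4 edges
        for e in (h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)):
            plaq[n + e] = 1
        stabilizers += [star, plaq]

S = np.array(stabilizers)

# Two Pauli strings commute iff their symplectic product x1.z2 + z1.x2 vanishes mod 2.
x, z = S[:, :n], S[:, n:]
assert not ((x @ z.T + z @ x.T) % 2).any(), "all stabilizers should commute"

def gf2_rank(m):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        rows = np.nonzero(m[rank:, col])[0]
        if rows.size:
            pivot = rank + rows[0]
            m[[rank, pivot]] = m[[pivot, rank]]
            m[(m[:, col] == 1) & (np.arange(len(m)) != rank)] ^= m[rank]
            rank += 1
    return rank

k = n - gf2_rank(S)        # number of encoded logical qubits
print(k)                   # 2, matching the four-dimensional stabilizer space
```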
Toric code
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
2,555
[ "Quantum phases", "Reliability engineering", "Phases of matter", "Quantum mechanics", "Computer systems", "Materials science", "Fault-tolerant computer systems", "Condensed matter physics", "Matter" ]
29,262,224
https://en.wikipedia.org/wiki/Periodic%20summation
In mathematics, any integrable function s(t) can be made into a periodic function with period P by summing the translations of the function by integer multiples of P. This is called periodic summation: s_P(t) = Σ_{n=−∞}^{∞} s(t + nP). When s_P(t) is alternatively represented as a Fourier series, the Fourier coefficients are proportional to the values of the continuous Fourier transform, S(f), at intervals of 1/P. That identity is a form of the Poisson summation formula. Similarly, a Fourier series whose coefficients are samples of s(t) at constant intervals (T) is equivalent to a periodic summation of S(f), which is known as a discrete-time Fourier transform. The periodic summation of a Dirac delta function is the Dirac comb. Likewise, the periodic summation of an integrable function is its convolution with the Dirac comb. Quotient space as domain If a periodic function is instead represented using the quotient space domain ℝ/Pℤ, then one can write: s_P : ℝ/Pℤ → ℝ, s_P(x) = Σ_{τ∈x} s(τ). The arguments of s_P are equivalence classes of real numbers that share the same fractional part when divided by P. Citations See also Dirac comb Circular convolution Discrete-time Fourier transform Functions and mappings Signal processing
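The Poisson-summation statement above can be demonstrated numerically: periodizing a function directly and rebuilding it from Fourier-series coefficients taken as scaled samples of its continuous transform give the same result. The sketch below uses a Gaussian only because its Fourier transform is known in closed form; the period, sample points and truncation limits are arbitrary illustrative choices.

```python
import numpy as np

P = 3.0                                          # chosen period (arbitrary)
t = np.linspace(0.0, P, 7, endpoint=False)       # a few evaluation points in one period

def s(t):
    return np.exp(-t**2)                         # example integrable function (Gaussian)

def S(f):
    # Continuous Fourier transform of exp(-t^2) with the e^{-2*pi*i*f*t} convention.
    return np.sqrt(np.pi) * np.exp(-(np.pi * f)**2)

# Direct periodic summation, truncated to |n| <= 50 (ample for a Gaussian).
n = np.arange(-50, 51)
s_P_direct = s(t[:, None] + n[None, :] * P).sum(axis=1)

# Fourier-series reconstruction with coefficients (1/P) * S(k/P), i.e. Poisson summation.
k = np.arange(-50, 51)
coeffs = S(k / P) / P
s_P_series = (coeffs[None, :] * np.exp(2j * np.pi * k[None, :] * t[:, None] / P)).sum(axis=1).real

print(np.max(np.abs(s_P_direct - s_P_series)))   # ~1e-15: the two constructions agree
```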
Periodic summation
[ "Mathematics", "Technology", "Engineering" ]
224
[ "Mathematical analysis", "Functions and mappings", "Telecommunications engineering", "Computer engineering", "Signal processing", "Mathematical objects", "Mathematical relations" ]
29,262,539
https://en.wikipedia.org/wiki/Fold%20number
Fold number refers to how many double folds are required to cause rupture of a paper test piece under standardized conditions. Fold number is defined in ISO 5626:1993 as the antilogarithm of the mean folding endurance:

f = 10^{\frac{1}{n}\sum_{i=1}^{n} F_i}

where f is the fold number, Fi is the folding endurance for each test piece and n is the total number of test pieces used. In the introduction of ISO 5626:1993 it is emphasized that fold number, as defined in that very International Standard, does not equal the mean number of double folds observed. The latter is however still the definition used in some countries. If the numerical value of the folding endurance is not rounded off, these will however be equal. In the former Swedish standard SS 152005 ("Pappersordlista") from 1992, with paper related terms defined in Swedish and English, fold number is explained as "the number of double folds which a test strip withstands under specified conditions before a break occurs in the strip"; that is, not the antilogarithm of the mean folding endurance. See also Folding endurance Double fold References Paper Materials testing
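A minimal sketch of the computation defined above (the folding-endurance values are invented for illustration); since each folding endurance is the base-10 logarithm of a double-fold count, the result is the geometric mean of those counts:

```python
def fold_number(folding_endurances):
    """Fold number per ISO 5626: the antilogarithm (base 10) of the mean folding endurance.

    Each folding endurance F_i is itself the base-10 logarithm of the number of double
    folds withstood by test piece i, so the result equals the geometric mean of the
    double-fold counts.
    """
    n = len(folding_endurances)
    return 10 ** (sum(folding_endurances) / n)

# Hypothetical folding-endurance values (log10 of double folds) for five test pieces.
F = [2.30, 2.45, 2.10, 2.60, 2.35]
print(round(fold_number(F), 1))  # about 229 double folds
```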
Fold number
[ "Materials_science", "Engineering" ]
228
[ "Materials testing", "Materials science" ]
37,325,129
https://en.wikipedia.org/wiki/Strontium%20oxalate
Strontium oxalate is a compound with the chemical formula SrC2O4. Strontium oxalate can exist either in a hydrated form () or as the acidic salt of strontium oxalate (). Strontium oxalate is soluble in 20 000 parts of water; in 1 900 parts of 3.5% acetic acid, in 115 parts of the 23% acid, but less soluble in the 35% acid; readily soluble in diluted HCl or nitric acid. Use in pyrotechnics With the addition of heat, strontium oxalate will decompose based on the following reaction:

SrC2O4 → SrO + CO2 + CO

Strontium oxalate is a good agent for use in pyrotechnics since it decomposes readily with the addition of heat. When it decomposes into strontium oxide, it produces a red flame color. Since this reaction produces carbon monoxide, which can undergo a further reduction with magnesium oxide, strontium oxalate is an excellent red flame color producing agent in the presence of magnesium. If it is not in the presence of magnesium, strontium carbonate has been found to be a better option to produce an even greater effect. References Strontium compounds Oxalates Inorganic compounds
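A small script checking that the decomposition written above is atom-balanced (the equation itself is reconstructed from the products named in the text, so this is only a consistency check, not a sourced mechanism):

```python
from collections import Counter

def count_atoms(terms):
    """Sum atom counts over a list of (multiplicity, {element: count}) terms."""
    total = Counter()
    for mult, atoms in terms:
        for element, n in atoms.items():
            total[element] += mult * n
    return total

reactants = [(1, {"Sr": 1, "C": 2, "O": 4})]           # SrC2O4
products = [(1, {"Sr": 1, "O": 1}),                    # SrO
            (1, {"C": 1, "O": 2}),                     # CO2
            (1, {"C": 1, "O": 1})]                     # CO
print(count_atoms(reactants) == count_atoms(products))  # True: the equation is balanced
```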
Strontium oxalate
[ "Chemistry" ]
261
[ "Inorganic compounds" ]
37,325,306
https://en.wikipedia.org/wiki/Manganese%28II%29%20phosphate
Manganese(II) phosphate is an inorganic compound with the chemical formula Mn3(PO4)2. It has industrial importance as a constituent of manganese-based phosphate conversion coatings. Formation Manganese phosphates often combine with iron phosphates through paragenesis. In the process of paragenesis, each mineral affects the manner in which the other is crystallized. The minerals combine in isomorphic series, meaning that they crystallize in similar forms, producing a sequence of solids. Some examples of “isomorphic series” formed by Mn and Fe include heterosite, purpurite, and triplite. Processing The immersion method is used for the processing of manganese phosphate coatings. The steps of immersion include degreasing/cleaning, water rinses, pickling in mineral acid, activation, manganese phosphating, an optional final drying, and lubrication using oils or emulsion. The first step of this method is to degrease and clean, typically with strong alkaline cleaners. Pickling in mineral acid removes oxide, resulting in a clean surface. The water pre-rinse activation allows the cleaning and pickling to take place while avoiding the formation of coarse-crystalline phosphate. Manganese phosphating is performed, typically in a bath of dilute phosphoric acid that is at approximately for about 5–20 minutes (the time varies with the state of the surface being coated). The elements that were coated are then allowed to dry, possibly in an oven. The final step of manganese phosphate coating is to lubricate the materials with oil by immersing them in the oil bath and then letting them drain. This results in different thicknesses of oil when different concentrations and types of oil are used. Manganese phosphate can also be used for coatings (which are processed as described above). These coatings are useful for protection against corrosion and wear over time, due to their toughness. Manganese phosphate coatings can often be found in the oil and gas industry, firearms and ordnance, aerospace, gears and bearings, and in marine equipment. They are also common in engines, screws, nuts and bolts, and washers. References Manganese(II) compounds Phosphates
Manganese(II) phosphate
[ "Chemistry" ]
460
[ "Phosphates", "Salts" ]
37,325,928
https://en.wikipedia.org/wiki/HD%20154088
HD 154088 is a seventh-magnitude metal-rich K-type main sequence star that lies approximately 58 light-years away in the constellation of Ophiuchus. The star is orbited by a hot Super-Earth. Properties HD 154088 is a modestly bright star that lies at the bottom of Ophiuchus, near to the border with Scorpius and near to the plane of the Milky Way. The star was recognised as a high proper motion star during the last century, and early Earth-based parallax measurements such as that of the Gliese Catalogue of Nearby Stars indicated a distance of about 50 light-years. The star has a spectral type of K0V, indicating that it is a main sequence star that is about 350 degrees cooler than the Sun. On the Hertzsprung-Russell diagram, the star lies slightly above the main sequence. This is because the star is very metal-rich; with an Fe/H of 0.3 dex the star has about twice the solar abundance of iron, which makes HD 154088 fall into the somewhat vague group of super metal-rich (SMR) stars. The giant planet occurrence rate of Fe/H = 0.3 stars is on the order of 30%, but HD 154088 is not currently known to host any giant planets. HD 154088 has a pronounced magnetic field. It also has a magnetic cycle similar to the Sun's, though its length is not well constrained. A survey in 2015 ruled out the existence of any additional stellar companions at projected distances from 8 to 119 astronomical units. Planetary system A planet orbiting HD 154088 discovered with the HARPS spectrograph was announced in September 2011. With a minimum mass of 6 Earth masses, the companion falls into the regime of Super-Earths. HD 154088 is also being observed under the Keck Eta-Earth radial velocity survey. HD 154088 b is a close match for planet candidate 1 (orbital period = 18.1 days, minimum mass = 6.5 M🜨), so they may be the same detection. The planet's existence was finally confirmed in 2021. References Ophiuchus Durchmusterung objects 154088 083541 0652 K-type main-sequence stars
HD 154088
[ "Astronomy" ]
469
[ "Ophiuchus", "Constellations" ]
33,273,473
https://en.wikipedia.org/wiki/Nitrogen%20nutrition%20in%20the%20arbuscular%20mycorrhizal%20system
Nitrogen nutrition in the arbuscular mycorrhizal system refers to the uptake of nitrogen from the soil by arbuscular mycorrhizal (AM) fungi and its transfer to the host plant. Role of nitrogen Nitrogen is a vital macronutrient for plants, necessary for the biosynthesis of many basic cellular components, such as DNA, RNA and proteins. Nitrogen is obtained by plants through roots from inorganic or organic sources, such as amino acids. In agricultural settings, nitrogen is often a limiting factor for plant growth and yield; it is such a critical cellular component that a nitrogen-deficient plant will shunt resources away from its shoot in order to expand its root system so that it can acquire more nitrogen. The mycelium of arbuscular mycorrhizal fungi is divided into two parts depending on where it is located. The intra-radical mycelia (IRM) are found within the root itself while the extra-radical mycelium (ERM) are tiny hyphal threads which reach far out into the soil. The IRM is the site of nutrient exchange between the symbionts, while the ERM effectively serves as an extension of the plant's root system by increasing the surface area available for nutrient acquisition, including nitrogen, which can be taken up in the form of ammonium, nitrate or from organic sources. Working with an in vitro system, studies have shown that as much as 29% to 50% of the root nitrogen was taken up via the fungus. This is also true in in planta studies, such as an experiment in which the researchers showed that 75% of the nitrogen in a young maize leaf originated from the ERM. Mechanism of action The precise mechanism(s) by which nitrogen is taken up from the soil by the ERM, transported to the IRM, and then turned over to the plant are still under investigation. Toward elucidating the mechanisms through which nitrogen transfer is completed, the sum of numerous studies has provided the necessary tools to study this process. For example, the detection and measurement of gene expression has enabled researchers to determine which genes are up-regulated in the plant and fungus under various nitrogen conditions. Another important tool is the use of the nitrogen isotope 15N, which can be distinguished from the more common 14N isotope. Nitrogen-containing compounds thus labeled can be tracked and measured as they move through the fungus and into the plant, as well as how they are incorporated into nitrogen-containing molecules. The current model, first put forth in 2005, proposes that the nitrogen taken up by the fungus is converted in the ERM to arginine, which is then transported to the IRM, where it is released as ammonium into the apoplast for the plant to use. A growing body of data has supported and expanded upon this model. Support has been found primarily in two ways: labeling experiments and the study of gene expression, as demonstrated in a 2010 paper by Tian et al. When labeled nitrogen compounds were added to the ERM compartment of an in vitro system, six fungal genes encoding enzymes involved in the incorporation of inorganic nitrogen into glutamine and its subsequent conversion to arginine were rapidly up-regulated. After a delay, gene expression in the IRM began to show increasing levels of mRNA for genes involved in the breakdown of arginine into urea and the subsequent cleaving of ammonium from the urea molecule. This change in gene expression takes place concurrently with the arrival of 15N labeled arginine from the ERM compartment. Once inside the ERM, the nitrogen molecule may have to travel many centimeters to reach the root. 
While much progress has been made on either end of the transfer of nitrogen, the mechanism by which the arginine actually moves from the ERM to the IRM remains unresolved. AM fungi are non-septate and lack cross-walls between cells, forming one long filament. However, passive flow through the continuous cytoplasm is too slow to explain the transport of nutrients. The mechanism by which the newly manufactured arginine is transported to the plant requires further investigation. Community and ecology A single plant with its associated fungus is not an isolated entity. It has been shown that mycelia from the roots of one plant actually colonize the roots of nearby plants, creating an underground network of plants of the same or different species. This network is known as a common mycorrhizal network (CMN). It has been demonstrated that nitrogen is transferred between plants via the hyphal network, sometimes in large amounts. For example, Cheng and Baumgartner found that about 25% of the labeled nitrogen supplied to a source plant, in this case a grass species, was transferred to the sink plant, grapevine. It is widely believed that these hyphal networks are important to local ecosystems and may have agricultural implications. Some plants, called legumes, can form simultaneous symbiotic relationships with both AM fungi and the nitrogen-fixing bacteria Rhizobia. In fact, both organisms trigger the same pathways in plants during early colonization, indicating that the two very different responses could share a common origin. While the bacteria can supply nitrogen, they cannot provide other benefits of AM fungi; AM actually enhances bacterial colonization, probably by supplying extra phosphorus for the formation of the bacterial habitat within the plant, and thus contributing indirectly to the plant's nitrogen status. It is not known if there is signaling between the two, or only between the plant and each microbe. There is almost certainly competition between the bacterial and fungal partners, whether directly or indirectly, due to the fact that both are dependent on the plant as their sole source of energy. The plant must strive to strike a delicate balance between the maintenance of both partners based on its nutrient status. Alternate theories A large body of research has shown that AM fungi can, and do, transfer nitrogen to plants and transfer nitrogen between plants, including crop plants. However, it has not been shown conclusively that there is a growth benefit from AM due to nitrogen. Some researchers doubt that AM contribute significantly to plant N status in nature. In one field study, there was negligible transfer between soybeans and corn. Furthermore, AM sometimes appears to be parasitic. This has primarily been seen under conditions of high nitrogen, which is not the usual state in a natural environment. However, it has been shown that in at least one case, colonization by AM fungi under nitrogen-limiting conditions led to decreased shoot biomass, implying that the relationship does the plant more harm than good. Likewise, in a multi-plant system it would be very difficult to find the advantage to the source plants when their nutrients are being shunted to sink plants. These findings are at odds with the observed phenomenon that under conditions of low phosphorus, the degree of AM colonization is inversely proportional to nitrogen availability. 
Since the plant must supply all of the energy needed to grow and sustain the fungus, it seems counter-intuitive that it would do so without some benefit to itself. Further studies are definitely needed to delineate the details of the relationship between the symbionts, including a gradient of interaction that runs from mutualism to parasitism. References Nitrogen cycle Symbiosis Mycology Soil biology
Nitrogen nutrition in the arbuscular mycorrhizal system
[ "Chemistry", "Biology" ]
1,466
[ "Behavior", "Symbiosis", "Biological interactions", "Mycology", "Nitrogen cycle", "Soil biology", "Metabolism" ]
33,278,970
https://en.wikipedia.org/wiki/Linear%20transport%20theory
In mathematical physics, linear transport theory is the study of equations describing the migration of particles or energy within a host medium when such migration involves random absorption, emission and scattering events. Subject to certain simplifying assumptions, this is a common and useful framework for describing the scattering of light (radiative transfer) or neutrons (neutron transport). Given the laws of individual collision events (in the form of absorption coefficients and scattering kernels/phase functions) the problem of linear transport theory is then to determine the result of a large number of random collisions governed by these laws. This involves computing exact or approximate solutions of the transport equation, and there are various forms of the transport equation that have been studied. Common varieties include steady-state vs time-dependent, scalar vs vector (the latter including polarization), and monoenergetic vs multi-energy (multi-group). See also Radiative transfer Neutron transport References Mathematical physics
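As an illustration only (not taken from the article), one commonly studied form is the steady-state, mono-energetic linear transport equation for the angular flux ψ(r, Ω); the symbols Σt (total cross section), Σs (scattering kernel) and q (source) follow widespread usage and are assumptions here:

```latex
% Steady-state, mono-energetic linear transport (Boltzmann) equation:
% streaming + total collision loss = in-scattering from all directions + independent source.
\[
  \boldsymbol{\Omega}\cdot\nabla\,\psi(\mathbf{r},\boldsymbol{\Omega})
  + \Sigma_t(\mathbf{r})\,\psi(\mathbf{r},\boldsymbol{\Omega})
  = \int_{4\pi} \Sigma_s(\mathbf{r},\boldsymbol{\Omega}'\!\cdot\!\boldsymbol{\Omega})\,
      \psi(\mathbf{r},\boldsymbol{\Omega}')\,d\Omega'
  + q(\mathbf{r},\boldsymbol{\Omega})
\]
```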
Linear transport theory
[ "Physics", "Mathematics" ]
192
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
43,028,598
https://en.wikipedia.org/wiki/ULLtraDIMM
The ULLtraDIMM is a solid state storage device from SanDisk that connects flash storage directly onto the DDR3 memory bus. Unlike traditional PCIe Flash Storage devices, the ULLtraDIMM is plugged directly into an industry standard RDIMM memory bus slot in a server. This design and connection location provides deterministic (consistent) known latency to enable applications to be streamlined for improved performance. The ULLtraDIMM is compatible with the JEDEC MO-269 DDR3 RDIMM specification. The ULLtraDIMM supports both 1.35 V and 1.5 V operation from 800–1333 MHz, and 1.5 V @ 1600 MHz DDR3 transfer rates. DDR3 ECC bits are used to verify the integrity of the data being sent across the memory bus. The ULLtraDIMM will verify that correct ECC is received and, if there are errors, the device driver will re-run the transfer. The CPU treats the ECC from the ULLtraDIMM in the same manner as ECC from a memory DIMM; single symbol errors are corrected. The DDR3 ECC bits are not stored in the flash array. A separate ECC scheme is used for protecting data in the flash array. Memory interleaving of standard RAM is not affected by the presence of ULLtraDIMMs. UEFI/BIOS updates are required to properly recognize an ULLtraDIMM in the system as a block device and not halt the bootstrap sequence. References Computer peripherals Computer storage devices File system management Non-volatile memory Solid-state computer storage Solid-state computer storage media
ULLtraDIMM
[ "Technology" ]
343
[ "Computer peripherals", "Recording devices", "Components", "Computer storage devices" ]
43,037,038
https://en.wikipedia.org/wiki/Thaine%27s%20theorem
In mathematics, Thaine's theorem is an analogue of Stickelberger's theorem for real abelian fields, introduced by Francisco Thaine in 1988. Thaine's method has been used to shorten the proof of the Mazur–Wiles theorem, to prove that some Tate–Shafarevich groups are finite, and in the proof of Mihăilescu's theorem. Formulation Let p and q be distinct odd primes with q not dividing p − 1. Let G be the Galois group of the real cyclotomic field Q(ζp)+ over Q, let E be its group of units, let C be the subgroup of cyclotomic units, and let Cl be its class group. If θ ∈ Z[G] annihilates E/C modulo qth powers, then it annihilates Cl modulo qth powers. References See in particular Chapter 14 (pp. 91–94) for the use of Thaine's theorem to prove Mihăilescu's theorem, and Chapter 16 "Thaine's Theorem" (pp. 107–115) for proof of a special case of Thaine's theorem. See in particular Chapter 15 (pp. 332–372) for Thaine's theorem (section 15.2) and its application to the Mazur–Wiles theorem. Cyclotomic fields Theorems in algebraic number theory
Thaine's theorem
[ "Mathematics" ]
247
[ "Theorems in algebraic number theory", "Theorems in number theory" ]
43,038,259
https://en.wikipedia.org/wiki/Ohlson%20O-score
The Ohlson O-score for predicting bankruptcy is a multi-factor financial formula postulated in 1980 by Dr. James Ohlson of the New York University Stern Accounting Department as an alternative to the Altman Z-score for predicting financial distress. Calculation of the O-score The Ohlson O-Score is the result of a 9-factor linear combination of coefficient-weighted business ratios which are readily obtained or derived from the standard periodic financial disclosure statements provided by publicly traded corporations. Two of the factors utilized are widely considered to be dummies as their value and thus their impact upon the formula typically is 0. When using an O-score to evaluate the probability of a company's failure, then exp(O-score) is divided by 1 + exp(O-score). The calculation for Ohlson O-score appears below:

O = −1.32 − 0.407 log(TA/GNP) + 6.03 (TL/TA) − 1.43 (WC/TA) + 0.0757 (CL/CA) − 1.72 X − 2.37 (NI/TA) − 1.83 (FFO/TL) + 0.285 Y − 0.521 (NIt − NIt−1)/(|NIt| + |NIt−1|)

where TA = total assets GNP = gross national product price index level (in USD, 1968 = 100) TL = total liabilities WC = working capital CL = current liabilities CA = current assets X = 1 if TL > TA, 0 otherwise NI = net income (NIt and NIt−1 denote net income for the two most recent periods) FFO = funds from operations Y = 1 if a net loss for the last two years, 0 otherwise Interpretation The original model for the O-score was derived from the study of a pool of just over 2000 companies, whereas by comparison its predecessor the Altman Z-score considered just 66 companies. As a result, the O-score is significantly more accurate a predictor of bankruptcy within a 2-year period. The original Z-score was estimated to be over 70% accurate with its later variants reaching as high as 90% accuracy. The O-score is more accurate than this. However, no mathematical model is 100% accurate, so while the O-score may forecast bankruptcy or solvency, factors both inside and outside of the formula can impact its accuracy. Furthermore, later bankruptcy prediction models such as the hazard based model proposed by Campbell, Hilscher, and Szilagyi in 2011 have proven more accurate still. For the O-score, any results larger than 0.5 suggest that the firm will default within two years. See also Altman Z-score Beneish M-score Piotroski F-score References Financial ratios Bankruptcy Mathematical finance Credit risk
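A minimal sketch of the calculation (the coefficients are the ones commonly quoted for Ohlson's 1980 model and the sample figures are invented, so treat this as illustrative rather than authoritative):

```python
import math

def ohlson_o_score(TA, GNP, TL, WC, CL, CA, NI, NI_prev, FFO):
    """Ohlson O-score using the commonly quoted 1980 coefficients (an assumption, see above)."""
    X = 1 if TL > TA else 0                            # dummy: liabilities exceed assets
    Y = 1 if NI < 0 and NI_prev < 0 else 0             # dummy: net loss in both of the last two years
    chin = (NI - NI_prev) / (abs(NI) + abs(NI_prev))   # scaled change in net income
    return (-1.32
            - 0.407 * math.log(TA / GNP)
            + 6.03 * TL / TA
            - 1.43 * WC / TA
            + 0.0757 * CL / CA
            - 1.72 * X
            - 2.37 * NI / TA
            - 1.83 * FFO / TL
            + 0.285 * Y
            - 0.521 * chin)

def default_probability(o_score):
    """Convert the O-score to a probability of failure: exp(O) / (1 + exp(O))."""
    return math.exp(o_score) / (1 + math.exp(o_score))

# Invented example figures (same currency units throughout; GNP is the price index level).
o = ohlson_o_score(TA=1200, GNP=100, TL=900, WC=150, CL=400, CA=550,
                   NI=-30, NI_prev=-10, FFO=40)
print(round(o, 3), round(default_probability(o), 3))
```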
Ohlson O-score
[ "Mathematics" ]
468
[ "Metrics", "Applied mathematics", "Quantity", "Financial ratios", "Mathematical finance" ]
43,040,208
https://en.wikipedia.org/wiki/Frequent%20subtree%20mining
In computer science, frequent subtree mining is the problem of finding all patterns in a given database whose support (a metric related to its number of occurrences in other subtrees) is over a given threshold. It is a more general form of the maximum agreement subtree problem. Definition Frequent subtree mining is the problem of trying to find all of the patterns whose "support" is over a certain user-specified level, where "support" is calculated as the number of trees in a database which have at least one subtree isomorphic to a given pattern. Formal definition The problem of frequent subtree mining has been formally defined as: Given a threshold minfreq, a class of trees C, a transitive subtree relation ≼ between trees, and a finite set of trees D, the frequent subtree mining problem is the problem of finding all trees P ∈ C such that no two trees in the answer set are isomorphic and freq(P, D) ≥ minfreq, where freq is an anti-monotone function such that if P′ ≼ P then freq(P′, D) ≥ freq(P, D). TreeMiner In 2002, Mohammed J. Zaki introduced TreeMiner, an efficient algorithm for solving the frequent subtree mining problem, which used a "scope list" to represent tree nodes and which was contrasted with PatternMatcher, an algorithm based on pattern matching. Definitions Induced sub-trees A sub-tree S = (VS, ES) is an induced sub-tree of T = (VT, ET) if and only if VS ⊆ VT and ES ⊆ ET. In other words, any two nodes in S that are directly connected by an edge are also directly connected in T. For any node A and B in S, if node A is the parent of node B in S, then node A must also be the parent of node B in T. Embedded sub-trees A sub-tree S is an embedded sub-tree of T if and only if VS ⊆ VT and the two endpoint nodes of any edge in S are on the same path from the root to a leaf node in T. In other words, for any node A and B in S, if node A is the parent of node B in S, then node A must be an ancestor of node B in T. Any induced sub-trees are also embedded sub-trees, and thus the concept of embedded sub-trees is a generalization of induced sub-trees. As such, embedded sub-trees characterize the hidden patterns in a tree that are missing in traditional induced sub-tree mining. A sub-tree of size k is often called a k-sub-tree. Support The support of a sub-tree is the number of trees in a database that contain the sub-tree. A sub-tree is frequent if its support is not less than a user-specified threshold (often denoted as minsup). The goal of TreeMiner is to find all embedded sub-trees that have support at least the minimum support. String representation of trees There are several different ways of encoding a tree structure. TreeMiner uses string representations of trees for efficient tree manipulation and support counting. Initially the string is empty. Starting from the root of the tree, node labels are added to the string in depth-first search order. -1 is added to the string whenever the search process backtracks from a child to its parent. For example, a simple binary tree with root labelled A, a left child labelled B and right child labelled C can be represented by a string A B -1 C -1. Prefix equivalence class Two k-sub-trees are said to be in the same prefix equivalence class if their string representations are identical up to the (k-1)-th node. In other words, all elements in a prefix equivalence class only differ by the last node. For example, two trees with string representation A B -1 C -1 and A B -1 D -1 are in the prefix equivalence class A B with elements (C, 0) and (D, 0). An element of a prefix class is specified by the node label paired with the 0-based depth-first index of the node it is attached to. 
In this example, both elements of prefix class A B are attached to the root, which has an index of 0. Scope The scope of a node A is given by a pair of numbers [l, r], where l and r are the minimum and maximum node index in the sub-tree rooted at A. In other words, l is the index of A, and r is the index of the rightmost leaf among the descendants of A. As such the index of any descendant of A must lie in the scope of A, which will be a very useful property when counting the support of sub-trees. Algorithm Candidate generation Frequent sub-tree patterns follow the anti-monotone property. In other words, the support of a k-sub-tree is less than or equal to the support of its (k-1)-sub-trees. Only super patterns of known frequent patterns can possibly be frequent. By utilizing this property, k-sub-tree candidates can be generated based on frequent (k-1)-sub-trees through prefix class extension. Let C be a prefix equivalence class with two elements (x,i) and (y,j). Let C' be the class representing the extension of element (x,i). The elements of C' are added by performing a join operation on the two (k-1)-sub-trees in C. The join operation on (x,i) and (y,j) is defined as the following. If , then add (y,j) to C'. If , then add (y,j) and (y, ni) to C', where ni is the depth-first index of x in C. If , no possible element can be added to C'. This operation is repeated for any two ordered, but not necessarily distinct elements in C to construct the extended prefix classes of k-sub-trees. Scope-list representation TreeMiner performs depth-first candidate generation using scope-list representation of sub-trees to facilitate faster support counting. A k-sub-tree S can be represented by a triplet (t,m,s) where t is the tree id the sub-tree comes from, m is the prefix match label, and s the scope of the last node in S. Depending on how S occurs in different trees across the database, S can have different scope-list representations. TreeMiner defines a scope-list join that performs class extension on scope-list representations of sub-trees. Two elements (x,i) and (y,j) can be joined if there exist two sub-trees that satisfy either of the following conditions. In-scope test: , which corresponds to the case when . Out-scope test: , which corresponds to the case when . By keeping track of distinct tree ids used in the scope-list tests, the support of sub-trees can be calculated efficiently. Applications Domains in which frequent subtree mining is useful tend to involve complex relationships between data entities: for instance, the analysis of XML documents often requires frequent subtree mining. Another domain where this is useful is the web usage mining problem: since the actions taken by users when visiting a web site can be recorded and categorized in many different ways, complex databases of trees need to be analyzed with frequent subtree mining. Other domains in which frequent subtree mining is useful include computational biology, RNA structure analysis, pattern recognition, bioinformatics, and analysis of the KEGG GLYCAN database. Challenges Checking whether a pattern (or a transaction) supports a given subgraph is an NP-complete problem, since it is an NP-complete instance of the subgraph isomorphism problem. Furthermore, due to combinatorial explosion, according to Lei et al., "mining all frequent subtree patterns becomes infeasible for a large and dense tree database". References Computational problems in graph theory
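A minimal sketch of the depth-first string encoding described above (the nested (label, children) tuple representation of trees is an arbitrary illustrative choice, not Zaki's data structure):

```python
def encode_tree(node):
    """Encode a labelled, ordered tree as TreeMiner's depth-first string:
    emit a node's label on entry and -1 when backtracking to its parent."""
    label, children = node
    out = [label]
    for child in children:
        out.extend(encode_tree(child))
        out.append(-1)   # backtrack marker after finishing a child's subtree
    return out

# The example from the text: root A with children B and C -> "A B -1 C -1".
tree = ("A", [("B", []), ("C", [])])
print(" ".join(str(x) for x in encode_tree(tree)))  # A B -1 C -1
```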
Frequent subtree mining
[ "Mathematics" ]
1,589
[ "Computational problems in graph theory", "Computational mathematics", "Graph theory", "Computational problems", "Mathematical relations", "Mathematical problems" ]
34,802,130
https://en.wikipedia.org/wiki/Renato%20Zenobi
Renato Zenobi (born 1961 in Zurich) is a Swiss chemist. He is Professor of Chemistry at ETH Zurich. Throughout his career, Zenobi has contributed to the field of analytical chemistry. Biography Zenobi received his M.Sc. degree from ETH Zurich (Switzerland) in 1986 and a Ph.D. from Stanford University (United States) in 1990. He was a postdoctoral fellow at the University of Pittsburgh (United States; 1990–1991) and at the University of Michigan (United States; 1991). In 1992 he worked as Werner Fellow at the École Polytechnique Fédérale de Lausanne (Switzerland). He became assistant professor at the ETH Zurich in 1995, was promoted to associate professor in 1997, and to full professor in 2000. Between 2010 and 2021, he was one of the associate editors of the journal Analytical Chemistry (ACS). Research and achievements Zenobi's research areas include: laser-based analytical chemistry, electrospray and laser-assisted mass spectrometry, laser-surface interactions, near-field optical microscopy / spectroscopy as well as single-cell analysis. He has made contributions to the understanding of the ion formation mechanism in matrix-assisted laser desorption/ionization (MALDI) mass spectrometry (MS) and to the development of analytical tools for the nanoscale, most notably by inventing tip-enhanced Raman spectroscopy (TERS). Awards 2015 Fresenius Prize (German Chemical Society / GDCh) 2014 RUSNANO Prize 2014 Thomson Medal (International Mass Spectrometry Foundation) 2012 Fresenius Lectureship (German Chemical Society / GDCh) 2010 Honorary Professorship, Chinese Academy of Sciences (Changchun / CIAC) 2010 Honorary Professorship, Changchun University of Chinese Medicine 2010 Honorary Adjunct Professorship, Hunan University (China) 2010 Mayent/Rothschild Fellowship, Institut Curie (Paris, France) 2009 Honorary Lifetime Membership Israel Chemical Society (Israel) 2009 Schulich Graduate Lectureship Technion, Haifa (Israel) 2007 Honorary Professorship, East China Institute of Technology (Fuzhou, China) 2006 Michael Widmer Award (Novartis Pharma & Swiss Chemical Society) 2006 Hobart H. Willard Lectureship (University of Michigan, Ann Arbor) 2005 Theophilus Redwood Lecturer (Royal Society of Chemistry) 1998 H.E. Merck Award for Analytical Chemistry 1993 Ruzicka Prize, awarded by ETH Zürich 1991 Alfred-Werner Fellowship 1990 Andrew Mellon Postdoctoral Fellowship 1989 Thomas Hirschfeld Award (Federation of Analytical Chemistry and Spectroscopy Societies, USA) Key Publications In the area of Near-Field Optics & Tip-enhanced Raman Spectroscopy: R. M. Stöckle, C. Fokas, V. Deckert, R. Zenobi, B. Sick, B. Hecht, and U. P. Wild, High Quality Near-Field Optical Probes by Tube Etching, Appl. Phys. Lett. 75, 160-162 (1999). R. Stöckle, Y. D. Suh, V. Deckert, and R. Zenobi, Nanoscale chemical analysis by Tip-enhanced Raman Scattering, Chem. Phys. Lett. 318, 131-136 (2000). B. Hecht, B. Sick, U. P. Wild, V. Deckert, R. Zenobi, O. F. Martin, and D.W. Pohl, Scanning Near-Field Optical Microscopy and Spectroscopy with Aperture Probes: Fundamentals and Applications, J. Chem. Phys. 112, 7761-7774 (2000). W. Zhang, B.-S. Yeo, T. Schmid, and R. Zenobi, Single Molecule Tip-enhanced Raman Spectroscopy with Silver tips, J. Phys. Chem. C 111, 1733-1738 (2007). B.-S. Yeo, J. Stadler, T. Schmid, and R. Zenobi, Tip-Enhanced Raman Spectroscopy – its Status, Challenges, and Future Directions, Chem. Phys. Lett. 472, 1-13 (2009), “Frontiers” article. 
In the area of Mass Spectrometric Analyses of Complex Samples: M. Kalberer et al., First Identification of Polymers as Major Component of Atmospheric Organic Aerosols, Science 303, 1659-1662 (2004). R. Zenobi, Single-Cell Metabolomics: Analytical and Biological Perspectives, Science 342, 1243259 (2013). In the area of Mass Spectrometric Studies of Noncovalent Interactions: J. M. Daniel, S. D. Friess, S. Rajagopalan, S. Wendt, and R. Zenobi, Quantitative Determination of Noncovalent Binding Interactions using Soft Ionization Mass Spectrometry, Int. J. Mass Spectrom. 216, 1-27 (2002). V. Frankevich, K. Barylyuk, K. Chingin, R. Nieckarz, and R. Zenobi, Native Molecules in the Gas Phase? The Case of Green Fluorescent Protein, ChemPhysChem 14, 929-935 (2013). In the area of MALDI Mass Spectrometry: R. Knochenmuss and R. Zenobi, MALDI Ionization: In-Plume Processes, Chem. Rev. 103, 441-452 (2003). R. J. Wenzel, U. Matter, L. Schultheis, and R. Zenobi, Sensitive Analysis of Megadalton Ions using Cryodetection MALDI Time-of-flight Mass Spectrometry, Anal. Chem. 77, 4329-4373 (2005). In the area of Ambient Mass Spectrometry: H. Chen, A. Wortmann, W. Zhang, and R. Zenobi, Rapid in-vivo Fingerprinting of Non-volatile Compounds in Breath by Extractive Electrospray Ionization Quadrupole Time-of-Flight Mass Spectrometry, Angew. Chem. Internat. Ed. 119, 580-583 (2007). H. Chen, S. Yang, A. Wortmann, and R. Zenobi, Neutral Desorption Sampling of Living Objects for Rapid Analysis by Extractive Electrospray Ionization Mass Spectrometry, Angew. Chem. 119, 7735-7738 (2007); Angew. Chem. Int. Ed. 46, 7591-7594 (2007). H. Chen, G. Gamez, and R. Zenobi, What can we Learn from Ambient Ionization Techniques? J. Am. Soc. Mass Spectrom. (Critical Insight) 20, 1947-1963 (2009) L. Zhu, G. Gamez, H. Chen, K. Chingin, and R. Zenobi, Rapid Detection of Melamine in Untreated Milk and Wheat Gluten by Ultrasound-assisted Extractive Electrospray Ionization Mass Spectrometry (EESI-MS), Chem. Comm. 559-561 (2009). See also Electrospray ionization Matrix-assisted laser desorption/ionization Micro-arrays for mass spectrometry References Sources http://www.loc.ethz.ch/people/professoren/zenobi_EN http://pubs.acs.org/page/ancham/editors.html External links Academic staff of ETH Zurich Living people Mass spectrometrists Swiss chemists 1961 births University of Michigan fellows Thomson Medal recipients ETH Zurich alumni
Renato Zenobi
[ "Physics", "Chemistry" ]
1,639
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
34,802,592
https://en.wikipedia.org/wiki/Siemens%20mercury%20unit
The Siemens mercury unit is an obsolete unit of electrical resistance. It was defined by Werner von Siemens in 1860 as the resistance of a mercury column with a length of one metre and a uniform cross-section of one square millimetre, held at a temperature of zero degrees Celsius. It is equivalent to approximately 0.953 ohm. The bore of a glass tube is typically irregularly conical rather than perfectly cylindrical, which presented a problem in constructing precise measuring devices. One could make many tubes and test them for conical regularity, discarding the least regular ones; their regularity can be measured by inserting a small drop of mercury into one end of the tube, then measuring its length while sucking it along. The cross-sectional area at each end can then be measured by filling the tube with pure mercury at a fixed temperature, weighing it, and comparing that weight to the relative lengths of the mercury drop at each end. The tube can then be used for measurement by applying a formula obtained from these measurements that corrects for its conical shape. The Siemens mercury unit was superseded in 1881 by the ohm; the name "siemens" was later reused for a unit of electric conductance. References External links Units named after Siemens Units of electrical resistance Mercury (element) Obsolete units of measurement Werner von Siemens
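A small worked example of the defining relation R = ρL/A (the column dimensions are those of the definition above; the mercury resistivity is simply back-calculated from the quoted 0.953 ohm rather than taken from an independent source):

```python
# Resistance of a uniform conductor: R = rho * L / A.
length = 1.0            # m, mercury column length in the definition
area = 1.0e-6           # m^2, one square millimetre cross-section
R_siemens_unit = 0.953  # ohm, the approximate value quoted above

# Resistivity of mercury at 0 degrees Celsius implied by that value:
rho = R_siemens_unit * area / length
print(f"{rho:.3e} ohm*m")  # about 9.53e-07 ohm*m
```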
Siemens mercury unit
[ "Physics", "Mathematics" ]
257
[ "Electrical resistance and conductance", "Obsolete units of measurement", "Physical quantities", "Quantity", "Units of electrical resistance", "Units of measurement" ]
34,808,530
https://en.wikipedia.org/wiki/Intravoxel%20incoherent%20motion
Intravoxel incoherent motion (IVIM) imaging is a concept and a method initially introduced and developed by Le Bihan et al. to quantitatively assess all the microscopic translational motions that could contribute to the signal acquired with diffusion MRI. In this model, biological tissue contains two distinct environments: molecular diffusion of water in the tissue (sometimes referred to as 'true diffusion'), and microcirculation of blood in the capillary network (perfusion). The concept introduced by D. Le Bihan is that water flowing in capillaries (at the voxel level) mimics a random walk (“pseudo-diffusion” ) (Fig.1), as long as the assumption that all directions are represented in the capillaries (i.e. there is no net coherent flow in any direction) is satisfied. It is responsible for a signal attenuation in diffusion MRI, which depends on the velocity of the flowing blood and the vascular architecture. Similarly to molecular diffusion, the effect of pseudodiffusion on the signal attenuation depends on the b value. However, the rate of signal attenuation resulting from pseudodiffusion is typically an order of magnitude greater than molecular diffusion in tissues, so its relative contribution to the diffusion-weighted MRI signal becomes significant only at very low b values, allowing diffusion and perfusion effects to be separated. Model In the presence of the magnetic field gradient pulses of a diffusion MRI sequence, the MRI signal gets attenuated due to diffusion and perfusion effects. In a simple model, this signal attenuation, S/So, can be written as: [1] where is the volume fraction of incoherently flowing blood in the tissue (“flowing vascular volume”), the signal attenuation from the IVIM effect and is the signal attenuation from molecular diffusion in the tissue. Assuming blood water flowing in the randomly oriented vasculature changes several times direction (at least 2) during the measurement time (model 1), one has for : [2] where is the diffusion-sensitization of the MRI sequence, is the sum of the pseudo-diffusion coefficient associated to the IVIM effect and , the diffusion coefficient of water in blood: [3] where is the mean capillary segment length and is the blood velocity. If blood water flows without changing direction (either because flow is slow or measurement time is short) while capillary segments are randomly and isotropically oriented (model 2), becomes: [4] where is a parameter linked to the gradient pulse amplitude and time course (similar to the b value). In both cases, the perfusion effect results in a curvature of the diffusion attenuation plot towards b=0 (Fig.2). In a simple approach and under some approximations, the ADC calculated from 2 diffusion-weighted images acquired with b0=0 and b1, as ADC = ln(S(b0)/S (b1)), is: [5] where is the tissue diffusion coefficient. The ADC thus only depends on the flowing vascular volume (tissue vascularity) and not on the blood velocity and capillary geometry, which is a strong advantage. The contribution of perfusion to the ADC is larger when using small b values. On the other hand, set of data obtained from images acquired with a multiple b values can be fitted with Eq.[1] using either model 1 (Eq.[2,3]) or model 2(Eq.[4]) to estimate and/or blood velocity. The late part of the curve (towards high b values, generally above 1000 s/mm²) also presents some degree of curvature (Fig.2). 
This is because diffusion in biological tissues is not free (Gaussian), but can be hindered by many obstacles (in particular cell membranes) or even restricted (i.e. intracellular). Several models have been proposed to describe this curvature at higher b-values, mainly the “biexponential” model which assumes the presence of 2 water compartments with fast and slow diffusion (where neither compartment is the from IVIM), the relative 'fast' and 'slow' labels referring to restricted and hindered diffusion, rather than pseudodiffusion/perfusion and true (hindered) diffusion. Another alternative is the “kurtosis” model which quantifies the deviation from free (Gaussian) diffusion in the parameter (Eq. [7]). Biexponential model: [6] Where and are the relative fractions and diffusion coefficients of the fast and slow compartments. This general formulation of a biexponential decay of diffusion-weighted imaging signal with b-value can be used for IVIM, which requires sampling of low b-values (<100 s/mm²) to capture pseudodiffusion decay, or for restriction imaging, which requires higher b-value acquisitions (>1000 s/mm²) to capture restricted diffusion. Kurtosis model: [7] where is the tissue intrinsic diffusion coefficient and the Kurtosis parameter (deviation from Gaussian diffusion). Both models can be related assuming some hypotheses about the tissue structure and the measurement conditions. Separation of perfusion from diffusion requires good signal-to-noise ratios and there are some technical challenges to overcome (artifacts, influence of other bulk flow phenomena, etc.). Also the “perfusion” parameters accessible with the IVIM method somewhat differs from the “classical” perfusion parameters obtained with tracer methods: “Perfusion” can be seen with the physiologist eyes (blood flow) or the radiologist eyes (vascular density). Indeed, there is room to improve the IVIM model and understand better its relationship with the functional vascular architecture and its biological relevance. Applications IVIM MRI was initially introduced to evaluate perfusion and produce maps of brain perfusion, for brain activation studies (before the introduction of BOLD fMRI) and clinical applications (stroke, brain tumors). Recent work has proven the validity of the IVIM concept from fMRI, with an increase in the IVIM perfusion parameters in brain activated regions, and the potential of the approach to aid in our understanding of the different vascular contributions to the fMRI signal. IVIM MRI has also been used in the context of fMRI in a negative way. A limitation of BOLD fMRI is its spatial resolution, as flow increase in somewhat large arteries or veins feed or drain large neuronal territories. By inserting “diffusion” gradient pulses in the MRI sequence (corresponding to low b-values), one may crush the contribution of the largest vessels (with high D* values associated with fast flow) in the BOLD signal and improve the spatial resolution of the activation maps. Several groups have relied on this trick, though not always considering referring to the IVIM concept. This IVIM concept has also been borrowed to improve other applications, for instance, arterial spin labeling (ASL) or to suppress signal from extracellular flowing fluid in perfused cell systems. However, IVIM MRI has recently undergone a striking revival for applications not in the brain, but throughout the body as well. Following earlier encouraging results in the kidneys, or even the heart, IVIM MRI really took off for liver applications. 
For instance, Luciani et al. found that D* was significantly reduced in cirrhotic patients, which, according to the IVIM model, points to reduced blood velocity (and flow). (Another theoretical, rather unlikely interpretation would be that capillary segments become longer or more straight in those patients with liver fibrosis). The perfusion fraction, f, which is linked to blood volume in the IVIM model, remained normal, confirming earlier results by Yamada et al. However, blood volume is expected to be reduced in liver cirrhosis. One has to keep in mind that IVIM imaging has a differential sensitivity to vessel types, according to the range of motion sensitization (b values) which are used. Signal from large vessels with rapid flow disappears quickly with very low b values, while smaller vessels with slower flow might still contribute to the IVIM signal acquired with b values larger than 200 s/mm². It has also been shown that the parameter f, often related to perfusion fraction, is sensitive to differential spin-spin relaxation rates in the two model compartments (blood/tissue) and can thus be overestimated in highly perfused tissue. Correction of this effect is achieved by additional images at a different echo time. Many more applications are now under investigation, especially for imaging of patients suspected of cancer in the body (prostate, liver, kidney, pancreas, etc.) and the human placenta. A key feature of IVIM diffusion MRI is that it does not involve contrast agents, and it may appear as an interesting alternative for perfusion MRI in some patients at risk for Nephrogenic Systemic Fibrosis (NSF). References Magnetic resonance imaging
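A minimal sketch of fitting the two-compartment (perfusion plus diffusion) signal model to multi-b-value data; the synthetic parameter values, the noise level and the use of scipy.optimize.curve_fit are illustrative assumptions, not a validated IVIM fitting protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, f, D_star, D):
    """Two-compartment IVIM model: S/S0 = f*exp(-b*D_star) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D)

# Synthetic data: b values in s/mm^2, "true" parameters chosen for illustration.
b = np.array([0, 10, 20, 50, 100, 200, 400, 600, 800], dtype=float)
rng = np.random.default_rng(0)
signal = ivim_signal(b, f=0.1, D_star=0.02, D=0.001) + rng.normal(0, 0.005, b.size)

# Fit with loose bounds; D_star is expected to be roughly an order of magnitude larger than D.
popt, _ = curve_fit(ivim_signal, b, signal,
                    p0=[0.1, 0.01, 0.001],
                    bounds=([0, 0.003, 0.0001], [0.5, 0.5, 0.003]))
f_hat, D_star_hat, D_hat = popt
print(f"f={f_hat:.3f}, D*={D_star_hat:.4f} mm^2/s, D={D_hat:.5f} mm^2/s")
```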
Intravoxel incoherent motion
[ "Chemistry" ]
1,853
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
34,808,927
https://en.wikipedia.org/wiki/Bioproducts%20engineering
Bioproducts engineering or bioprocess engineering refers to engineering of bio-products from renewable bioresources. This pertains to the design and development of processes and technologies for the sustainable manufacture of bioproducts (materials, chemicals and energy) from renewable biological resources. Bioproducts engineers harness the molecular building blocks of renewable resources to design, develop and manufacture environmentally friendly industrial and consumer products. From biofuels, renewable energy, and bioplastics to paper products and "green" building materials such as bio-based composites, bioproducts engineers are developing sustainable solutions to meet the world's growing materials and energy demand. Conventional bioproducts and emerging bioproducts are two broad categories used to categorize bioproducts. Examples of conventional bio-based products include building materials, pulp and paper, and forest products. Examples of emerging bioproducts or biobased products include biofuels, bioenergy, starch-based and cellulose-based ethanol, bio-based adhesives, biochemicals, biodegradable plastics, etc. Bioproducts engineers play a major role in the design and development of "green" products including biofuels, bioenergy, biodegradable plastics, biocomposites, building materials, paper and chemicals. Bioproducts engineers also develop energy-efficient, environmentally friendly manufacturing processes for these products as well as effective end-use applications. Bioproducts engineers play a critical role in a sustainable 21st century bio-economy by using renewable resources to design, develop, and manufacture the products we use every day. The career outlook for bioproducts engineers is very bright, with employment opportunities in a broad range of industries, including pulp and paper, alternative energy, renewable plastics, and other fiber, forest products, building materials and chemical-based industries. Bioprocess engineering is a specialization of biotechnology, biological engineering, chemical engineering or agricultural engineering. It deals with the design and development of equipment and processes for the manufacturing of products such as food, feed, pharmaceuticals, nutraceuticals, chemicals, and polymers and paper from biological materials. Bioprocess engineering draws on mathematics, biology and industrial design, and covers areas such as the design of fermentors and the study of their modes of operation. It also deals with studying the biotechnological processes used in industry for large-scale production of biological products, in order to optimize the yield and the quality of the end product. Bioprocess engineering may include the work of mechanical, electrical and industrial engineers, who apply the principles of their disciplines to processes based on living cells or subcomponents of such cells. 
See also Biochemicals Biofact (biology) Biogas Biomass Biomass (ecology) Biorefining Bioresource engineering Forest Non-timber forest product Outline of forestry Colleges and universities University of Minnesota (Bioproducts and Biosystems Engineering) SUNY-ESF (Bioprocess Engineering Program) Université de Sherbrooke UC Berkeley Savannah Technical College East Carolina University Institute of Chemical Technology (ICT) Mumbai Jadavpur University Universidade Federal do Rio de Janeiro Universidade Estadual do Rio Grande do Sul University of Stellenbosch NC State University Virginia Tech Washington State University University of Washington University of Maine References Further reading Bowyer, J.L., Ramaswamy, S. Bioenergy development: Alignment is essential, Part 1, Bioenergy Technologies Tappi Publication, January 2009, p14-17 Bowyer, J.L., Ramaswamy, S. Bioenergy development: Alignment is essential for Bioenergy Development, Part II, Exploring possible scenarios resulting from a supply gap, and possible effects of bioenergy development in environmental quality Tappi Publication, March 2009, p16-19 External links U.S. DOE Biomass Program Energy Title (Title IX) of the Farm Security and Rural Investment Act of 2002 Sustainable agriculture Biotechnology Sustainable business Sustainable forest management
Bioproducts engineering
[ "Biology" ]
842
[ "nan", "Biotechnology" ]
34,811,905
https://en.wikipedia.org/wiki/Verinice
Verinice is a free and open-source information security management system (ISMS) application which can help in creating and maintaining systems for information security management. Verinice was written and is maintained primarily by a German company named SerNet Service Network GmbH. Verinice is licensed under the GNU General Public License (version 3 or later). Its main users are small and medium-sized companies, some large enterprises and government agencies. In Germany, Verinice is the ISMS tool recommended by the German Association of the Automotive Industry (VDA) for its members, such as Volkswagen, Daimler AG, Fiat and other large manufacturers. VDA has also sponsored Verinice development since release 1.2. Verinice supports the operating systems Windows, Linux and OS X and has licensed the IT Baseline Protection Catalogs from the Federal Office for Information Security. Other tools for creating ISMS CertVision NormTracker HiScout GRC Suite GS Tool SecuMax CRISAM ibi systems iris eramba References External links module for data privacy from federal commissioner for data protection and information freedom Interest group of German car manufacturers baseline protection module for car manufacturers Verinice Download Data security
Verinice
[ "Engineering" ]
238
[ "Cybersecurity engineering", "Data security" ]
34,812,675
https://en.wikipedia.org/wiki/Event%20structure
In mathematics and computer science, an event structure describes sequences of events that can be triggered by combinations of other events, with certain forbidden combinations of events. Different sources provide more or less flexible mathematical formalizations of the way events can be triggered and which combinations are forbidden. The most general of these formalizations is given by Glynn Winskel. Winskel formalizes an event structure as a triple (E, Con, ⊢), in which: E is a set of events, not necessarily finite. Con is a family of finite subsets of E, the subsets that are deemed to be consistent (not forbidden). If X is one of these consistent sets, then every subset of X must also be consistent. That is, Con must be closed under the operation of taking subsets. ⊢ is a binary relation from consistent sets to elements of E. The relation X ⊢ e, for X ∈ Con and e ∈ E, is interpreted as meaning that when the events so far form set X, this enables e to be the next event. When X ⊢ e, it is required that Y ⊢ e for every consistent superset Y of X (with X ⊆ Y and Y ∈ Con). According to Winskel's definitions, a configuration of an event structure is a subset of E all of whose finite subsets are consistent and whose events are all secured. Here, an event is secured when it belongs to a finite sequence of events from the configuration, each of which is enabled by the subset of earlier events from the same sequence. The nlab simplifies these definitions in two ways: It replaces the family Con of consistent sets by an irreflexive symmetric relation on E called incompatibility (or conflict), such that a finite set of events is consistent if and only if it contains no incompatible pair. And (either separately or with both simplifications together) it replaces the enabling relation by a partial order relation on E called causal dependency, such that each event has finitely many predecessors, all of which must have happened earlier to enable the event. For the event structures with both simplifications, which nlab calls prime event structures, the configurations are the downward-closed subsets of the partial order that include no incompatible pairs. See also Antimatroid, a system of events ordered by enabling subsets but without a consistency requirement References Causality Domain theory
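A minimal sketch of the simplified (prime) event structure described at the end, checking that a set of events is a configuration, i.e. downward-closed and conflict-free; the events, causal dependencies and conflict pairs are invented for illustration:

```python
from itertools import combinations

# Hypothetical prime event structure: causal dependency given as immediate predecessors,
# plus a symmetric, irreflexive conflict relation.
events = {"a", "b", "c", "d"}
causes = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b"}}  # e.g. b and c each depend on a
conflict = {frozenset({"b", "c"})}                          # b and c can never both occur

def is_configuration(xs):
    """A configuration is downward-closed under causal dependency and conflict-free."""
    xs = set(xs)
    downward_closed = all(causes[e] <= xs for e in xs)
    conflict_free = all(frozenset(pair) not in conflict for pair in combinations(xs, 2))
    return downward_closed and conflict_free

print(is_configuration({"a", "b", "d"}))  # True: causally closed, no conflict
print(is_configuration({"a", "b", "c"}))  # False: b and c are in conflict
print(is_configuration({"b"}))            # False: b requires a
```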
Event structure
[ "Physics", "Mathematics" ]
443
[ "Order theory", "Domain theory" ]
34,814,274
https://en.wikipedia.org/wiki/Group%20I%20pyridoxal-dependent%20decarboxylases
In molecular biology, the group I pyridoxal-dependent decarboxylases, also known as glycine cleavage system P-proteins, are a family of enzymes consisting of glycine cleavage system P-proteins (glycine dehydrogenase (decarboxylating)) from bacterial, mammalian and plant sources. The P protein is part of the glycine decarboxylase multienzyme complex (GDC) also annotated as glycine cleavage system or glycine synthase. The P protein binds the alpha-amino group of glycine through its pyridoxal phosphate cofactor, carbon dioxide is released and the remaining methylamine moiety is then transferred to the lipoamide cofactor of the H protein. GDC consists of four proteins P, H, L and T. Pyridoxal-5'-phosphate-dependent amino acid decarboxylases can be divided into four groups based on amino acid sequence. Group I comprises glycine decarboxylases. See also Group II pyridoxal-dependent decarboxylases Group III pyridoxal-dependent decarboxylases Group IV pyridoxal-dependent decarboxylases References Protein families
Group I pyridoxal-dependent decarboxylases
[ "Biology" ]
270
[ "Protein families", "Protein classification" ]
34,814,563
https://en.wikipedia.org/wiki/Group%20II%20pyridoxal-dependent%20decarboxylases
In molecular biology, group II pyridoxal-dependent decarboxylases are a family of enzymes including aromatic-L-amino-acid decarboxylase (L-dopa decarboxylase or tryptophan decarboxylase) that catalyse the decarboxylation of tryptophan to tryptamine, tyrosine decarboxylase that converts tyrosine into tyramine and histidine decarboxylase that catalyses the decarboxylation of histidine to histamine. Pyridoxal-5'-phosphate-dependent amino acid decarboxylases can be divided into four groups based on amino acid sequence: group II includes glutamate, histidine, tyrosine, and aromatic-L-amino-acid decarboxylases. See also Group I pyridoxal-dependent decarboxylases Group III pyridoxal-dependent decarboxylases Group IV pyridoxal-dependent decarboxylases References Protein families
Group II pyridoxal-dependent decarboxylases
[ "Biology" ]
232
[ "Protein families", "Protein classification" ]
34,816,085
https://en.wikipedia.org/wiki/Web%20%28differential%20geometry%29
In mathematics, a web permits an intrinsic characterization in terms of Riemannian geometry of the additive separation of variables in the Hamilton–Jacobi equation. Formal definition An orthogonal web on a Riemannian manifold (M,g) is a set of n pairwise transversal and orthogonal foliations of connected submanifolds of codimension 1, where n denotes the dimension of M. Note that two submanifolds of codimension 1 are orthogonal if their normal vectors are orthogonal and in a nondefinite metric orthogonality does not imply transversality. Alternative definition Given a smooth manifold of dimension n, an orthogonal web (also called orthogonal grid or Ricci’s grid) on a Riemannian manifold (M,g) is a set of n pairwise transversal and orthogonal foliations of connected submanifolds of dimension 1. Remark Since vector fields can be visualized as stream-lines of a stationary flow or as Faraday’s lines of force, a non-vanishing vector field in space generates a space-filling system of lines through each point, known to mathematicians as a congruence (i.e., a local foliation). Ricci’s vision filled Riemann’s n-dimensional manifold with n congruences orthogonal to each other, i.e., a local orthogonal grid. Differential geometry of webs A systematic study of webs was started by Blaschke in the 1930s. He extended the same group-theoretic approach to web geometry. Classical definition Let M be a differentiable manifold of dimension N = nr. A d-web W(d,n,r) of codimension r in an open set D ⊆ M is a set of d foliations of codimension r which are in general position. In the notation W(d,n,r) the number d is the number of foliations forming a web, r is the web codimension, and n is the ratio of the dimension nr of the manifold M and the web codimension. Of course, one may define a d-web of codimension r without having r as a divisor of the dimension of the ambient manifold. See also Foliation Parallelization (mathematics) Notes References Differential geometry Manifolds
Web (differential geometry)
[ "Mathematics" ]
480
[ "Topological spaces", "Topology", "Manifolds", "Space (mathematics)" ]
40,100,778
https://en.wikipedia.org/wiki/Software%20map
A software map represents static, dynamic, and evolutionary information of software systems and their software development processes by means of 2D or 3D map-oriented information visualization. It constitutes a fundamental concept and tool in software visualization, software analytics, and software diagnosis. Its primary applications include risk analysis for and monitoring of code quality, team activity, or software development progress and, generally, improving effectiveness of software engineering with respect to all related artifacts, processes, and stakeholders throughout the software engineering process and software maintenance. Motivation and concepts Software maps are applied in the context of software engineering: Complex, long-term software development projects are commonly faced by manifold difficulties such as the friction between completing system features and, at the same time, obtaining a high degree of code quality and software quality to ensure software maintenance of the system in the future. In particular, "Maintaining complex software systems tends to be costly because developers spend a significant part of their time with trying to understand the system’s structure and behavior." The key idea of software maps is to cope with that challenge and optimization problems by providing effective communication means to close the communication gap among the various stakeholders and information domains within software development projects and obtaining insights in the sense of information visualization. Software maps take advantage of well-defined cartographic map techniques using the virtual 3D city model metaphor to express the underlying complex, abstract information space. The metaphor is required "since software has no physical shape, there is no natural mapping of software to a two-dimensional space". Software maps are non-spatial maps that have to convert the hierarchy data and its attributes into a spatial representation. Applications Software maps generally allow for comprehensible and effective communication of course, risks, and costs of software development projects to various stakeholders such as management and development teams. They communicate the status of applications and systems currently being developed or further developed to project leaders and management at a glance. "A key aspect for this decision-making is that software maps provide the structural context required for correct interpretation of these performance indicators". As an instrument of communication, software maps act as open, transparent information spaces which enable priorities of code quality and the creation of new functions to be balanced against one another and to decide upon and implement necessary measures to improve the software development process. For example, they facilitate decisions as to where in the code an increase in quality would be beneficial both for speeding up current development activities and for reducing risks of future maintenance problems. Due to their high degree of expressiveness (e.g., information density) and their instantaneous, automated generation, the maps additionally serve to reflect the current status of system and development processes, bridging an essential information gap between management and development teams, improve awareness about the status, and serve as early risk detection instrument. 
Contents Software maps are based on objective information as determined by KPI-driven code analysis as well as by imported information from software repository systems, information from the source code, or software development tools and programming tools. In particular, software maps are not bound to a specific programming language, modeling language, or software development process model. Software maps use the hierarchy of the software implementation artifacts such as source code files as a base to build a tree mapping, i.e., a rectangular area that represents the whole hierarchy, subdividing the area into rectangular sub-areas. A software map, informally speaking, looks similar to a virtual 3D city model, whereby artifacts of the software system appear as virtual, rectangular 3D buildings or towers, which are placed according to their position in the software implementation hierarchy. Software maps can express and combine information about software development, software quality, and system dynamics by mapping that information onto visual variables of the tree map elements such as footprint size, height, color or texture. They can systematically be specified, automatically generated, and organized by templates. Mapping software system example Software maps "combine thematic information about software development processes (evolution), software quality, structure, and dynamics and display that information in a cartographic manner". For example: The height of a virtual building can be proportional to the complexity of the code unit (e.g., single or combined software metrics). The ground area of a virtual 3D building can be proportional to the number of lines of code in the module (e.g., non-comment lines of code, NCLOC). The color can express the current development status, i.e., how many developers are changing/editing the code unit. With this exemplary configuration, the software map shows crucial points in the source code in relation to aspects of the software development process. For example, it becomes obvious at a glance what to change in order to: implement changes quickly; evaluate quickly the impact of changes in one place on functionality elsewhere; reduce entanglements that lead to uncontrolled processes in the application; find errors faster; discover and eliminate bad programming style. Software maps represent key tools in the scope of automated software diagnostics. As business intelligence tools and recommendation systems Software maps can be used, in particular, as an analysis and presentation tool of business intelligence systems specialized in the analysis of software-related data. Furthermore, software maps "serve as recommendation systems for software engineering". Software maps are not limited to software-related information: they can include any hierarchical system information as well, for example, maintenance information about complex technical artifacts. Visualization techniques Software maps are investigated in the domain of software visualization. The visualization of software maps is commonly based on tree mapping, "a space-filling approach to the visualization of hierarchical information structures", or other hierarchy mapping approaches. Layout algorithms To construct software maps, different layout approaches are used to generate the basic spatial mapping of components, such as: Tree-map algorithms that initially map the software hierarchy into a recursively nested rectangular area.
Voronoi-map algorithms that initially map the software hierarchy by generating a Voronoi map. Layout stability The spatial arrangement computed by layouts such as those defined by tree maps strictly depends on the hierarchy. If software maps have to be generated frequently for an evolving or changing system, the usability of software maps is affected by non-stable layouts, that is, minor changes to the hierarchy may cause significant changes to the layout. In contrast to regular Voronoi treemap algorithms, which do not provide deterministic layouts, the layout algorithm for Voronoi treemaps can be extended to provide a high degree of layout similarity for varying hierarchies. Similar approaches exist for the tree-map based case. History Software map methods and techniques belong to the scientific disciplines of software visualization and information visualization. They form a key concept and technique within the field of software diagnosis. They also have applications in software mining and software analytics. Software maps have been extensively developed and researched at, for example, the Hasso Plattner Institute for IT systems engineering, in particular for large-scale, complex IT systems and applications. References External links Scientific conference VISSOFT (IEEE Working Conference on Software Visualization) Interactive Rendering of Complex 3D-Treemaps Multiscale Visual Comparison of Execution Traces Interactive Software Maps for Web-Based Source Code Analysis Extending Recommendation Systems with Software Maps A Visual Analysis Approach to Support Perfective Software Maintenance ViewFusion: Correlating Structure and Activity Views for Execution Traces A Visual Analysis and Design Tool for Planning Software Reengineerings Interactive Areal Annotations for 3D Treemaps of Large-Scale Software Systems Visualization of Execution Traces and its Application to Software Maintenance Understanding Complex Multithreaded Software Systems by Using Trace Visualization Visualization of Multithreaded Behavior to Facilitate Maintenance of Complex Software Systems Visualizing Massively Pruned Execution Traces to Facilitate Trace Exploration Projecting Code Changes onto Execution Traces to Support Localization of Recently Introduced Bugs SyncTrace: Visual Thread-Interplay Analysis Software maintenance Software metrics Software development process Software development Software quality Quality Infographics Visualization (graphics)
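The metric-to-visual-variable mapping described in the article (building height for complexity, footprint for lines of code, colour for development activity) can be sketched in a few lines. The snippet below is a minimal illustration only; the metric names, scaling factors, and colour thresholds are invented and are not taken from any published software-map tool.

```python
# Minimal sketch: map per-file software metrics to the visual variables of a
# software-map "building". Metric names and scaling factors are illustrative.

def visual_variables(metrics):
    """Derive height, footprint and colour for one code unit."""
    height = metrics["complexity"] * 2.0       # taller building = more complex unit
    footprint = metrics["ncloc"] / 10.0        # larger base = more non-comment LOC
    activity = metrics["active_developers"]    # colour encodes development activity
    if activity == 0:
        colour = "green"
    elif activity <= 2:
        colour = "yellow"
    else:
        colour = "red"
    return {"height": height, "footprint": footprint, "colour": colour}

files = {
    "parser.py": {"complexity": 42, "ncloc": 850, "active_developers": 3},
    "utils.py":  {"complexity": 5,  "ncloc": 120, "active_developers": 0},
}
for name, metrics in files.items():
    print(name, visual_variables(metrics))
```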
Software map
[ "Mathematics", "Technology", "Engineering" ]
1,617
[ "Software testing", "Metrics", "Quantity", "Computer occupations", "Software metrics", "Software engineering", "Software maintenance", "Software development" ]
40,102,361
https://en.wikipedia.org/wiki/Primecoin
Primecoin (Abbreviation: XPM) is a cryptocurrency that implements a proof-of-work system that searches for chains of prime numbers. History Primecoin was launched in 2013 by Sunny King, who also founded Peercoin. Unlike other cryptocurrencies, which are mined using algorithms that solve mathematical problems with no extrinsic value, mining Primecoin involves producing chains of prime numbers (Cunningham and bi-twin chains). These are useful to scientists and mathematicians, and they meet the requirements for a proof-of-work system: they are hard to compute but easy to verify, and the difficulty is adjustable. Shortly after its launch, some trade journals reported that the rush of over 18,000 new users seeking to mine Primecoin overwhelmed providers of dedicated servers. It was ranked among the top ten cryptocurrencies before 2014. Primecoin has a block time of one minute, changes difficulty every block, and has a block reward that is a function of the difficulty. References Further reading External links Cryptocurrency projects Prime numbers Currencies introduced in 2013
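To illustrate the kind of structure Primecoin miners search for, the sketch below checks whether a number starts a Cunningham chain of the first kind (each prime p is followed by 2p + 1). It is a simplified illustration using SymPy's primality test, not Primecoin's actual proof-of-work verification.

```python
# Check for a Cunningham chain of the first kind: p, 2p+1, 2(2p+1)+1, ... all prime.
from sympy import isprime

def is_cunningham_chain_first_kind(start, length):
    p = start
    for _ in range(length):
        if not isprime(p):
            return False
        p = 2 * p + 1
    return True

# 2 -> 5 -> 11 -> 23 -> 47 is a Cunningham chain of the first kind of length 5.
print(is_cunningham_chain_first_kind(2, 5))   # True
print(is_cunningham_chain_first_kind(2, 6))   # False: the next term, 95 = 5 * 19, is composite
```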
Primecoin
[ "Mathematics" ]
225
[ "Prime numbers", "Mathematical objects", "Numbers", "Number theory" ]
40,104,795
https://en.wikipedia.org/wiki/John%20A.%20Hartford%20Foundation
The John A. Hartford Foundation (JAHF or the Hartford Foundation) is a private United States–based philanthropy whose current mission is to improve the care of older adults. For many years, it made grants for research and education in geriatric medicine, nursing and social work. It now focuses on three priority areas: creating age-friendly health systems, supporting family caregivers and improving serious illness, and end-of-life care. History Based in New York City, the foundation was founded in 1929 by John Augustine Hartford and later joined by his brother George Ludlum Hartford, the family owners of the A&P grocery chain. The foundation's mission from the beginning has been "to do the greatest good for the greatest number." In the early and mid-20th century, the foundation primarily made grants for research centered on basic and clinical medicine and health care quality and costs. It was at one time the largest funder in these areas. In 1982, the focus turned towards improving the care of older adults, as there became an increasing need to fund this underserved area. In 1994, the foundation made this priority its chief mission. Leadership Terry Fulmer, PhD, RN, FAAN, is the President of the foundation. She serves as the chief strategist for the foundation, and her vision for better care of older adults is influencing the Age-Friendly Health Systems social movement. She is an elected member of the National Academy of Medicine and recently served on the independent Coronavirus Commission for Safety and Quality in Nursing Homes established to advise the Centers for Medicare and Medicaid Services. She previously served as Distinguished Professor and Dean of Health Sciences at Northeastern University. Prior, she served as the Erline Perkins McGriff Professor and Founding Dean of the New York University College of Nursing. She has held faculty appointments at Columbia University, where she was the Anna C. Maxwell Chair in Nursing, and she has also held appointments at Boston College, Yale University, and the Harvard Division on Aging at Harvard Medical School. She is a Distinguished Practitioner of the National Academies of Practice and is currently an attending nurse and senior nurse in the Yvonne L. Munn Center for Nursing Research at the Massachusetts General Hospital and an attending nurse at Mount Sinai Medical Center in NYC. Her clinical appointments have included the Beth Israel Hospital in Boston, the Massachusetts General Hospital, and the NYU Langone Medical Center. She is a Fellow of the American Academy of Nursing, the Gerontological Society of America, and the New York Academy of Medicine where she served as vice-chair. She has authored nearly 400 peer-reviewed papers. Priority areas In a rapidly evolving health care environment, the Foundation supports the spread of evidence-based models that can dramatically accelerate care improvement for older adults, which benefits all of us. The three priority areas are: Age-Friendly Health Systems; Family Caregiving, and Serious Illness & End of Life. Age-Friendly Health Systems To prevent harm to older adults, improve health outcomes, and lower overall costs, health systems must adopt evidence-based models and practices that deliver better care to our rapidly aging population across all settings, including the home and community. In partnership with expert innovators and health care leaders, The John A. Hartford Foundation is working to create health systems that are age-friendly and better able to meet the goals of the Triple Aim. 
Grantmaking in this area will: (1)Develop large-scale approaches that help health systems transform care; (2)Operationalize the essential elements of good care, building on the Foundation’s investments in evidence-based models and best practices; and (3)Better integrate community-based supports and services within the health system and across the continuum of care. Family Caregiving There are nearly 18 million people in the US regularly providing care to an older loved one who needs assistance. These family caregivers frequently perform heroic tasks, but are often invisible in our health care system and receive little preparation and support. Recognizing the critical role that family caregivers play, The John A. Hartford Foundation is committed to transforming our health care and social services systems to meet the needs of family caregivers. Grantmaking in this area will: (1)Improve the ability of health systems and providers to identify, assess, and support family caregivers; (2)Raise awareness among policymakers, health system leaders, funders, and the public to drive change; and (3)Create large-scale change in partnership with national efforts. Serious Illness and End of Life With increasing age comes greater risk for serious illness and the natural progression towards the end of life. Too often, care at this time fails to meet the goals and preferences of older adults and results in harm and burden. Palliative care and other effective approaches must be more widely available. The John A. Hartford Foundation will continue to promote care that preserves dignity and honors the wishes of older adults and their families. Grantmaking in this area will: (1) Increase access to high-quality palliative care services and other evidence-based models and practices; (2) Develop approaches for better educating and preparing the health care workforce; and (3) Foster communication and community-based solutions while informing public policy supportive of the needs of the seriously ill and their families. Activities From about 1980 to 2012, the foundation focused on a two-pronged effort to build training capacity and conduct research into different models of care for older adults at schools of medicine, nursing, and social work. Its current programs aim to more directly impact the health of older adults. In 2013, the Foundation organized its grantmaking in five portfolios: Interprofessional Leadership in Action, Linking Education and Practice, Developing and Disseminating Models of Care, Tools and Measures for Quality Care, and Policy and Communications. In 2015 and 2016, the foundation implemented its current priority areas: creating age-friendly health systems, supporting family caregivers, and improving serious illness and end-of-life care. In 2016, the Age-Friendly Health System initiative was started in collaboration with the Institute for Healthcare Improvement (IHI), American Hospital Association (AHA), and the Catholic Health Association of the United States (CHA). In 2008, the foundation led a consortium of grantmakers to fund a study from the Institute of Medicine to look at the "crisis" of an ill-prepared workforce and outdated models of caring for older adults." Geriatric training In order to grow the geriatric workforce, JAHF has helped to develop training programs across various disciplines, from university-based physicians to nurses and social workers. 
The hope has been to establish a strong interdisciplinary labor force that is well-trained clinically and well-equipped academically to further geriatric research and care. Medicine One of the largest and most important programs in the foundation's recent history has been to help build academic capacity in geriatric medicine through the Centers of Excellence in Geriatric Medicine, which started in 1986. They are located at academic medical centers around the country, and are known as high-throughput producers of academic geriatricians as well as the generators of basic, clinical, and population level medical knowledge about older adults. As many as 55,000 physicians are trained or mentored at these centers annually. The Paul Beeson Physician Faculty Scholars in Aging Research Program was launched in 1994, in partnership with Atlantic Philanthropies, the Commonwealth Fund, and the Starr Foundation. As of 2019, the three-year fellowship has trained 225 scholars, with significant impacts to aging research. The foundation has also supported training and development programs for medical students, fellows, junior faculty, and senior thought leaders through its funding of the Association of American Medical Colleges for geriatric scholarships, residence programs, and curriculum development. Another important initiative has been building bridges from geriatric medicine out to the subspecialties of internal medicine and surgical and related specialties. Nursing Geriatric nursing has long been an important focus of JAHF, and this culminated in $5 million of funding to establish the John A. Hartford Foundation Institute for the Advancement of Geriatric Nursing in 1996, the first of its kind in the US. From supporting nursing societies to the American Association of Colleges of Nursing, the institute helped to build curriculums and experiences to expose nursing students to geriatrics in their programs. To further the academic training of nurses, five centers of geriatric nursing excellence were established – 300 trainees had received pre- and postdoctoral fellowships through 2015, and as of 2012, 95% were part of nursing school faculties. Social work The foundation focused on enhancing the capability of schools of social work to train aging-competent social workers through the Geriatric Social Work Initiative in the late 1990s. There were three core goals for the initiative: (1) work with the Council on Social Work Education to incorporate geriatrics into the curriculum; (2) offer scholarships and fellowships to faculty and students to fund their pursuits in geriatric social work; and (3) provide practical exposure by placing upper-level social work students in field experiences. Care models Community-based care Programs of All-Inclusive Care for the Elderly (PACE) has long been championed by JAHF. Providing services for approximately 40,000 people, the model allows persons older than fifty-five to receive health and social services in their homes, instead of resorting to long-term care facilities. Despite being reimbursed by Medicare and Medicaid, PACE still only covers a small portion of those who are eligible. Team Care Separate from the foundation’s support of geriatric training, there were multiple studies done on promoting the interdisciplinary nature of care teams. Beyond just physicians and nurses, these teams also included social workers and others, and were especially effective for managing those with complex conditions and multiple comorbidities. 
Hospital and home care For some older adults with serious illness, the preferred place of treatment is their home. The Hospital at Home program allows this option for conditions such as congestive heart failure. The foundation has also invested in other innovative delivery models, such as Acute Care of the Elderly (ACE), which focuses on structuring age-appropriate hospital rooms, and Nurses Improving Care for Healthsystem Elders (NICHE), a framework for age-friendly nursing care in hospitals and other healthcare settings. Palliative care Originating in 2006, JAHF’s support of the Center for Palliative Care at Mount Sinai Medical Center has been one of its most significant projects. Despite difficulty in securing Medicare reimbursement, palliative care has expanded significantly, with almost 75% of all hospitals offering it in some capacity. Leadership and policy JAHF has avidly supported thought leaders in policy, from funding National Health Policy Forum seminars to investing in various reports by the National Academies. The foundation continues to engage in the policy sphere through its various partnerships with government agencies. Special grants The foundation has made rare special grants, including grants in response to the September 11 terrorist attacks and Hurricanes Katrina and Rita. The grants have often focused on better disaster preparation and relief policies for institutionalized older adults. Grantee communication Beginning in 2006, the foundation has made efforts to communicate with its grantees and other stakeholders, sponsoring grantee perception surveys by the Center for Effective Philanthropy and released in part to the public. (A 2010 Grantee Perception Report is not currently publicly available.) It has circulated an e-newsletter to its stakeholder community since at least 2004, and has launched an online blog, Health AGEnda. Impact When JAHF entered the space of aging and health, there were very few funders, often with limited scope. The Veterans Health Administration and the Bureau of Health Professions were the only ones involved in geriatrician training. The Gerontological Society of America and the American Geriatrics Society, both created in the 1940s, helped to finance research and clinical work in aging. And in 1974, the National Institute on Aging was established, also with the primary intent to fund clinical research. At that time (late 1970s), there were fewer than 750 geriatricians in the entire United States. Since then, JAHF has helped make tremendous progress, with more than 7,000 geriatricians certified by 2010 and geriatrics integrated into almost all medical curricula and board certification requirements across the country. Geriatrics exposure has also significantly changed in nursing education, with 90% of bachelor’s-level nursing programs incorporating geriatrics into their required coursework by 2003. This training is not limited to physicians and nurses; JAHF has sought to bring geriatrics into the professional education of all health workers on some level. Through its funding of work on care models, the foundation has helped to foster various experiential care delivery methods that show promise clinically and economically. As more evidence proves the effectiveness of these models, the spotlight is now on the policy work needed to secure reimbursements and scale up these programs. Governance The foundation, like most large U.S. charitable foundations, has no set closure date.
Although not directly related by mission or program activities, in 1983 the Hartford Family Foundation was established in the State of New Jersey to preserve "the memory of the late George Huntington Hartford and the company he founded in 1859." Investments The foundation solicits no new donations, and invests the assets that it has not yet distributed to maximize the return on investment. Unsolicited donations or estate gifts are deposited into the foundation's core accounts, while larger donations are earmarked and targeted for projects mutually agreeable to the foundation and the donor(s). Awards and honors The foundation's grantees are often recognized with prestigious awards and honors, including multiple MacArthur Fellow ("genius") grants. The foundation itself has won recognition; for example, in 2011 the foundation received the Community College of Philadelphia's Foundation Keystone Award for its work on the needs of older adults in community college nursing programs. Criticism The foundation's genesis as an offspring of the A&P supermarket fortune, and its consequent stockholding ties with the A&P during the chain's marketplace decline in the 1960s and '70s, proved almost disastrous to the solvency of the foundation. At several points grantmaking was placed on hiatus. In 1969, the foundation attracted the attention of the U.S. House Ways and Means Committee during hearings about the tax treatment of foundations with substantial financial links to corporations. Huntington Hartford, John and George Hartford's nephew and never a part of the foundation's governance structure, was highly critical of this lingering financial relationship. A period of A&P stock divestiture ensued and the foundation returned to financial stability by the late 1970s. See also The Great Atlantic & Pacific Tea Company George Huntington Hartford George Ludlum Hartford John Augustine Hartford Huntington Hartford References External links The John A. Hartford Foundation The Great Atlantic & Pacific Tea Company The Hartford Family Foundation 1929 establishments in New York (state) Biomedical research foundations Medical and health foundations in the United States Non-profit organizations based in New York City Organizations established in 1929 The Great Atlantic & Pacific Tea Company
John A. Hartford Foundation
[ "Engineering", "Biology" ]
3,079
[ "Biotechnology organizations", "Biomedical research foundations" ]
3,962,591
https://en.wikipedia.org/wiki/Experience%20modifier
In the insurance industry in the United States, an experience modifier or experience modification is an adjustment of an employer's premium for worker's compensation coverage based on the losses the insurer has experienced from that employer. An experience modifier of 1 would be applied for an employer that had demonstrated the actuarially expected performance. Poorer loss experience leads to a modifier greater than 1, and better experience to a modifier less than 1. The loss experience used in determining the modifier typically comprises three years but excluding the immediate past year. For instance, if a policy expired on January 1, 2018, the period reflected by the experience modifier would run from January 1, 2014 to January 1, 2017. Methods of calculation Experience modifiers are normally recalculated for an employer annually by using experience ratings. The rating is a method used by insurers to determine pricing of premiums for different groups or individuals based on the group or individual's history of claims. The experience rating approach uses an individual's or group’s historic data as a proxy for future risk, and insurers adjust and set insurance premiums and plans accordingly. Each year, a newer year's data is added to the three year window of experience used in the calculation, and the oldest year from the prior calculation is dropped off. The other two years worth of data in the rating window are also updated on an annual basis. Experience modifiers are calculated by organizations known as "rating bureaus" and rely on information reported by insurance companies. The rating bureau used by most states is the NCCI, the National Council on Compensation Insurance. But a number of states have independent rating bureaus: California, Michigan, Delaware, and Pennsylvania have stand-alone rating bureaus that do not integrate data with NCCI. Other states such as Wisconsin, Texas, New York, New Jersey, Indiana, and North Carolina, maintain their own rating bureaus but integrate with NCCI for multi-state employers. The experience modifier adjusts workers compensation insurance premiums for a particular employer based on a comparison of past losses of that employer to what is calculated to be "average" losses of other employers in that state in the same business, adjusted for size. To do this, experience modifier calculations use loss information reported in by an employer's past insurers. This is compared to a calculation of expected losses for a company in that line of work, in that particular state, and adjusted for the size of the employer. The calculation of expected losses utilizes past audited payroll information for a particular employer, by classification code and state. These payrolls are multiplied by Expected Loss Rates, which are calculated by rating bureaus based on past reported claims costs per classification. Errors in experience modifiers can occur if inaccurate information is reported to a rating bureau by a past insurer of an employer. Some states (Illinois and Tennessee) prohibit increases in experience modifiers once a workers compensation policy begins, even if the higher modifier has been correctly calculated under the rules. Most states allow increases in experience modifiers if done relatively early in the term of the workers compensation insurance policy, and most states prohibit increases in experience modifier late in the term of the policy. The detailed rules governing calculation of experience modifiers are developed by the various rating bureaus. 
Although all states use similar methodology, there can be differences in details in the formulas used by independent rating bureaus and the NCCI. In many NCCI states, the Experience Rating Adjustment plan is in place, allowing for a 70% reduction in the reportable amount of medical-only claims, that is, claims where there has been no payment to the worker for lost time, only for medical expenses. This gives employers an incentive to report all claims to their insurers, rather than trying to pay for medical-only claims out of pocket. Discounting medical-only claims in the experience modifier calculation greatly reduces the impact of medical-only claims on the modifier. Formula and calculations The formula primarily used by the NCCI is the following. A = Weight Factor G = Ballast I = Actual Primary Losses H = Actual Incurred Losses F = Actual Excess Losses (H-I) E = Expected Primary Losses D = Expected Incurred Losses C = Expected Excess Losses (D-E) The formula is broken down into three main categories or subsections for understanding. Primary Losses Primary losses show up as both I and E in the above formula; E is the "expected" primary losses, as opposed to the actual. This expected value is determined based on a company's payroll cost with some actuarial calculation. Stabilizing Value This is a calculation based on expected excess losses, a weighting factor, and a ballast factor. The weighting factor and ballast factor are determined from proprietary calculations that are not published publicly. Ratable Excess Using the weighting factor, the ratable excess is simply the excess losses times this factor. These three categories are summed up, with actual figures divided by expected figures; note that the stabilizing value does not change between the numerator and denominator. A note about losses In the EMR calculation there are four fundamental losses that are necessary for the calculation: D = Expected Incurred Losses E = Expected Primary Losses H = Actual Incurred Losses Claims under $2,000. I = Actual Primary Losses All claims including Actual Incurred Losses The losses that are not part of these fundamental four are: C = Expected Excess Losses F = Actual Excess Losses Examples Unemployment insurance is experience rated in the United States; companies that have more claims resulting from past workers face higher unemployment insurance rates. The logic of this approach is that these are the companies that are more likely to cause someone to be unemployed, so they should pay more into the pool from which unemployment compensation is paid. Unemployment insurance is financed by a payroll tax paid by employers. Experience rating in unemployment insurance is described as imperfect, due in large part to the fact that there are statutory maximum and minimum rates that an employer can receive without regard to its history of layoffs. If a worker is laid off, generally the increased costs to the employer due to the higher value of unemployment insurance tax rates are less than the UI benefits received by the worker.
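The formula itself is not reproduced in the text above, but the description of primary losses, stabilizing value, and ratable excess matches the standard NCCI split-rating form mod = (I + A*F + (1 - A)*C + G) / (E + A*C + (1 - A)*C + G). The sketch below computes a modifier under that assumption; all figures, including the weight and ballast factors, are invented for illustration and would in practice come from NCCI tables and the employer's loss history.

```python
# Hedged sketch of an experience modifier using the standard NCCI split-rating
# form implied by the article's description. All inputs are invented examples.

def experience_mod(I, F, E, C, A, G):
    """I, F: actual primary/excess losses; E, C: expected primary/excess losses;
    A: weighting factor; G: ballast."""
    stabilizing = (1 - A) * C + G           # identical in numerator and denominator
    numerator = I + A * F + stabilizing     # actual primary + ratable excess + stabilizing value
    denominator = E + A * C + stabilizing   # expected primary + ratable excess + stabilizing value
    return numerator / denominator

# Worse-than-expected primary losses push the modifier above 1.
mod = experience_mod(I=40_000, F=20_000, E=20_000, C=60_000, A=0.2, G=25_000)
print(round(mod, 2))   # approximately 1.11
```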
Experience modifier
[ "Mathematics" ]
1,275
[ "Applied mathematics", "Actuarial science" ]
3,963,004
https://en.wikipedia.org/wiki/Knaster%E2%80%93Kuratowski%E2%80%93Mazurkiewicz%20lemma
The Knaster–Kuratowski–Mazurkiewicz lemma is a basic result in mathematical fixed-point theory published in 1929 by Knaster, Kuratowski and Mazurkiewicz. The KKM lemma can be proved from Sperner's lemma and can be used to prove the Brouwer fixed-point theorem. Statement Let be an -dimensional simplex with n vertices labeled as . A KKM covering is defined as a set of closed sets such that for any , the convex hull of the vertices corresponding to is covered by . The KKM lemma says that in every KKM covering, the common intersection of all n sets is nonempty, i.e: Example When , the KKM lemma considers the simplex which is a triangle, whose vertices can be labeled 1, 2 and 3. We are given three closed sets such that: covers vertex 1, covers vertex 2, covers vertex 3. The edge 12 (from vertex 1 to vertex 2) is covered by the sets and , the edge 23 is covered by the sets and , the edge 31 is covered by the sets and . The union of all three sets covers the entire triangle The KKM lemma states that the sets have at least one point in common. The lemma is illustrated by the picture on the right, in which set #1 is blue, set #2 is red and set #3 is green. The KKM requirements are satisfied, since: Each vertex is covered by a unique color. Each edge is covered by the two colors of its two vertices. The triangle is covered by all three colors. The KKM lemma states that there is a point covered by all three colors simultaneously; such a point is clearly visible in the picture. Note that it is important that all sets are closed, i.e., contain their boundary. If, for example, the red set is not closed, then it is possible that the central point is contained only in the blue and green sets, and then the intersection of all three sets may be empty. Equivalent results Generalizations Rainbow KKM lemma (Gale) David Gale proved the following generalization of the KKM lemma. Suppose that, instead of one KKM covering, we have n different KKM coverings: . Then, there exists a permutation of the coverings with a non-empty intersection, i.e: . The name "rainbow KKM lemma" is inspired by Gale's description of his lemma:"A colloquial statement of this result is... if each of three people paint a triangle red, white and blue according to the KKM rules, then there will be a point which is in the red set of one person, the white set of another, the blue of the third".The rainbow KKM lemma can be proved using a rainbow generalization of Sperner's lemma. The original KKM lemma follows from the rainbow KKM lemma by simply picking n identical coverings. Connector-free lemma (Bapat) A connector of a simplex is a connected set that touches all n faces of the simplex. A connector-free covering is a covering in which no contains a connector. Any KKM covering is a connector-free covering, since in a KKM covering, no even touches all n faces. However, there are connector-free coverings that are not KKM coverings. An example is illustrated at the right. There, the red set touches all three faces, but it does not contain any connector, since no connected component of it touches all three faces. A theorem of Ravindra Bapat, generalizing Sperner's lemma, implies the KKM lemma extends to connector-free coverings (he proved his theorem for ). The connector-free variant also has a permutation variant, so that both these generalizations can be used simultaneously. KKMS theorem The KKMS theorem is a generalization of the KKM lemma by Lloyd Shapley. It is useful in economics, especially in cooperative game theory. 
While a KKM covering contains n closed sets, a KKMS covering contains closed sets - indexed by the nonempty subsets of (equivalently: by nonempty faces of ). For any , the convex hull of the vertices corresponding to should be covered by the union of sets corresponding to subsets of , that is: . Any KKM covering is a special case of a KKMS covering. In a KKM covering, the n sets corresponding to singletons are nonempty, while the other sets are empty. However, there are many other KKMS coverings. in general, it is not true that the common intersection of all sets in a KKMS covering is nonempty; this is illustrated by the special case of a KKM covering, in which most sets are empty. The KKMS theorem says that, in every KKMS covering, there is a balanced collection of , such that the intersection of sets indexed by is nonempty: It remains to explain what a "balanced collection" is. A collection of subsets of is called balanced if there is a weight function on (assigning a weight to every ), such that, for each element , the sum of weights of all subsets containing is exactly 1. For example, suppose . Then: The collection {{1}, {2}, {3}} is balanced: choose all weights to be 1. The same is true for any collection in which each element appears exactly once, such as the collection {{1,2},{3}} or the collection { {1,2,3} }. The collection {{1,2}, {2,3}, {3,1}} is balanced: choose all weights to be 1/2. The same is true for any collection in which each element appears exactly twice. The collection {{1,2}, {2,3}} is not balanced, since for any choice of positive weights, the sum for element 2 will be larger than the sum for element 1 or 3, so it is not possible that all sums equal 1. The collection {{1,2}, {2,3}, {1}} is balanced: choose . In hypergraph terminology, a collection B is balanced with respect to its ground-set V, iff the hypergraph with vertex-set V and edge-set B admits a perfect fractional matching. The KKMS theorem implies the KKM lemma. Suppose we have a KKM covering , for . Construct a KKMS covering as follows: whenever ( is a singleton that contains only element ). otherwise. The KKM condition on the original covering implies the KKMS condition on the new covering . Therefore, there exists a balanced collection such that the corresponding sets in the new covering have nonempty intersection. But the only possible balanced collection is the collection of all singletons; hence, the original covering has nonempty intersection. The KKMS theorem has various proofs. Reny and Wooders proved that the balanced set can also be chosen to be partnered. Zhou proved a variant of the KKMS theorem where the covering consists of open sets rather than closed sets. Polytopal KKMS theorem (Komiya) Hidetoshi Komiya generalized the KKMS theorem from simplices to polytopes. Let P be any compact convex polytope. Let be the set of nonempty faces of P. A Komiya covering of P is a family of closed sets such that for every face : Komiya's theorem says that for every Komiya covering of P, there is a balanced collection , such that the intersection of sets indexed by is nonempty: Komiya's theorem also generalizes the definition of a balanced collection: instead of requiring that there is a weight function on such that the sum of weights near each vertex of P is 1, we start by choosing any set of points . A collection is called balanced with respect to iff , that is, the point assigned to the entire polygon P is a convex combination of the points assigned to the faces in the collection B. 
The KKMS theorem is a special case of Komiya's theorem in which the polytope and is the barycenter of the face F (in particular, is the barycenter of , which is the point ). Boundary conditions (Musin) Oleg R. Musin proved several generalizations of the KKM lemma and KKMS theorem, with boundary conditions on the coverings. The boundary conditions are related to homotopy. See also A common generalization of the KKMS theorem and Carathéodory's theorem. References External links See the proof of KKM Lemma in Planet Math. Fixed points (mathematics) Lemmas
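The balanced collections appearing in the KKMS theorem can be tested mechanically: a collection is balanced exactly when non-negative weights exist whose sums over the sets containing each element all equal 1, i.e. when the corresponding hypergraph has a perfect fractional matching. The sketch below performs this feasibility check with linear programming; it only illustrates the definition and the article's three-element examples, and is not tied to any proof of the theorem.

```python
# Decide whether a collection of subsets of {1, ..., n} is balanced:
# we need weights w_S >= 0 with sum over {S : i in S} of w_S = 1 for every i.
from scipy.optimize import linprog

def is_balanced(n, collection):
    A_eq = [[1.0 if i in S else 0.0 for S in collection] for i in range(1, n + 1)]
    b_eq = [1.0] * n
    c = [0.0] * len(collection)   # pure feasibility problem, no objective to optimise
    result = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(collection))
    return result.success

# Examples from the article, with n = 3:
print(is_balanced(3, [{1, 2}, {2, 3}, {3, 1}]))   # True  (weights 1/2, 1/2, 1/2)
print(is_balanced(3, [{1, 2}, {2, 3}]))           # False (element 2 would be over-weighted)
print(is_balanced(3, [{1, 2}, {2, 3}, {1}]))      # True  (weights 0, 1, 1)
```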
Knaster–Kuratowski–Mazurkiewicz lemma
[ "Mathematics" ]
1,839
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical theorems", "Fixed points (mathematics)", "Fixed-point theorems", "Theorems in topology", "Topology", "Mathematical problems", "Lemmas", "Dynamical systems" ]
3,963,759
https://en.wikipedia.org/wiki/Chen%20Chunxian
Chen Chunxian (; 1934 – 11 August 2004) was a Chinese theoretical physicist and businessman. He was the founder of Zhongguancun in Beijing, often called China's Silicon Valley. He initiated the project to create China's first tokamak device in 1973. Biography Chen Chunxian was born in 1934 in Sichuan Province, China. In 1958, he graduated from the Department of Physics of Moscow State University. From 1959 to 1986, he was a researcher at the Institute of Physics, Chinese Academy of Sciences. He initiated the development of China's first tokamak device and recruited engineer Yan Luguang to the project. In 1973, their collaboration created the CT-6. In 1979, Chen visited Boston and Silicon Valley in the United States and was greatly impressed. On October 23, 1980, he founded the first non-governmental entity in Zhongguancun, called the "Advanced Technology Service Association". (Only government-run entities can be called "company" in China.) Chen's company was shut down after an investigation, but he received validation from the central government in 1983, when Hu Yaobang mentioned him in a national statement. Many independent high-tech companies were founded in Zhongguancun, including Lenovo. In his later years, Chen lived in poor conditions and without health care. He died on 11 August 2004. References 1934 births 2004 deaths Businesspeople from Chengdu Chinese expatriates in the Soviet Union Moscow State University alumni Physicists from Sichuan Theoretical physicists Zhongguancun
Chen Chunxian
[ "Physics" ]
319
[ "Theoretical physics", "Theoretical physicists" ]
3,967,296
https://en.wikipedia.org/wiki/String%20diagram
String diagrams are a formal graphical language for representing morphisms in monoidal categories, or more generally 2-cells in 2-categories. They are a prominent tool in applied category theory. When interpreted in the monoidal category of vector spaces and linear maps with the tensor product, string diagrams are called tensor networks or Penrose graphical notation. This has led to the development of categorical quantum mechanics where the axioms of quantum theory are expressed in the language of monoidal categories. History Günter Hotz gave the first mathematical definition of string diagrams in order to formalise electronic circuits. However, the invention of string diagrams is usually credited to Roger Penrose, with Feynman diagrams also described as a precursor. They were later characterised as the arrows of free monoidal categories in a seminal article by André Joyal and Ross Street. While the diagrams in these first articles were hand-drawn, the advent of typesetting software such as LaTeX and PGF/TikZ made the publication of string diagrams more wide-spread. The existential graphs and diagrammatic reasoning of Charles Sanders Peirce are arguably the oldest form of string diagrams, they are interpreted in the monoidal category of finite sets and relations with the Cartesian product. The lines of identity of Peirce's existential graphs can be axiomatised as a Frobenius algebra, the cuts are unary operators on homsets that axiomatise logical negation. This makes string diagrams a sound and complete two-dimensional deduction system for first-order logic, invented independently from the one-dimensional syntax of Gottlob Frege's Begriffsschrift. Intuition String diagrams are made of boxes , which represent processes, with a list of wires coming in at the top and at the bottom, which represent the input and output systems being processed by the box . Starting from a collection of wires and boxes, called a signature, one may generate the set of all string diagrams by induction: each box is a string diagram, for each list of wires , the identity is a string diagram representing the process which does nothing to its input system, it is drawn as a bunch of parallel wires, for each pair of string diagrams and , their tensor is a string diagram representing the parallel composition of processes, it is drawn as the horizontal concatenation of the two diagrams, for each pair of string diagrams and , their composition is a string diagram representing the sequential composition of processes, it is drawn as the vertical concatenation of the two diagrams. Definition Algebraic Let the Kleene star denote the free monoid, i.e. the set of lists with elements in a set . A monoidal signature is given by: a set of generating objects, the lists of generating objects in are also called types, a set of generating arrows, also called boxes, a pair of functions which assign a domain and codomain to each box, i.e. the input and output types. A morphism of monoidal signature is a pair of functions and which is compatible with the domain and codomain, i.e. such that and . Thus we get the category of monoidal signatures and their morphisms. There is a forgetful functor which sends a monoidal category to its underlying signature and a monoidal functor to its underlying morphism of signatures, i.e. it forgets the identity, composition and tensor. The free functor , i.e. the left adjoint to the forgetful functor, sends a monoidal signature to the free monoidal category it generates. 
String diagrams (with generators from ) are arrows in the free monoidal category . The interpretation in a monoidal category is a defined by a monoidal functor , which by freeness is uniquely determined by a morphism of monoidal signatures . Intuitively, once the image of generating objects and arrows are given, the image of every diagram they generate is fixed. Geometric A topological graph, also called a one-dimensional cell complex, is a tuple of a Hausdorff space , a closed discrete subset of nodes and a set of connected components called edges, each homeomorphic to an open interval with boundary in and such that . A plane graph between two real numbers with is a finite topological graph embedded in such that every point is also a node and belongs to the closure of exactly one edge in . Such points are called outer nodes, they define the domain and codomain of the string diagram, i.e. the list of edges that are connected to the top and bottom boundary. The other nodes are called inner nodes. A plane graph is progressive, also called recumbent, when the vertical projection is injective for every edge . Intuitively, the edges in a progressive plane graph go from top to bottom without bending backward. In that case, each edge can be given a top-to-bottom orientation with designated nodes as source and target. One can then define the domain and codomain of each inner node , given by the list of edges that have source and target. A plane graph is generic when the vertical projection is injective, i.e. no two inner nodes are at the same height. In that case, one can define a list of the inner nodes ordered from top to bottom. A progressive plane graph is labeled by a monoidal signature if it comes equipped with a pair of functions from edges to generating objects and from inner nodes to generating arrows, in a way compatible with domain and codomain. A deformation of plane graphs is a continuous map such that the image of defines a plane graph for all , for all , if is an inner node for some it is inner for all . A deformation is progressive (generic, labeled) if is progressive (generic, labeled) for all . Deformations induce an equivalence relation with if and only if there is some with and . String diagrams are equivalence classes of labeled progressive plane graphs. Indeed, one can define: the identity diagram as a set of parallel edges labeled by some type , the composition of two diagrams as their vertical concatenation with the codomain of the first identified with the domain of the second, the tensor of two diagrams as their horizontal concatenation. Combinatorial While the geometric definition makes explicit the link between category theory and low-dimensional topology, a combinatorial definition is necessary to formalise string diagrams in computer algebra systems and use them to define computational problems. One such definition is to define string diagrams as equivalence classes of well-typed formulae generated by the signature, identity, composition and tensor. In practice, it is more convenient to encode string diagrams as formulae in generic form, which are in bijection with the labeled generic progressive plane graphs defined above. Fix a monoidal signature . A layer is defined as a triple of a type on the left, a box in the middle and a type on the right. Layers have a domain and codomain defined in the obvious way. This forms a directed multigraph, also known as a quiver, with the types as vertices and the layers as edges. 
A string diagram is encoded as a path in this multigraph, i.e. it is given by: a domain as starting point a length , a list of such that and for all . In fact, the explicit list of layers is redundant, it is enough to specify the length of the type to the left of each layer, known as the offset. The whiskering of a diagram by a type is defined as the concatenation to the right of each layer and symmetrically for the whiskering on the left. One can then define: the identity diagram with and , the composition of two diagrams as the concatenation of their list of layers, the tensor of two diagrams as the composition of whiskerings . Note that because the diagram is in generic form (i.e. each layer contains exactly one box) the definition of tensor is necessarily biased: the diagram on the left hand-side comes above the one on the right-hand side. One could have chosen the opposite definition . Two diagrams are equal (up to the axioms of monoidal categories) whenever they are in the same equivalence class of the congruence relation generated by the interchanger:That is, if the boxes in two consecutive layers are not connected then their order can be swapped. Intuitively, if there is no communication between two parallel processes then the order in which they happen is irrelevant. The word problem for free monoidal categories, i.e. deciding whether two given diagrams are equal, can be solved in polynomial time. The interchanger is a confluent rewriting system on the subset of boundary connected diagrams, i.e. whenever the plane graphs have no more than one connected component which is not connected to the domain or codomain and the Eckmann–Hilton argument does not apply. Extension to 2-categories The idea is to represent structures of dimension d by structures of dimension 2-d, using Poincaré duality. Thus, an object is represented by a portion of plane, a 1-cell is represented by a vertical segment—called a string—separating the plane in two (the right part corresponding to A and the left one to B), a 2-cell is represented by an intersection of strings (the strings corresponding to f above the link, the strings corresponding to g below the link). The parallel composition of 2-cells corresponds to the horizontal juxtaposition of diagrams and the sequential composition to the vertical juxtaposition of diagrams. A monoidal category is equivalent to a 2-category with a single 0-cell. Intuitively, going from monoidal categories to 2-categories amounts to adding colours to the background of string diagrams. Examples The snake equation Consider an adjunction between two categories and where is left adjoint of and the natural transformations and are respectively the unit and the counit. The string diagrams corresponding to these natural transformations are: The string corresponding to the identity functor is drawn as a dotted line and can be omitted. The definition of an adjunction requires the following equalities: The first one is depicted as A monoidal category where every object has a left and right adjoint is called a rigid category. String diagrams for rigid categories can be defined as non-progressive plane graphs, i.e. the edges can bend backward. In the context of categorical quantum mechanics, this is known as the snake equation. The category of Hilbert spaces is rigid, this fact underlies the proof of correctness for the quantum teleportation protocol. The unit and counit of the adjunction are an abstraction of the Bell state and the Bell measurement respectively. 
If Alice and Bob share two qubits Y and Z in an entangled state and Alice performs a (post-selected) entangled measurement between Y and another qubit X, then this qubit X will be teleported from Alice to Bob: quantum teleportation is an identity morphism. The same equation appears in the definition of pregroup grammars where it captures the notion of information flow in natural language semantics. This observation has led to the development of the DisCoCat framework and quantum natural language processing. Hierarchy of graphical languages Many extensions of string diagrams have been introduced to represent arrows in monoidal categories with extra structure, forming a hierarchy of graphical languages which is classified in Selinger's Survey of graphical languages for monoidal categories. Braided monoidal categories with 3-dimensional diagrams, a generalisation of braid groups. Symmetric monoidal categories with 4-dimensional diagrams where edges can cross, a generalisation of the symmetric group. Ribbon categories with 3-dimensional diagrams where the edges are undirected, a generalisation of knot diagrams. Compact closed categories with 4-dimensional diagrams where the edges are undirected, a generalisation of Penrose graphical notation. Dagger categories where every diagram has a horizontal reflection. List of applications String diagrams have been used to formalise the following objects of study. Concurrency theory Artificial neural networks Game theory Bayesian probability Consciousness Markov kernels Signal-flow graphs Conjunctive queries Bidirectional transformations Categorical quantum mechanics Quantum circuits, measurement-based quantum computing and quantum error correction, see ZX-calculus Natural language processing, see DisCoCat Quantum natural language processing See also Proof nets, a generalisation of string diagrams used to denote proofs in linear logic Existential graphs, a precursor of string diagrams used to denote formulae in first-order logic Penrose graphical notation and Feynman diagrams, two precursors of string diagrams in physics Tensor networks, the interpretation of string diagrams in vector spaces, linear maps and tensor product References External links DisCoPy, a Python toolkit for computing with string diagrams Higher category theory Monoidal categories
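The combinatorial encoding described earlier, where a diagram in generic form is a list of layers, each given by an offset and a box, can be sketched directly. The toy code below is a hand-rolled illustration of that encoding and of when the interchanger applies; it is not the DisCoPy API, and the box names and types are invented.

```python
# Toy encoding of string diagrams in generic form: a layer is (offset, box), the
# offset being the number of wires to the left of the box. Two consecutive layers
# can be interchanged when their boxes act on disjoint wires (no connection).

class Box:
    def __init__(self, name, dom, cod):   # dom/cod: lists of wire labels
        self.name, self.dom, self.cod = name, dom, cod

def can_interchange(layer1, layer2):
    """True if the box of the second layer lies entirely to the left or to the
    right of the first layer's box, so that the two layers may be swapped."""
    off1, box1 = layer1
    off2, box2 = layer2
    entirely_left = off2 + len(box2.dom) <= off1
    entirely_right = off2 >= off1 + len(box1.cod)
    return entirely_left or entirely_right

f = Box("f", dom=["x"], cod=["y"])
g = Box("g", dom=["z"], cod=["w"])
# f acts on wire 0, then g acts on wire 1: the boxes are not connected, so the
# interchanger allows the two layers to be swapped.
print(can_interchange((0, f), (1, g)))   # True
```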
String diagram
[ "Mathematics" ]
2,609
[ "Higher category theory", "Monoidal categories", "Mathematical structures", "Category theory" ]
35,933,786
https://en.wikipedia.org/wiki/Flufenacet
Flufenacet is an oxyacetanilide herbicide applied pre-emergence, before the crop has emerged. In the model plant Arabidopsis thaliana it causes symptoms similar to those of the fiddlehead mutant. References External links Herbicides Thiadiazoles Trifluoromethyl compounds Anilides 4-Fluorophenyl compounds
Flufenacet
[ "Biology" ]
69
[ "Herbicides", "Biocides" ]
35,933,915
https://en.wikipedia.org/wiki/2-Methoxypropene
2-Methoxypropene is an ether with the chemical formula C4H8O. It is a reagent used in organic synthesis to introduce a protecting group for alcohols and to convert diols to acetonides. 2-Methoxypropene can be prepared by the elimination of methanol from dimethoxypropane, or by the addition of methanol to propyne or allene. References Reagents for organic chemistry Ethers Isopropenyl compounds
2-Methoxypropene
[ "Chemistry" ]
109
[ "Isopropenyl compounds", "Functional groups", "Reagents for organic chemistry" ]
35,936,954
https://en.wikipedia.org/wiki/Quantum%20cylindrical%20quadrupole
The quantum cylindrical quadrupole is a solution to the Schrödinger equation, where is the reduced Planck constant, is the mass of the particle, is the imaginary unit and is time. One peculiar potential that can be solved exactly is when the electric quadrupole moment is the dominant term of an infinitely long cylinder of charge. It can be shown that the Schrödinger equation is solvable for a cylindrically symmetric electric quadrupole, thus indicating that the quadrupole term of an infinitely long cylinder can be quantized. In the physics of classical electrodynamics, it can be shown that the scalar potential and associated mechanical potential energy of a cylindrically symmetric quadrupole is as follows: (SI units) (SI units) Using cylindrical symmetry, the time independent Schrödinger equation becomes the following: Using separation of variables, the above equation can be written as two ordinary differential equations in both the radial and azimuthal directions. The radial equation is Bessel's equation as can be seen below. If one changes variables to , Bessel's equation is exactly obtained. Azimuthal equation The azimuthal equation is given by This is the Mathieu equation, with and . The solution of the Mathieu equation is expressed in terms of the Mathieu cosine and the Mathieu sine for unique a and q. This indicates that the quadrupole moment can be quantized in order of the Mathieu characteristic values and . In general, Mathieu functions are not periodic. The term q must be that of a characteristic value in order for Mathieu functions to be periodic. It can be shown that the solution of the radial equation highly depends on what characteristic values are seen in this case. See also Cylindrical multipole moments References External links MULTIPOLE EXPANSION The nonvanishing coefficients of the dipole moment expansion in axially symmetric molecules Quantum mechanics
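The azimuthal equation above is Mathieu's equation, whose 2π-periodic solutions exist only at the Mathieu characteristic values. A minimal Python sketch using SciPy's Mathieu routines is shown below; the parameter q, which stands in for the quadrupole strength, is an arbitrary illustrative number rather than a value derived from the potential discussed in the article.

```python
import numpy as np
from scipy.special import mathieu_a, mathieu_b, mathieu_cem

# q plays the role of the quadrupole strength in the azimuthal equation;
# the value below is purely illustrative.
q = 1.5

# Characteristic values a_m (even solutions ce_m) and b_m (odd solutions se_m):
# only for these values of the separation constant are the solutions periodic.
for m in range(4):
    print(f"a_{m}(q={q}) = {mathieu_a(m, q):.6f}")
for m in range(1, 4):
    print(f"b_{m}(q={q}) = {mathieu_b(m, q):.6f}")

# Evaluate the periodic Mathieu cosine ce_2 on a grid of angles
# (SciPy expects the angle in degrees) and show its 360-degree periodicity.
phi_deg = np.linspace(0.0, 360.0, 7)
ce2, _ = mathieu_cem(2, q, phi_deg)
print("ce_2 values:", np.round(ce2, 6))
```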
Quantum cylindrical quadrupole
[ "Physics" ]
387
[ "Quantum mechanics", "Eponymous equations of physics", "Equations of physics", "Schrödinger equation" ]
35,937,890
https://en.wikipedia.org/wiki/Locally%20compact%20field
In algebra, a locally compact field is a topological field whose topology forms a locally compact Hausdorff space. These kinds of fields were originally introduced in p-adic analysis since the fields are locally compact topological spaces constructed from the norm on . The topology (and metric space structure) is essential because it allows one to construct analogues of algebraic number fields in the p-adic context. Structure Finite dimensional vector spaces One of the useful structure theorems for vector spaces over locally compact fields is that the finite dimensional vector spaces have only an equivalence class of norm: the sup norm pg. 58-59. Finite field extensions Given a finite field extension over a locally compact field , there is at most one unique field norm on extending the field norm ; that is,for all which is in the image of . Note this follows from the previous theorem and the following trick: if are two equivalent norms, andthen for a fixed constant there exists an such thatfor all since the sequence generated from the powers of converge to . Finite Galois extensions If the index of the extension is of degree and is a Galois extension, (so all solutions to the minimal polynomial of any is also contained in ) then the unique field norm can be constructed using the field norm pg. 61. This is defined asNote the n-th root is required in order to have a well-defined field norm extending the one over since given any in the image of its norm issince it acts as scalar multiplication on the -vector space . Examples Finite fields All finite fields are locally compact since they can be equipped with the discrete topology. In particular, any field with the discrete topology is locally compact since every point is the neighborhood of itself, and also the closure of the neighborhood, hence is compact. Local fields The main examples of locally compact fields are the p-adic rationals and finite extensions . Each of these are examples of local fields. Note the algebraic closure and its completion are not locally compact fields pg. 72 with their standard topology. Field extensions of Qp Field extensions can be found by using Hensel's lemma. For example, has no solutions in since only equals zero mod if , but has no solutions mod . Hence is a quadratic field extension. See also References External links Inequality trick https://math.stackexchange.com/a/2252625 Topology
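The extension formula that the passage above describes via the field norm and an n-th root is usually written as follows. This is a LaTeX sketch of the standard formula with notation of my own choosing, since the article's original symbols are not preserved here.

```latex
% Sketch of the standard extension formula: for a finite (Galois) extension L/K
% of degree n of a locally compact field K, the unique norm on L extending the
% one on K is built from the field norm N_{L/K}; the n-th root makes the
% restriction to K agree with the original norm.
\[
  |x|_{L} \;=\; \left| N_{L/K}(x) \right|_{K}^{1/n}, \qquad x \in L .
\]
% For a \in K embedded in L, N_{L/K}(a) = a^{n}, hence |a|_{L} = |a|_{K}.
```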
Locally compact field
[ "Physics", "Mathematics" ]
484
[ "Spacetime", "Topology", "Space", "Geometry" ]
35,938,255
https://en.wikipedia.org/wiki/Linear%20topology
In algebra, a linear topology on a left A-module M is a topology on M that is invariant under translations and admits a fundamental system of neighborhoods of 0 consisting of submodules of M. If there is such a topology, M is said to be linearly topologized. If A is given the discrete topology, then M becomes a topological A-module with respect to a linear topology. The notion is used more commonly in algebra than in analysis. Indeed, "[t]opological vector spaces with linear topology form a natural class of topological vector spaces over discrete fields, analogous to the class of locally convex topological vector spaces over the normed fields of real or complex numbers in functional analysis." The term "linear topology" goes back to Lefschetz' work. Examples and non-examples For each prime number p, the ring of integers Z is linearly topologized by the fundamental system of neighborhoods Z ⊃ pZ ⊃ p^2Z ⊃ ⋯. Topological vector spaces appearing in functional analysis are typically not linearly topologized (since subspaces do not form a neighborhood system of the origin). See also References Bourbaki, N. (1972). Commutative algebra (Vol. 8). Hermann. Topology Topological algebra Topological groups
Linear topology
[ "Physics", "Mathematics" ]
239
[ "Algebra stubs", "Topological groups", "Space (mathematics)", "Topological spaces", "Fields of abstract algebra", "Topology", "Space", "Geometry", "Topological algebra", "Spacetime", "Algebra" ]
35,939,958
https://en.wikipedia.org/wiki/Paralytic%20peptides
Paralytic peptides are a family of short (23 amino acids) insect peptides that halt metamorphosis of insects from larvae to pupae. These peptides contain one disulphide bridge. The family includes growth-blocking peptide (GBP) of Mythimna separata (Oriental armyworm) and the paralytic peptides from Manduca sexta (tobacco hawkmoth), Heliothis virescens (noctuid moth), and Spodoptera exigua (beet armyworm) as well as plasmatocyte-spreading peptide (PSP1). References Protein families Protein toxins Peptides
Paralytic peptides
[ "Chemistry", "Biology" ]
139
[ "Biomolecules by chemical classification", "Protein stubs", "Toxins by chemical classification", "Protein toxins", "Protein classification", "Biochemistry stubs", "Molecular biology", "Protein families", "Peptides" ]
31,733,633
https://en.wikipedia.org/wiki/C3H3NO3
{{DISPLAYTITLE:C3H3NO3}} The molecular formula C3H3NO3 (molar mass: 101.061 g/mol) may refer to: 2,4-Oxazolidinedione Glycine N-carboxyanhydride (2,5-Oxazolidinedione) Molecular formulas
C3H3NO3
[ "Physics", "Chemistry" ]
78
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
31,734,604
https://en.wikipedia.org/wiki/Sorbinil
Sorbinil (INN) is an aldose reductase inhibitor being investigated for treatment of diabetic complications including neuropathy and retinopathy. Aldose reductase is an enzyme present in lens and brain that removes excess glucose by converting it to sorbitol. Sorbitol accumulation can lead to the development of cataracts in the lens and neuropathy in peripheral nerves. Sorbinil has been shown to inhibit aldose reductase in human brain and placenta and calf and rat lens. Sorbinil reduced sorbitol accumulation in rat lens and sciatic nerve of diabetic rats orally administered 0.25 mg/kg sorbinil. References Aldose reductase inhibitors Hydantoins Fluoroarenes Drugs developed by Pfizer Spiro compounds
Sorbinil
[ "Chemistry" ]
168
[ "Organic compounds", "Spiro compounds" ]
31,735,884
https://en.wikipedia.org/wiki/ZNF300
Zinc finger protein 300 is a protein that in humans is encoded by the ZNF300 gene. The protein encoded by this gene is a C2H2-type zinc finger DNA-binding protein and a likely transcription factor. It is antisense to the human gene C16orf71, suggesting the possibility of regulated alternative expression. Clinical relevance It is associated with Crohn's disease. References Further reading Transcription factors
ZNF300
[ "Chemistry", "Biology" ]
88
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
31,737,243
https://en.wikipedia.org/wiki/Cyclotruncated%205-simplex%20honeycomb
In five-dimensional Euclidean geometry, the cyclotruncated 5-simplex honeycomb or cyclotruncated hexateric honeycomb is a space-filling tessellation (or honeycomb). It is composed of 5-simplex, truncated 5-simplex, and bitruncated 5-simplex facets in a ratio of 1:1:1. Structure Its vertex figure is an elongated 5-cell antiprism, two parallel 5-cells in dual configurations, connected by 10 tetrahedral pyramids (elongated 5-cells) from the cell of one side to a point on the other. The vertex figure has 8 vertices and 12 5-cells. It can be constructed as six sets of parallel hyperplanes that divide space. The hyperplane intersections generate cyclotruncated 5-cell honeycomb divisions on each hyperplane. Related polytopes and honeycombs See also Regular and uniform honeycombs in 5-space: 5-cubic honeycomb 5-demicubic honeycomb 5-simplex honeycomb Omnitruncated 5-simplex honeycomb Notes References Norman Johnson Uniform Polytopes, Manuscript (1991) Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] (1.9 Uniform space-fillings) (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] Honeycombs (geometry) 6-polytopes
Cyclotruncated 5-simplex honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
392
[ "Tessellation", "Crystallography", "Honeycombs (geometry)", "Symmetry" ]
31,737,675
https://en.wikipedia.org/wiki/Omnitruncated%205-simplex%20honeycomb
In five-dimensional Euclidean geometry, the omnitruncated 5-simplex honeycomb or omnitruncated hexateric honeycomb is a space-filling tessellation (or honeycomb). It is composed entirely of omnitruncated 5-simplex facets. The facets of all omnitruncated simplectic honeycombs are called permutahedra and can be positioned in n+1 space with integral coordinates, permutations of the whole numbers (0,1,..,n). A5* lattice The A5* lattice is the union of six A5 lattices, and is the dual vertex arrangement to the omnitruncated 5-simplex honeycomb; the Voronoi cell of this lattice is therefore an omnitruncated 5-simplex. Related polytopes and honeycombs Projection by folding The omnitruncated 5-simplex honeycomb can be projected into the 3-dimensional omnitruncated cubic honeycomb by a geometric folding operation that maps two pairs of mirrors into each other, sharing the same 3-space vertex arrangement: See also Regular and uniform honeycombs in 5-space: 5-cube honeycomb 5-demicube honeycomb 5-simplex honeycomb Notes References Norman Johnson Uniform Polytopes, Manuscript (1991) Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] (1.9 Uniform space-fillings) (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] Honeycombs (geometry) 6-polytopes
Omnitruncated 5-simplex honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
445
[ "Tessellation", "Crystallography", "Honeycombs (geometry)", "Symmetry" ]
31,738,070
https://en.wikipedia.org/wiki/Omnitruncated%20simplicial%20honeycomb
In geometry an omnitruncated simplicial honeycomb or omnitruncated n-simplex honeycomb is an n-dimensional uniform tessellation, based on the symmetry of the affine Coxeter group. Each is composed of omnitruncated simplex facets. The vertex figure for each is an irregular n-simplex. The facets of an omnitruncated simplicial honeycomb are called permutahedra and can be positioned in n+1 space with integral coordinates, permutations of the whole numbers (0,1,..,n). Projection by folding The (2n-1)-simplex honeycombs can be projected into the n-dimensional omnitruncated hypercubic honeycomb by a geometric folding operation that maps two pairs of mirrors into each other, sharing the same vertex arrangement: See also Hypercubic honeycomb Alternated hypercubic honeycomb Quarter hypercubic honeycomb Simplectic honeycomb Truncated simplicial honeycomb References George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Branko Grünbaum, Uniform tilings of 3-space. Geombinatorics 4(1994), 49 - 56. Norman Johnson Uniform Polytopes, Manuscript (1991) Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] (1.9 Uniform space-fillings) (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] Honeycombs (geometry) Polytopes Truncated tilings
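Since the facets are permutahedra whose vertices are exactly the permutations of the whole numbers (0, 1, ..., n), they are easy to enumerate. A minimal Python sketch is given below; the function name is illustrative.

```python
from itertools import permutations

def permutahedron_vertices(n):
    """Vertices of the n-dimensional permutahedron, embedded in (n+1)-space:
    all permutations of the whole numbers (0, 1, ..., n)."""
    return sorted(set(permutations(range(n + 1))))

verts = permutahedron_vertices(5)   # facet of the omnitruncated 5-simplex honeycomb
print(len(verts))                   # 6! = 720 vertices

# All vertices share the same coordinate sum, so they lie in a hyperplane of
# (n+1)-space; the honeycomb tiles that n-dimensional hyperplane.
assert len({sum(v) for v in verts}) == 1
print(verts[:3])
```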
Omnitruncated simplicial honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
474
[ "Honeycombs (geometry)", "Truncated tilings", "Tessellation", "Crystallography", "Symmetry" ]
24,645,298
https://en.wikipedia.org/wiki/DnaN
dnaN is the gene that codes for the DNA clamp (also known as β sliding clamp) of DNA polymerase III in prokaryotes. The β clamp physically locks Pol III onto a DNA strand during replication to help increase its processivity. The eukaryotic equivalent to the β clamp is PCNA. See also Beta clamp References DNA replication
DnaN
[ "Chemistry", "Biology" ]
78
[ "DNA replication", "Molecular genetics", "Genetics techniques" ]
24,645,636
https://en.wikipedia.org/wiki/DNA%20polymerase%20III%20subunit%20gamma/tau
The τ and γ subunits are part of the DNA polymerase III holoenzyme of prokaryotes. The protein family is characterized by the well-conserved first N-terminal domain, approx. 365 amino acids. The eukaryotic equivalent to the DNA clamp loader is replication factor C, with the subunits RFC1, RFC2, RFC3, RFC4, and RFC5. The domain is also found in plants as gene STICHEL (STI), with similarity to cyanobacterial sequences. However, STI in plants is nuclear-localized and does not participate in genome duplication. It seems to instead regulate branching. References Bacterial proteins Protein families DNA replication
DNA polymerase III subunit gamma/tau
[ "Chemistry", "Biology" ]
146
[ "Genetics techniques", "Protein classification", "Molecular biology stubs", "DNA replication", "Molecular genetics", "Molecular biology", "Protein families" ]
24,648,049
https://en.wikipedia.org/wiki/Self-assembling%20peptide
Self-assembling peptides are a category of peptides which undergo spontaneous assembling into ordered nanostructures. Originally described in 1993, these designer peptides have attracted interest in the field of nanotechnology for their potential for application in areas such as biomedical nanotechnology, tissue cell culturing, molecular electronics, and more. Effectively self-assembling peptides act as building blocks for various material and device applications. The essence of this technology is to replicate what nature does: to use molecular recognition processes to form ordered assemblies of building blocks capable of conducting biochemical activities. Background Peptides can serve as sturdy building blocks for a wide range of materials as they can be designed to combine with a range of other building blocks such as lipids, sugars, nucleic acids, metallic nanocrystals, and so on; this gives the peptides an edge over carbon nanotubes, which are another popular nanomaterial, as the carbon structure is unreactive. They also exhibit biocompatibility and molecular recognition; the latter is particularly useful as it enables specific selectivity for building ordered nanostructures. Additionally, peptides have superb resistance to extreme temperature, detergents, and denaturants. The ability of peptides to perform self-assembly allows them to be used as fabrication tools, which will continue to grow as a fundamental part of nanomaterials production. The self-assembling of peptides is facilitated through the molecules' structural and chemical compatibility with each other. The structures formed demonstrate physical and chemical stability. An advantage to using self-assembling peptides to build nanostructures in a bottom-up approach is that specific features can be incorporated; the peptides can be modified to serve specific functions. This approach means the final structures are made from the self-integration of small, simple building blocks. This approach is needed for nanoscale structure, as the top-down method of miniaturizing devices using sophisticated lithography and etching techniques has reached a physical limit. Moreover, the top-down approach applies mainly to silicon-based technology and cannot be used for biological developments. The peptide structure is organized hierarchically into four levels. The primary structure of a peptide is the sequence of the amino acids of the peptide chain. Amino acids are monomer molecules that carry a carboxyl and an amine functional group; a spectrum of other chemical groups are attached to different amino acids, such as thiols and alcohols. This facilitates the wide range of chemical interactions and, therefore, molecular recognitions that peptides are capable of; for designer self-assembling peptides, both natural and non-natural amino acids are used. They link together in a controlled manner to form short peptides, which link to form long polypeptide chains. Along these chains, the alternating amine (NH) and carbonyl (CO) groups are highly polar, and they readily form hydrogen bonds with each other. These hydrogen bonds bind peptide chains together to give rise to secondary structures. Stable secondary structures include the alpha-helices and beta-sheets. Unstable secondary structures are random loops, turns, and coils that are formed. The secondary structure that is formed depends on the primary structure; different sequences of the amino acids exhibit different preferences. 
Secondary structures usually fold, with a variety of loops and turns, into a tertiary structure. What differentiates the secondary structure from the tertiary structure is that the latter includes non-covalent interactions. The quaternary structure combines two or more different chains of polypeptide to form what is known as a protein sub-unit. The self-assembly process of the peptide chains includes dynamic—reassembly, which occurs repeatedly in a self-healing manner. The types of interactions that facilitate the reassembly of peptide structures include van der Waals forces, ionic bonds, hydrogen bonds, and hydrophobic forces. These forces also facilitate the molecular recognition function that the peptides encompass. These interactions work on the basis of preference dependent on energy properties and specificity. A range of different nanostructures can be formed. Nanotubes are defined as elongated nano-objects with definite inner holes. Nanofibrils are solid on the inside, as opposed to the hollow nanotubes. Processing/Synthesis Peptide synthesis can be easily conducted by the established method of solid-phase chemistry in gram or kilogram quantities. The d-isomer conformation can be used for peptide synthesis. Nanostructures can be made by dissolving dipeptides in 1,1,1,3,3,3-hexafluoro-2-propanol at 100 mg/ml and then diluting it with water for a concentration of less than 2 mg/ml. Multiwall nanotubes with diameters of 80–300 nm, made of dipeptides from the diphenylalanine motif of Alzheimer's β-amyloid peptide are made by this method. If a thiol is introduced into the diphenylalanine then nano-spheres can be formed instead; nanospheres with diameters of 10–100 nm can also be made this way, from a diphenylglycine peptide. Characterization Atomic force microscopy can measure the mechanical properties of nanotubes. Scanning-electron and atomic-forces microscopy are used to examine Lego peptide nanofiber structures. Dynamic light scattering studies show structures of surfactant peptides. Surfactant peptides have been studied using a quick-freeze/deep–etch sample preparation method which minimizes effects on the structure. The sample nanostructures are flash frozen at −196 °C and can be studied three-dimensionally, using Transmission electron microscopy. Using computer technology, a molecular model of peptides and their interactions can be built and studied. Specific tests can be performed on certain peptides: for example, a fluorescent emission test could be applied to amyloid fibrils by using the dye Thioflavin T, which binds specifically to the peptide and emits blue fluorescence when excited. Structure Dipeptides The simplest peptide building blocks are dipeptides. Nanotubes formed from dipeptides are the widest among peptide nanotubes. An example of a dipeptide that has been studied is a peptide from the diphenylalanine motif of the Alzheimer's β-amyloid peptide. Dipeptides have also been shown to self-assemble into hydrogels, another form of nanostructures, when connected to the protecting group Fluorenylmethyloxycarbonyl chloride. Experiments focusing on the dipeptide Fmoc-Diphenylalanine have been conducted that have explored the mechanism in which Fmoc-diphenylalanine self-assembles into hydrogels via π-π interlocked β-sheets. Phenylalanine has an aromatic ring, a crucial part of the molecule due to its high electron-density, which favors self-assembly where the rings stack and enable the assembly to occur. 
Lego peptides / Ionic self-complementary peptides These peptides are approximately 5 nm in size and have 16 amino acids. The class of Lego peptides has the unique characteristics of having two distinct surfaces being either hydrophobic or hydrophilic, similar to the pegs and holes of Lego blocks. The hydrophobic side promotes self-assembly in water and the hydrophilic side has a regular arrangement of charged amino-acid residues, which in turn brings about a defined pattern of ionic bonds. The arrangement of the residues can be classified according to the order of the charges; Modulus I has a charge pattern of , modulus II , and modulus III , and so on. The peptides self-assemble into nanofibers approximately 10 nm long in the presence of alkaline cations or an addition of peptide solution. The fibers form ionic interactions with each other to form checkerboard-like matrices, which develop into a scaffold hydrogel with a high water content of larger than 99.5–99.9% and pores of 10–200 nm in diameter. These hydrogels allow neurite outgrowth and therefore can be used as scaffolds for tissue engineering. Surfactant peptides Surfactant–like peptides that undergo self-assembly in water to form nanotubes and nanovesicles have been designed using natural lipids as guides. This class of peptides has a hydrophilic head (with one or two charged amino acids such as aspartic or glutamic acids, or lysine or histidine acids) with a hydrophobic tail (with 4 or more hydrophobic amino acids such as alanine, valine, or leucine). The peptide monomers are about 2-3 nm long and consist of seven or eight amino acids; the peptide length can be adjusted by adding or removing acids. In water, surfactant peptides undergo self-assembling to form well-ordered nanotubes and nanovesicles of 30–50 nm through intermolecular hydrogen bonds and the packing of the hydrophobic tails in between the residues, like micelle formation. Transmission electron microscopy examination on quick-frozen samples of surfactant-peptide structures showed helical open-ended nanotubes. The samples also showed dynamic behaviours and some vesicle "buds" sprouting out of the peptide nanotubes. Molecular paint or carpet peptides This class of peptides undergoes self-assembling on a surface and form monolayers just few nanometers thick. These types of molecular "paint" or "carpet" peptides are able to form cell patterns, interacting with or trapping other molecules onto the surface. This class of peptides consists of three segments: the head is a ligand part, which has functional groups attached for recognition by other molecules or cell surface receptors; the middle segment is a "linker", allows the head to interact at a distance away from the surface and which also controls the flexibility and the rigidity of the peptide structure; and, at the other end of the linker, a surface anchor where a chemical group on the peptide forms a covalent bond with a particular surface. This class of peptides has the unique property of being able to change molecular structure dramatically. This property is best illustrated using an example. The DAR16-IV peptide, has 16 amino acids and forms a 5 nm β-sheet structure at ambient temperatures; a swift change in structure occurs at high temperature or a change in pH when a 2.5 nm α-helix forms. Cyclic peptides Extensive research has been performed on nanotubes formed by stacking cyclic peptides with an even number of alternating D and L amino acids. These nanotubes are the narrowest formed by peptides. 
The stacking occurs through intermolecular hydrogen bonding, and the end product is a cylindrical structure with the amino acid side chains of the peptide defining the properties of the outer surface of the tube and the peptide backbone determining the properties of the inner surface of the tube. Polymers can also be covalently attached to the peptides, in which case a polymer shell around the nanotube can be formed. By applying peptide design, the inner diameter, which is completely uniform, can be specified; the outer surface properties can also be affected by peptide design. Therefore, these cyclic nanotubes can form in a range of different environments. Property evaluation One should evaluate the properties (mechanical, electronic, optical, magnetic, etc.) of the material that has been chosen and indicate what the major differences would be if the same material were not at nanoscale. Nanotubes formed from dipeptides are stable under extreme conditions. Dry nanotubes do not degrade until 200 °C; nanotubes display exceptional chemical stability at a range of pH and in the presence of organic solvents. This is a marked difference from natural biological systems, which are often unstable and sensitive to temperature and chemical conditions. Indentation-atomic-force-microscopy experiments showed that dry nanotubes on mica have an average stiffness of 160 N/m and a high Young's modulus of 19–27 GPa. Although they are less stiff than carbon and non-carbon nanotubes, with these values these nanotubes are amongst some of the stiffest known biological materials. The mechanisms which facilitates the mechanical stiffness has been suggested to be the intermolecular hydrogen bonds and rigid aromatic side chains on the peptides. Apart from those made by cyclic peptides, the nanotubes' inner and outer surface properties have not yet been successfully independently modified. Hence, it presents a limitation that the inner and outer tube surfaces are identical. Molecular assembly mostly occurs through weak non-covalent bonds, which include: hydrogen bonds, ionic bonds, van der Waals interactions, and hydrophobic interactions. Self-assembling peptides versus carbon nanotubes Carbon nanotubes (CNTs) are another type of nanomaterial that have attracted much interest for their potential to serve as building blocks for bottom-up applications. They have excellent mechanical, electrical, and thermal properties and can be fabricated to a wide range of nanoscale diameters, making them attractive and appropriate for the development of electronic and mechanical devices. They demonstrate metal-like properties and can act as remarkable conductors. However, there are several areas where peptides have advantages over CNTs. One advantage is that peptides have almost limitless chemical functionality compared with the very limited chemical interactions that CNTs can perform due to their non-reactiveness. Furthermore, CNTs exhibits strong hydrophobicity which results in a tendency to clump in aqueous solutions and therefore have limited solubility; their electrical properties are also affected by humidity, and the presence of oxygen, N2O, and NH3. It is also difficult to produce CNTs with uniform properties and this poses serious drawbacks as the reproducibility of precise structural properties is a key concern for commercial purposes. Lastly, CNTs are expensive, with prices in the range of hundreds of dollars per gram, rendering most applications commercially unviable. 
Present and future applications The appeal of designer peptides is that they are structurally simple and are simple and affordable to produce a large scale. Cell culturing Peptide scaffolds formed from LEGO peptides have been used extensively for 3D cell culturing as they closely resemble the porosity and the structure of extra-cellular matrices. These scaffolds have also been used in cell proliferation and differentiation into desired cell types. Experimentations with rat neurons demonstrated the usefulness of LEGO peptides in cell culturing. Rat neurons that were attached to the peptides projected functional axons that followed the contours of the peptide scaffolds. Biomedical applications By examining the behaviours of the molecular 'switch' peptides, more information about interactions between proteins and, more significantly, the pathogenesis of some protein conformational diseases can be obtained. These diseases include scrapie, kuru, Huntington's, Parkinson's and Alzheimer's. Self-assembling and surfactant peptides can be used as targeting delivery systems for genes, drugs and RNAi. Research has already shown that cationic dipeptides NH2-Phe-Phe-NH2 nanovesicles, which are about 100 nm in diameter, can be absorbed into cells through endocytosis and deliver oligonucleotides into the cell; this is one example of how peptide nanostructure can in used in gene and drug delivery. It is also envisaged that water-soluble molecules and biological molecules would be able to be delivered to cells in this way. Self-assembling LEGO peptides can form biologically compatible scaffolds for tissue repair and engineering, which should be of great potential, as a large number of diseases cannot be cured by small molecule drugs; a cell-based therapy approach is needed and peptides could potentially play a huge role in this. Cyclic peptide nanotubes formed from self-assembly can act as ion channels, which form pores through the cell membrane and cause cellular osmotic collapse. Peptide can be designed to preferentially form on bacterial cell membranes and thus these tubes can perform as antibacterial and cytotoxin agents. Molecular electronics applications Molecular 'switch' peptides can be made into nanoswitches when an electronic component is incorporated. Metal nanocrystals can be covalently linked to the peptides to make them electronically responsive; research is currently being conducted on how to develop electronically controlled molecules and molecular 'machines' using such molecular 'switches'. Peptide nanofibers can also be used as growth templates for a range of inorganic materials, such as silver, gold, platinum, cobalt, nickel, and various semiconducting materials. Electrons transferring aromatic moieties can also be attached to the side chains of peptides to form conducting nanostructures that can transfer electrons in a certain direction. Metal and semiconductor binding peptides have been used for the fabrication of nanowires. Peptides self-assemble into hollow nanotubes to act as casting molds; metal ions that migrate inside the tube undergo reduction to metallic form. The peptide 'mold' can then be enzymatically destroyed to produce a metal nanowire of about 20 nm diameter. This has been done making gold nanowires and this application is especially significant because nanowires at this scale cannot be made by lithography. Researchers have also successfully developed multi-layer nanocables with a silver core nanowire, a peptide insulation layer, and a gold outer coat. 
This is done by reducing AgNO3 inside nanotubes, and then bounding a layer of thiol-containing peptides with gold particles attached. This layer acts as a nucleation site during the next step, where a process of electroless deposition layers a coating of gold on the nanotubes to form metal-insulator-metal trilayer coaxial nanocables. Peptide nanotubes are able to produce nanowires of uniform size, and this is particularly useful in the nano-electric applications as electrical and magnetic properties are sensitive to size. Nanotubes' exceptional mechanical strength and stability makes them excellent materials for application in this area. Nanotubes have also been used in developing electrochemical biosensing platforms and have proved to have great potential. Dipeptide nanotubes deposited on graphite electrodes improved electrode sensitivity; thiol-modified nanotubes deposited on gold with a coating of enzymes improved sensitivity and reproducibility for the detection of glucose and ethanol, as well as a shortened detection time, large current density, and improved stability. Nanotubes have also been successfully coated with proteins, nanocrystals, and metalloporphyrin through hydrogen bonding, and these coated tubes have great potential as chemical sensors. Designed peptides with a known structure that would self-assemble into a regular growth template would enable the self-assembly of nanoscale electronic circuits and devices. However, one issue that has yet to be resolved is the ability to control the positioning of the nanostructures. This positioning relative to substrates, to each other, and to other functional components is crucial. Although progress has been made in this domain, more work has to be completed before this control can be established. Miscellaneous applications Molecular carpet/paint peptides can be used in diverse industries. They can be used as 'nano-organizers' for non-biological materials, or could be used to study cell-cell communications and behavior. It has also been found that the catalytic abilities of the lipase enzyme is greatly improved when encapsulated in a peptide nanotube. After incubation in a nanotube for a week, the catalytic activities of the enzyme is improved by 33%, compared with free-standing lipases at room temperature; at 65 °C the improvement rises to 70%. It is suggested that the enhanced ability is due to a conformational change to an enzymatically active structure. Limitations Although well ordered nanostructures have already been successfully formed from self-assembling peptides, their potential will not be fully fulfilled until useful functionality is incorporated into the structures. Moreover, so far most of the peptide structures formed are in one or two dimensions. In contrast, in nature, most biological structures are in three dimensions. Critism has come because there is a lack of theoretical knowledge about the self-assembling behaviours of peptides. Further knowledge could prove to be very useful in facilitating rational designs and precise control of the peptide assemblies. Lastly, although an extensive amount of work is being conducted on developing self-assembling peptide-related applications, issues such as commercial viability and processability have not been paid the same amount of attention. Yet these issues must be assessed if further useful applications are to be realized. References Further reading Peptides Chemical reactions
Self-assembling peptide
[ "Chemistry" ]
4,303
[ "Biomolecules by chemical classification", "Peptides", "nan", "Molecular biology" ]
26,049,975
https://en.wikipedia.org/wiki/Kontsevich%20quantization%20formula
In mathematics, the Kontsevich quantization formula describes how to construct a generalized ★-product operator algebra from a given arbitrary finite-dimensional Poisson manifold. This operator algebra amounts to the deformation quantization of the corresponding Poisson algebra. It is due to Maxim Kontsevich. Deformation quantization of a Poisson algebra Given a Poisson algebra , a deformation quantization is an associative unital product on the algebra of formal power series in , subject to the following two axioms, If one were given a Poisson manifold , one could ask, in addition, that where the are linear bidifferential operators of degree at most . Two deformations are said to be equivalent iff they are related by a gauge transformation of the type, where are differential operators of order at most . The corresponding induced -product, , is then For the archetypal example, one may well consider Groenewold's original "Moyal–Weyl" -product. Kontsevich graphs A Kontsevich graph is a simple directed graph without loops on 2 external vertices, labeled f and g; and internal vertices, labeled . From each internal vertex originate two edges. All (equivalence classes of) graphs with internal vertices are accumulated in the set . An example on two internal vertices is the following graph, Associated bidifferential operator Associated to each graph , there is a bidifferential operator defined as follows. For each edge there is a partial derivative on the symbol of the target vertex. It is contracted with the corresponding index from the source symbol. The term for the graph is the product of all its symbols together with their partial derivatives. Here f and g stand for smooth functions on the manifold, and is the Poisson bivector of the Poisson manifold. The term for the example graph is Associated weight For adding up these bidifferential operators there are the weights of the graph . First of all, to each graph there is a multiplicity which counts how many equivalent configurations there are for one graph. The rule is that the sum of the multiplicities for all graphs with internal vertices is . The sample graph above has the multiplicity . For this, it is helpful to enumerate the internal vertices from 1 to . In order to compute the weight we have to integrate products of the angle in the upper half-plane, H, as follows. The upper half-plane is , endowed with the Poincaré metric and, for two points with , we measure the angle between the geodesic from to and from to counterclockwise. This is The integration domain is Cn(H) the space The formula amounts , where t1(j) and t2(j) are the first and second target vertex of the internal vertex . The vertices f and g are at the fixed positions 0 and 1 in . The formula Given the above three definitions, the Kontsevich formula for a star product is now Explicit formula up to second order Enforcing associativity of the -product, it is straightforward to check directly that the Kontsevich formula must reduce, to second order in , to just References Mathematical quantization
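The article cites Groenewold's Moyal-Weyl product as the archetypal deformation quantization. The sketch below expands that star product, i.e. the special case of a constant Poisson bivector on R^2, to a chosen order in hbar with SymPy, and checks that the first-order antisymmetric part reproduces the Poisson bracket as the axioms require. It is not an implementation of the general Kontsevich graph expansion, and the function names and test polynomials are illustrative.

```python
import sympy as sp

x, p, hbar = sp.symbols("x p hbar", real=True)

def d(expr, nx, npp):
    """Apply nx x-derivatives and npp p-derivatives (handles zero orders)."""
    if nx:
        expr = sp.diff(expr, x, nx)
    if npp:
        expr = sp.diff(expr, p, npp)
    return expr

def moyal(f, g, order=2):
    """Groenewold's Moyal-Weyl star product on R^2 with {x, p} = 1, truncated
    at the given order in hbar: the constant-bivector special case of a
    deformation quantization, not the general Kontsevich formula."""
    total = sp.Integer(0)
    for n in range(order + 1):
        term = sum(sp.binomial(n, k) * (-1) ** k
                   * d(f, n - k, k) * d(g, k, n - k)
                   for k in range(n + 1))
        total += (sp.I * hbar / 2) ** n / sp.factorial(n) * term
    return sp.expand(total)

f, g = x**2 * p, x * p**2
comm = sp.expand(moyal(f, g) - moyal(g, f))

# At first order in hbar the star commutator reproduces i*hbar times the
# Poisson bracket {f, g} = df/dx dg/dp - df/dp dg/dx.
poisson = sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)
assert sp.simplify(comm.coeff(hbar, 1) - sp.I * poisson) == 0
print(moyal(f, g))
```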
Kontsevich quantization formula
[ "Physics" ]
648
[ "Mathematical quantization", "Quantum mechanics" ]
26,050,094
https://en.wikipedia.org/wiki/Equivalent%20width
The equivalent width of a spectral line is a measure of the area of the line on a plot of intensity versus wavelength in relation to the underlying continuum level. It is found by forming a rectangle with a height equal to that of the continuum emission, and finding the width such that the area of the rectangle is equal to the area in the spectral line. It is a measure of the strength of spectral features that is primarily used in astronomy. Definition Formally, the equivalent width is given by the equation W_λ = ∫ (1 − F(λ)/F_0) dλ. Here, F_0 represents the underlying continuum intensity, while F(λ) represents the intensity of the actual spectrum (the line and continuum). Then W_λ represents the width of a hypothetical line which drops to an intensity of zero and has the "same integrated flux deficit from the continuum as the true one." This equation can be applied to either emission or absorption, but when applied to emission, the value of W_λ is negative, and so the absolute value is used. In other words, if the continuum level F_0 is constant and the area under/above the emission/absorption line (compared to the continuum) is A (the integral above), then W_λ = A/F_0 (further highlighting the continuum-level dependence). Therefore, for a fixed line strength (A), the equivalent width will be smaller for a brighter continuum. Applications The equivalent width is used as a quantitative measure of the strength of spectral features. The equivalent width is a convenient choice because the shapes of spectral features can vary depending upon the configuration of the system which is producing the lines. For instance, the line may experience Doppler broadening due to motions of the gas emitting the photons. The photons will be shifted away from the line center, thus rendering the height of the emission line a poor measure of its overall strength. The equivalent width, on the other hand, "measures the fraction of energy removed from the spectrum by the line," regardless of the broadening intrinsic to the line or a detector with poor resolution. Thus the equivalent width can in many conditions yield the number of absorbing or emitting atoms, by using the curve of growth. For example, measurements of the equivalent width of the Balmer alpha transition in T Tauri stars are used in order to classify individual T Tauri stars as being classical or weak-lined. Also, the equivalent width is used in studying star formation in Lyman alpha galaxies, as the equivalent width of the Lyman alpha line is related to the star formation rate in the galaxy. The equivalent width is also used in many other situations where a quantitative comparison between line strengths is needed. References External links Equivalent Width in the SAO Encyclopedia of Astronomy Emission spectroscopy Astrochemistry
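A minimal numerical sketch of the definition above: sample a model spectrum, divide out the continuum, and integrate. The Gaussian absorption profile and all numbers are purely illustrative, not observational data.

```python
import numpy as np

# Model spectrum: a flat continuum F0 with a Gaussian absorption line.
wavelength = np.linspace(6540.0, 6590.0, 2001)        # angstroms
F0 = 1.0                                              # continuum intensity
depth, center, sigma = 0.6, 6563.0, 1.2
flux = F0 - depth * np.exp(-0.5 * ((wavelength - center) / sigma) ** 2)

# Equivalent width: W = integral of (1 - F(lambda)/F0) d(lambda),
# evaluated here with the trapezoidal rule.
integrand = 1.0 - flux / F0
W = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wavelength))
print(f"equivalent width = {W:.3f} angstroms")

# Cross-check: for a Gaussian dip the integral is depth * sigma * sqrt(2*pi).
print(f"analytic value   = {depth * sigma * np.sqrt(2.0 * np.pi):.3f} angstroms")
```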
Equivalent width
[ "Physics", "Chemistry", "Astronomy" ]
524
[ "Spectrum (physical sciences)", "Emission spectroscopy", "Astrochemistry", "nan", "Spectroscopy", "Astronomical sub-disciplines" ]
27,934,732
https://en.wikipedia.org/wiki/Oak%20Ridge%20School%20of%20Reactor%20Technology
Oak Ridge School of Reactor Technology (ORSORT) was the successor of the school known locally as the Clinch College of Nuclear Knowledge, later shorten to Clinch College. ORSORT was authorized and financed by the U.S. government and founded in 1950 by Admiral Hyman G. Rickover and Alvin Weinberg. During its existence, the school was the only educational venue in the U.S. from where a comprehensive twelve-month education and training in either "Reactor Hazards Analysis" or "Reactor Operations" could be obtained, with accompanying certificates. Funding ended and the school was closed in 1965, shortly after authorization was extended to select U.S. universities to develop their own Nuclear Engineering curricula. Housed at the Oak Ridge National Laboratory, this unique venue and its renowned instructors offered its students the highest level of education of practical applications of atomic energy available at the time, and first-hand exposure to a variety of nuclear reactor designs including the legendary first graphite reactor, pool reactor, high temperature gas reactor, molten salt reactor, fast reactor and high flux reactor. The school was made known first nationally and eventually worldwide to U.S. enterprises and to U.S. allies involved in the development of peaceful uses of atomic energy, and who were interested in educating and training designated scientific and engineering personnel at its unique venue. In 1959, ORSORT accepted its first international enrollments. Applications to enroll required strict clearance from the Atomic Energy Commission. Tuition fees partially offset school operating costs. Courses listed in their 1965 curricula included Analysis, Chemical Technology, Economics of Nuclear Power, Engineering Science, Experimental Physics, Nuclear Systems Laboratory, Hazards Study, Health Physics, Instrumentation and Controls, Materials, Mathematics, Meteorology, Physics, Reactor Operating Experience and Shielding. Scientific and engineering graduates of the one-year program earned certificates of completion and were awarded the degree of Doctor of Pile Engineering (D.O.P.E.). ORSORT turned out up to 100 graduates a year, many of whom became leaders in the nuclear industry, such as a former Secretary of Energy, James D. Watkins. The total number of ORSORT graduates was 976. In addition to 19 US students from the Atomic Energy Commission, the Oak Ridge National Laboratory and US utilities, the last graduating class of 1965 included engineering and scientific personnel sponsored by their governments in Australia, India, Israel, Japan, Netherlands, Pakistan, Philippines and South Africa. References Sources: External links Oak Ridge National Laboratory Educational institutions established in 1950 Educational institutions disestablished in 1965 Defunct universities and colleges in Tennessee Nuclear history of the United States Nuclear technology Oak Ridge National Laboratory 1950 establishments in Tennessee 1965 disestablishments in Tennessee
Oak Ridge School of Reactor Technology
[ "Physics" ]
546
[ "Nuclear technology", "Nuclear physics" ]
27,936,530
https://en.wikipedia.org/wiki/Toilet%20paper%20orientation
Some toilet roll holders or dispensers allow the toilet paper to hang in front of (over) or behind (under) the roll when it is placed parallel to the wall. This divides opinions about which orientation is better. Arguments range from aesthetics, hospitality, ease of access, and cleanliness, to paper conservation, ease of detaching sheets, and compatibility with pets. This issue was the topic of a 1977 Ask Ann Landers column, where it was occasionally reconsidered and often mentioned. In a 1986 speech, Landers claimed it was the most popular column, attracting 15,000 letters. The case study of "toilet paper orientation" has been used as a teaching tool in instructing sociology students in the practice of social constructionism. Arguments The main reasons given by people to explain why they hang their toilet paper a given way are ease of grabbing and habit. The over position reduces the risk of accidentally brushing the wall or cabinet with one's knuckles, potentially transferring grime and germs; makes it easier to visually locate and to grasp the loose end; gives the option to fold over the last sheet to show that the room has been cleaned; and is generally the intended direction of viewing for the manufacturer's branding, so patterned toilet paper looks better this way. The under position provides a more tidy appearance, in that the loose end can be more hidden from view; reduces the risk of a toddler or a house pet such as a cat unrolling the toilet paper when batting at the roll; and in a recreational vehicle may reduce unrolling during driving. Partisans have claimed that each method makes it easier to tear the toilet paper on a perforated sheet boundary. The over position is shown in illustrations with the first patents for a free-hanging toilet-roll holders, issued in 1891. Various toilet paper dispensers are available which avoid the question of over or under orientation; for example, single sheet dispensers, jumbo roll dispensers in which the toilet roll is perpendicular to the wall, and twin roll dispensers. Swivelling toilet paper dispensers have been developed which allow the paper to be unrolled in either direction. Public opinion In various surveys, around 70% of people prefer the over position. Based on a survey of 1,000 Americans, Kimberly-Clark (Cottonelle) reported that "overs" are more likely than "unders" to notice a roll's direction (~75 percent), to be annoyed when the direction is "incorrect" (~25 percent), and to have flipped the direction at a friend's home (~30 percent). The same claim is made by James Buckley's The Bathroom Companion for people older than 50. Toilet paper orientation is sometimes mentioned as a hurdle for married couples. The issue may also arise in businesses and public places. At the Amundsen–Scott Research Station at the South Pole, complaints have been raised over which way to install toilet paper. It is unclear if one orientation is more economical than the other. The Orange County Register attributes a claim to Planet Green that over saves on paper usage. Uses in social studies The case study of "toilet paper orientation" is an important teaching tool in instructing sociology students in the practice of social constructionism. In the article "Bathroom Politics: Introducing Students to Sociological Thinking from the Bottom Up", Eastern Institute of Technology sociology professor Edgar Alan Burns describes some reasons toilet paper politics is worthy of examination. 
On the first day of Burns' introductory course in sociology, he asks his students, "Which way do you think a roll of toilet paper should hang?" In the following fifty minutes, the students examine why they picked their answers, exploring the social construction of "rules and practices which they have never consciously thought about before". Burns' activity has been adopted by a social psychology course at the University of Notre Dame, where it is used to illustrate the principles of Berger and Luckmann's 1966 classic The Social Construction of Reality. Christopher Peterson, a professor of psychology at the University of Michigan, classifies the choice of toilet paper orientation under "tastes, preferences, and interests" as opposed to either values or "attitudes, traits, norms, and needs". Other personal interests include one's favorite cola or baseball team. Interests are an important part of identity; one expects and prefers that different people have different interests, which serves one's "sense of uniqueness". Differences in interests usually lead at most to teasing and gentle chiding. For most people, interests do not cause the serious divisions caused by conflicts of values; a possible exception is what Peterson calls "the 'get a life' folks among us" who elevate interests into moral issues. Morton Ann Gernsbacher, a professor of psychology at the University of Wisconsin–Madison, compares the orientation of toilet paper to the orientation of cutlery in a dishwasher, the choice of which drawer in a chest of drawers to place one's socks, and the order of shampooing one's hair and lathering one's body in the shower. In each choice, there is a prototypical solution chosen by the majority, and it is tempting to offer simplistic explanations of how the minority must be different. She warns that neuroimaging experiments—which as of 2007 were beginning to probe behaviors from mental rotation and facial expressions to grocery shopping and tickling—must strive to avoid such cultural bias and stereotypes. In his book Conversational Capital, Bertrand Cesvet gives toilet paper placement as an example of ritualized behavior—one of the ways designers and marketers can create a memorable experience around a product that leads to word-of-mouth momentum. Cesvet's other examples include shaking a box of Tic Tacs and dissecting Oreo cookies. In popular culture In a 1980s episode of the Oprah Winfrey talk show, Winfrey said she was "an over girl", and when she asked the audience which configuration their view the 32% who favored the "under" configuration were booed. In 2016, relationship expert Gilda Carle created a "Toilet Paper Personality Test", surveying 2000 people on their roll preference and asking how assertive they considered themselves to be in relationships. She concluded that "people who roll over are more dominant than those who roll under". Notes References Bibliography Further reading Toilet paper Orientation (geometry) Interpersonal conflict Surveys (human research)
Toilet paper orientation
[ "Physics", "Mathematics" ]
1,319
[ "Topology", "Space", "Geometry", "Spacetime", "Orientation (geometry)" ]
27,937,015
https://en.wikipedia.org/wiki/Mixing%20angle
In particle physics and quantum mechanics, mixing angles are the angles between two sets of (complex-valued) orthogonal basis vectors, or states, usually the eigenbases of two quantum mechanical operators. The choice of angles (parameterization) is not unique but based on convention. Mathematics The relation between two eigenbases is described completely by a unitary matrix, the analogue of a rotation matrix in a complex vector space. The number of degrees of freedom in this matrix is usually reduced by removing any excess complex phase from the transformation, since in most cases that is not a measurable quantity. For two-dimensional vector space this reduces the matrix to a rotation matrix, which can be described completely by one mixing angle. In a three dimensional space there are three mixing angles and one additional complex phase parameter. Different conventions exist for how the three angles are defined, such as Euler angles. Notable mixing angles Some notable mixing angles in particle physics are: Neutrino mixing angles (PMNS matrix), describing the mixing between the mass and flavour eigenstates of neutrinos, which explains neutrino oscillations Quark mixing angles including the Cabbibo angle (CKM matrix), describing the mixing between the mass and flavour eigenstates of quarks The Weinberg angle or weak mixing angle, describing the mixing between the electromagnetic and weak forces Higgs mixing angle Particle physics Quantum mechanics
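In the two-dimensional case described above, the change of basis reduces to a rotation matrix fixed by a single mixing angle. A minimal NumPy sketch follows; the angle value is illustrative and is not a measured physical mixing angle.

```python
import numpy as np

theta = np.deg2rad(33.4)   # illustrative value only

# With excess complex phases removed, the two-dimensional change of basis
# reduces to a rotation matrix parameterized by one mixing angle:
#   |f1> =  cos(theta) |m1> + sin(theta) |m2>
#   |f2> = -sin(theta) |m1> + cos(theta) |m2>
U = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# The matrix is unitary (here real-orthogonal), so both bases stay orthonormal.
assert np.allclose(U @ U.conj().T, np.eye(2))

print("components of |f1> in the mass eigenbasis:", U[0])
```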
Mixing angle
[ "Physics" ]
288
[ "Theoretical physics", "Quantum mechanics", "Particle physics", "Particle physics stubs", "Quantum physics stubs" ]
27,942,297
https://en.wikipedia.org/wiki/Intramolecular%20reactions%20of%20diazocarbonyl%20compounds
Intramolecular reactions of diazocarbonyl compounds include addition to carbon–carbon double bonds to form fused cyclopropanes and insertion into carbon–hydrogen bonds or carbon–carbon bonds. Introduction In the presence of an appropriate transition metal (typically copper or rhodium), α-diazocarbonyl compounds are converted to transition metal carbenes, which undergo addition reactions in the presence of carbon–carbon double bonds to form cyclopropanes. Insertion into carbon–carbon or carbon–hydrogen bonds is possible in substrates lacking a double bond. The intramolecular version of this reaction forms fused carbocycles, although yields of reactions mediated by copper are typically moderate. For enantioselective cyclopropanations and insertions, both copper- and rhodium-based catalysts are employed, although the latter have been more heavily studied in recent years. (1) Mechanism and stereochemistry Prevailing mechanism The reaction mechanism of decomposition of diazocarbonyl compounds with copper begins with the formation of a copper carbene complex. Evidence for the formation of copper carbenes is provided by comparison to the behavior of photolytically generated free carbenes and the observation of appreciable enantioselectivity in cyclopropanations with chiral copper complexes. Upon formation of the copper carbene, either insertion or addition takes place to afford carbocycles or cyclopropanes, respectively. Both addition and insertion proceed with retention of configuration. Thus, diastereoselectivity may often be dictated by the configuration of the starting material. (2) Scope and limitations Either copper powder or copper salts can be used very generally for intramolecular reactions of diazocarbonyl compounds. This section describes the different types of diazocarbonyl compounds that may undergo intramolecular reactions in the presence of copper. Note that for intermolecular reactions of diazocarbonyl compounds, the use of rhodium catalysts is preferred. Diazoketones containing pendant double bonds undergo cyclopropanation in the presence of copper. The key step in one synthesis of barbaralone is the selective intramolecular cyclopropanation of a cycloheptatriene. (3) α,β-Cyclopropyl ketones may act as masked α,β-unsaturated ketones. In one example, intramolecular participation of an aryl group leads to the formation of a polycyclic ring system with complete diastereoselectivity. (4) α-Diazoesters are not as efficient as diazoketones at intramolecular cyclizations in some cases because of the propensity of esters to exist in the trans conformation about the carbon–oxygen single bond. However, intramolecular reactions of diazoesters do take place—in the example in equation (5), copper(II) sulfate is used to effect the formation of the cyclopropyl ester shown. (5) In the presence of a catalytic amount of acid, diazomethyl ketone substrates containing a pendant double bond or aryl group undergo cyclization. The mechanism of this process most likely involves protonation of the diazocarbonyl group to form a diazonium salt, followed by displacement of nitrogen by the unsaturated functionality and deprotonation. In the example below, demethylation affords a quinone. (6) When no unsaturated functionality is present in the substrate, C-H insertion may occur. C-H Insertion is particularly facile in conformationally restricted substrates in which a C-H bond is held in close proximity to the diazo group. 
(7) Transannular insertions, which form fused carbocyclic products, have also been observed. Yields are often low for these reactions, however. (8) Insertion into carbon–carbon bonds has been observed. In the example in equation (9), the methyl group is held in close proximity to the diazo group, facilitating C-C insertion. (9) Synthetic applications Intramolecular cyclopropanation of a diazoketone is applied in a racemic synthesis of sirenin. A single cyclopropane diastereomer was isolated in 55% yield after diazoketone formation and cyclization. (10) Experimental conditions and procedure Typical conditions Diazo compounds may be explosive and should be handled with care. Very often, the diazocarbonyl compound is prepared and immediately used via treatment of the corresponding acid chloride with an excess of diazomethane (see Eq. (18) below for an example). Reactions mediated by copper are typically on the order of hours, and in some cases, slow addition of the diazocarbonyl compound is necessary. Reactions should be carried out under an inert atmosphere in anhydrous conditions. Example procedure Source: (11) A solution of the olefinic acid (0.499 g, 2.25 mmol) dissolved in benzene (20 ml, freshly distilled from calcium hydride) was stirred at 0 °C (ice bath) under nitrogen while oxalyl chloride (1.35 ml, 2.0 g, 15.75 mmol) was added dropwise. The ice bath was removed and the solution was stirred at room temperature for 2 hr. The solvent and excess reagent were removed in vacuo. The resulting orange oil was dissolved in benzene (2 x 5.0 mi, freshly distilled from calcium hydride) under nitrogen. This solution was added dropwise at 0 °C (ice bath) to an anhydrous ethereal solution of diazomethane (50 ml, −20 mmol, predried over sodium metal) with vigorous stirring under nitrogen. The resulting solution was stirred at 0 °C for 1 hr and then at room temperature for 1.5 hr. The solvents and excess reagent were removed in vacuo. Tetrahydrofuran (40 ml, freshly distilled from lithium aluminum hydride) and finely divided metallic copper powder (0.67 g) were added to the crude diazo ketone, sequentially. This suspension was vigorously stirred at reflux under nitrogen for 2 hr. The resulting suspension was allowed to stir at room temperature for an additional 14 hr. The solution was filtered into water (100 ml). The mixture was shaken vigorously for 5 min and then extracted with ether (3 x 50 ml). The combined ethereal extracts were washed with saturated sodium bicarbonate solution (4 X 40 ml), water (40 ml), and saturated sodium chloride solution (40 ml), dried (Na2SO4), and concentrated in vacuo to give 0.673 g of a crude brown oil. This crude oil was chromatographed on silica gel (67 g) in a 2-cm diameter column using 10% ether-90% petroleum ether to develop the column, taking 37-ml sized fractions. Fractions 11–16 gave 0.164 g (33%) of pure ketone product: mp 64-64.5° (from pentane); IR (CCl4) 3095 (cyclopropyl CH) and 1755 cm−1 (CO); NMR (CCl4) δ 1.18 (s, 3H, CH3) 1.03 (9, 3H, CH3), 0.97 (s, 3H, CH3), and 0.90 ppm (s, 3H, CH3). Anal. Calcd for C15H22O: C, 82.52; H, 10.16. Found: C, 82.61; H, 10.01. References Organic reactions
Intramolecular reactions of diazocarbonyl compounds
[ "Chemistry" ]
1,641
[ "Organic reactions" ]
2,131,266
https://en.wikipedia.org/wiki/Compressibility%20factor
In thermodynamics, the compressibility factor (Z), also known as the compression factor or the gas deviation factor, describes the deviation of a real gas from ideal gas behaviour. It is simply defined as the ratio of the molar volume of a gas to the molar volume of an ideal gas at the same temperature and pressure. It is a useful thermodynamic property for modifying the ideal gas law to account for the real gas behaviour. In general, deviation from ideal behaviour becomes more significant the closer a gas is to a phase change, the lower the temperature or the larger the pressure. Compressibility factor values are usually obtained by calculation from equations of state (EOS), such as the virial equation which take compound-specific empirical constants as input. For a gas that is a mixture of two or more pure gases (air or natural gas, for example), the gas composition must be known before compressibility can be calculated. Alternatively, the compressibility factor for specific gases can be read from generalized compressibility charts that plot Z as a function of pressure at constant temperature. The compressibility factor should not be confused with the compressibility (also known as coefficient of compressibility or isothermal compressibility) of a material, which is the measure of the relative volume change of a fluid or solid in response to a pressure change. Definition and physical significance In thermodynamics and engineering the compressibility factor is frequently defined as Z = p/(ρ R_specific T), where p is the pressure, ρ is the density of the gas, R_specific = R/M is the specific gas constant, M being the molar mass, and T is the absolute temperature (kelvin or Rankine scale). In statistical mechanics the description is Z = pV/(nRT), where p is the pressure, n is the number of moles of gas, T is the absolute temperature, R is the gas constant, and V is the volume. For an ideal gas the compressibility factor is Z = 1 by definition. In many real world applications requirements for accuracy demand that deviations from ideal gas behaviour, i.e., real gas behaviour, be taken into account. The value of Z generally increases with pressure and decreases with temperature. At high pressures molecules are colliding more often. This allows repulsive forces between molecules to have a noticeable effect, making the molar volume of the real gas greater than the molar volume of the corresponding ideal gas, which causes Z to exceed one. When pressures are lower, the molecules are free to move. In this case attractive forces dominate, making Z < 1. The closer the gas is to its critical point or its boiling point, the more Z deviates from the ideal case. Fugacity The compressibility factor is linked to the fugacity f by the relation ln(f/p) = ∫ (Z − 1)/p dp, with the integral taken from 0 to p at constant temperature. Generalized compressibility factor graphs for pure gases The unique relationship between the compressibility factor and the reduced temperature, T_r, and the reduced pressure, p_r, was first recognized by Johannes Diderik van der Waals in 1873 and is known as the two-parameter principle of corresponding states. The principle of corresponding states expresses the generalization that the properties of a gas which are dependent on intermolecular forces are related to the critical properties of the gas in a universal way. That provides a most important basis for developing correlations of molecular properties. As for the compressibility of gases, the principle of corresponding states indicates that any pure gas at the same reduced temperature, T_r, and reduced pressure, p_r, should have the same compressibility factor.
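Relating to the definition Z = pV/(nRT) above, the following short Python sketch computes the compressibility factor from a measured pressure, molar volume and temperature and compares the result with the ideal-gas value of one; it is a minimal illustration only, and the numerical inputs are hypothetical round numbers rather than data from this article.

R = 8.314462618  # universal gas constant, J/(mol*K)

def compressibility_factor(pressure_pa, molar_volume_m3, temperature_k):
    # Z = p * V_m / (R * T); equal to 1 for an ideal gas
    return pressure_pa * molar_volume_m3 / (R * temperature_k)

# hypothetical state point: 10 bar, 300 K, molar volume 2.40e-3 m^3/mol
z = compressibility_factor(10e5, 2.40e-3, 300.0)
ideal_vm = R * 300.0 / 10e5  # molar volume the ideal gas law would predict
print(f"Z = {z:.3f} (ideal V_m would be {ideal_vm:.2e} m^3/mol)")
# Z below one indicates that attractive intermolecular forces dominate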
The reduced temperature and pressure are defined by T_r = T/T_c and p_r = p/p_c. Here T_c and p_c are known as the critical temperature and critical pressure of a gas. They are characteristics of each specific gas, with T_c being the temperature above which it is not possible to liquefy a given gas and p_c the minimum pressure required to liquefy a given gas at its critical temperature. Together they define the critical point of a fluid, above which distinct liquid and gas phases of a given fluid do not exist. The pressure-volume-temperature (PVT) data for real gases vary from one pure gas to another. However, when the compressibility factors of various single-component gases are graphed versus pressure along with temperature isotherms, many of the graphs exhibit similar isotherm shapes. In order to obtain a generalized graph that can be used for many different gases, the reduced pressure and temperature, p_r and T_r, are used to normalize the compressibility factor data. Figure 2 is an example of a generalized compressibility factor graph derived from hundreds of experimental PVT data points of 10 pure gases, namely methane, ethane, ethylene, propane, n-butane, i-pentane, n-hexane, nitrogen, carbon dioxide and steam. There are more detailed generalized compressibility factor graphs based on as many as 25 or more different pure gases, such as the Nelson-Obert graphs. Such graphs are said to have an accuracy within 1–2 percent for Z values greater than 0.6 and within 4–6 percent for Z values of 0.3–0.6. The generalized compressibility factor graphs may be considerably in error for strongly polar gases, which are gases for which the centers of positive and negative charge do not coincide. In such cases the estimate for Z may be in error by as much as 15–20 percent. The quantum gases hydrogen, helium, and neon do not conform to the corresponding-states behavior. Rao recommended that the reduced pressure and temperature for those three gases should be redefined as T_r = T/(T_c + 8) and p_r = p/(p_c + 8) to improve the accuracy of predicting their compressibility factors when using the generalized graphs, where the temperatures are in kelvins and the pressures are in atmospheres. Reading a generalized compressibility chart In order to read a compressibility chart, the reduced pressure and temperature must be known. If either the reduced pressure or temperature is unknown, the reduced specific volume must be found. Unlike the reduced pressure and temperature, the reduced specific volume is not found by using the critical volume. The reduced specific volume is defined by v_r = v p_c/(R_specific T_c), where v is the specific volume. Once two of the three reduced properties are found, the compressibility chart can be used. In a compressibility chart, reduced pressure is on the x-axis and Z is on the y-axis. When given the reduced pressure and temperature, find the given reduced pressure on the x-axis. From there, move up on the chart until the given reduced temperature is found. Z is found by looking where those two points intersect. The same process can be followed if reduced specific volume is given with either reduced pressure or temperature. Observations made from a generalized compressibility chart There are three observations that can be made when looking at a generalized compressibility chart. These observations are: Gases behave as an ideal gas regardless of temperature when the reduced pressure is much less than one (PR ≪ 1). When reduced temperature is greater than two (TR > 2), ideal-gas behavior can be assumed regardless of pressure, unless pressure is much greater than one (PR ≫ 1). 
Gases deviate from ideal-gas behavior the most in the vicinity of the critical point. Theoretical models The virial equation is especially useful to describe the causes of non-ideality at a molecular level (very few gases are mono-atomic) as it is derived directly from statistical mechanics: Z = 1 + B/V_m + C/V_m^2 + D/V_m^3 + ..., where the coefficients B, C, D, ... in the numerators are known as virial coefficients and are functions of temperature. The virial coefficients account for interactions between successively larger groups of molecules. For example, B accounts for interactions between pairs, C for interactions between three gas molecules, and so on. Because interactions between large numbers of molecules are rare, the virial equation is usually truncated after the third term. When this truncation is assumed, the compressibility factor is linked to the intermolecular-force potential φ through the second virial coefficient, Z ≈ 1 + (2π N_A/V_m) ∫ (1 − e^(−φ(r)/k_B T)) r^2 dr, with the integral taken over all separations r. The Real gas article features more theoretical methods to compute compressibility factors. Physical mechanism of temperature and pressure dependence Deviations of the compressibility factor, Z, from unity are due to attractive and repulsive intermolecular forces. At a given temperature and pressure, repulsive forces tend to make the volume larger than for an ideal gas; when these forces dominate Z is greater than unity. When attractive forces dominate, Z is less than unity. The relative importance of attractive forces decreases as temperature increases (see effect on gases). As seen above, the behavior of Z is qualitatively similar for all gases. Molecular nitrogen, N2, is used here to further describe and understand that behavior. All data used in this section were obtained from the NIST Chemistry WebBook. It is useful to note that for N2 the normal boiling point of the liquid is 77.4 K and the critical point is at 126.2 K and 34.0 bar. The figure on the right shows an overview covering a wide temperature range. At low temperature (100 K), the curve has a characteristic check-mark shape; the rising portion of the curve is very nearly directly proportional to pressure. At intermediate temperature (160 K), there is a smooth curve with a broad minimum; although the high pressure portion is again nearly linear, it is no longer directly proportional to pressure. Finally, at high temperature (400 K), Z is above unity at all pressures. For all curves, Z approaches the ideal gas value of unity at low pressure and exceeds that value at very high pressure. To better understand these curves, a closer look at the behavior for low temperature and pressure is given in the second figure. All of the curves start out with Z equal to unity at zero pressure and Z initially decreases as pressure increases. N2 is a gas under these conditions, so the distance between molecules is large, but becomes smaller as pressure increases. This increases the attractive interactions between molecules, pulling the molecules closer together and causing the volume to be less than for an ideal gas at the same temperature and pressure. Higher temperature reduces the effect of the attractive interactions and the gas behaves in a more nearly ideal manner. As the pressure increases, the gas eventually reaches the gas-liquid coexistence curve, shown by the dashed line in the figure. When that happens, the attractive interactions have become strong enough to overcome the tendency of thermal motion to cause the molecules to spread out; so the gas condenses to form a liquid. Points on the vertical portions of the curves correspond to N2 being partly gas and partly liquid. 
On the coexistence curve, there are then two possible values for Z, a larger one corresponding to the gas and a smaller value corresponding to the liquid. Once all the gas has been converted to liquid, the volume decreases only slightly with further increases in pressure; then Z is very nearly proportional to pressure. As temperature and pressure increase along the coexistence curve, the gas becomes more like a liquid and the liquid becomes more like a gas. At the critical point, the two are the same. So for temperatures above the critical temperature (126.2 K), there is no phase transition; as pressure increases the gas gradually transforms into something more like a liquid. Just above the critical point there is a range of pressure for which Z drops quite rapidly (see the 130 K curve), but at higher temperatures the process is entirely gradual. The final figure shows the behavior at temperatures well above the critical temperature. The repulsive interactions are essentially unaffected by temperature, but the attractive interactions have less and less influence. Thus, at sufficiently high temperature, the repulsive interactions dominate at all pressures. This can be seen in the graph showing the high temperature behavior. As temperature increases, the initial slope becomes less negative, the pressure at which Z is a minimum gets smaller, and the pressure at which repulsive interactions start to dominate, i.e. where Z goes from less than unity to greater than unity, gets smaller. At the Boyle temperature (327 K for N2), the attractive and repulsive effects cancel each other at low pressure. Then Z remains at the ideal gas value of unity up to pressures of several tens of bar. Above the Boyle temperature, the compressibility factor is always greater than unity and increases slowly but steadily as pressure increases. Experimental values It is extremely difficult to generalize at what pressures or temperatures the deviation from the ideal gas becomes important. As a rule of thumb, the ideal gas law is reasonably accurate up to a pressure of about 2 atm, and even higher for small non-associating molecules. For example, for methyl chloride, a highly polar molecule with significant intermolecular forces, the experimental value of the compressibility factor at a pressure of 10 atm and a temperature of 100 °C is noticeably below unity. For air (small non-polar molecules) at approximately the same conditions, the compressibility factor stays very close to unity (see the tabulated data for 10 bars and 400 K referenced below). Compressibility of air Normal air comprises, in crude numbers, 80 percent nitrogen (N2) and 20 percent oxygen (O2). Both molecules are small and non-polar (and therefore non-associating). We can therefore expect that the behaviour of air within broad temperature and pressure ranges can be approximated as an ideal gas with reasonable accuracy. Experimental values for the compressibility factor confirm this. Z values are calculated from values of pressure, volume (or density), and temperature in Vasserman, Kazavchinskii, and Rabinovich, "Thermophysical Properties of Air and Air Components," Moscow, Nauka, 1966, and NBS-NSF Trans. TT 70-50095, 1971; and Vasserman and Rabinovich, "Thermophysical Properties of Liquid Air and Its Components," Moscow, 1968, and NBS-NSF Trans. 69-55092, 1970. See also Fugacity Real gas Theorem of corresponding states Van der Waals equation References External links Compressibility factor (gases) A Citizendium article. Real Gases includes a discussion of compressibility factors. Chemical engineering thermodynamics Gas laws
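As a small addendum to the virial-equation discussion in the Theoretical models section above, the following Python sketch evaluates the truncated form Z = 1 + B/V_m + C/V_m^2; the coefficient values are hypothetical placeholders chosen only to show the trend of Z falling below one when attractive (negative B) interactions dominate, and are not data from this article.

def virial_z(molar_volume, b_coeff, c_coeff):
    # compressibility factor from the virial equation truncated after the third term
    return 1.0 + b_coeff / molar_volume + c_coeff / molar_volume**2

# hypothetical virial coefficients (B in m^3/mol, C in m^6/mol^2)
B, C = -1.5e-4, 1.0e-8
for vm in (2.4e-3, 1.0e-3, 2.5e-4):  # shrinking molar volume mimics rising pressure
    print(f"V_m = {vm:.1e} m^3/mol -> Z = {virial_z(vm, B, C):.3f}")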
Compressibility factor
[ "Chemistry", "Engineering" ]
2,837
[ "Chemical engineering", "Chemical engineering thermodynamics", "Gas laws" ]
2,131,962
https://en.wikipedia.org/wiki/Feshbach%20resonance
In physics, a Feshbach resonance can occur upon collision of two slow atoms, when they temporarily stick together forming an unstable compound with short lifetime (a so-called resonance). It is a feature of many-body systems in which a bound state is achieved if the coupling(s) between at least one internal degree of freedom and the reaction coordinates, which lead to dissociation, vanish. The opposite situation, when a bound state is not formed, is a shape resonance. It is named after Herman Feshbach, a physicist at MIT. Feshbach resonances have become important in the study of cold atom systems, including Fermi gases and Bose–Einstein condensates (BECs). In the context of scattering processes in many-body systems, the Feshbach resonance occurs when the energy of a bound state of an interatomic potential is equal to the kinetic energy of a colliding pair of atoms. In experimental settings, Feshbach resonances provide a way to vary the interaction strength between atoms in the cloud by changing the scattering length, a_sc, of elastic collisions. For atomic species that possess these resonances (like 39K and 40K), it is possible to vary the interaction strength by applying a uniform magnetic field. Among many uses, this tool has served to explore the transition from a BEC of fermionic molecules to weakly interacting fermion pairs (the BCS regime) in Fermi clouds. For BECs, Feshbach resonances have been used to study a spectrum of systems from non-interacting ideal Bose gases to the unitary regime of interactions. Introduction Consider a general quantum scattering event between two particles. In this reaction, there are two reactant particles denoted by A and B, and two product particles denoted by A' and B'. For the case of a reaction (such as a nuclear reaction), we may denote this scattering event by A + B → A' + B'. The combination of the species and quantum states of the two reactant particles before or after the scattering event is referred to as a reaction channel. Specifically, the species and states of A and B constitute the entrance channel, while the types and states of A' and B' constitute the exit channel. An energetically accessible reaction channel is referred to as an open channel, whereas a reaction channel forbidden by energy conservation is referred to as a closed channel. Consider the interaction of two particles A and B in an entrance channel C. The positions of these two particles are given by r_A and r_B, respectively. The interaction energy of the two particles will usually depend only on the magnitude of the separation R = |r_A − r_B|, and this function, sometimes referred to as a potential energy curve, is denoted by V_C(R). Often, this potential will have a pronounced minimum and thus admit bound states. The total energy of the two particles in the entrance channel is E = E_kin + E_C(q), where E_kin denotes the total kinetic energy of the relative motion (center-of-mass motion plays no role in the two-body interaction), E_C(q) is the contribution to the energy from couplings to external fields, and q represents a vector of one or more parameters such as magnetic field or electric field. We consider now a second reaction channel, denoted by D, which is closed for large values of R. Let this potential curve V_D(R) admit a bound state with energy E_D. A Feshbach resonance occurs when E ≈ E_D for some range of parameter vectors q. 
When this condition is met, then any coupling between channel C and channel D can give rise to significant mixing between the two channels; this manifests itself as a drastic dependence of the outcome of the scattering event on the parameter or parameters that control the energy of the entrance channel. These couplings can arise from spin-exchange interactions or relativistic spin-dependent interactions. Magnetic Feshbach resonance In ultracold atomic experiments, the resonance is controlled via the magnetic field and we assume that the kinetic energy is approximately 0. Since the channels differ in internal degrees of freedom such as spin and angular momentum, their difference in energy depends on the magnetic field B through the Zeeman effect. The scattering length is modified as a(B) = a_bg (1 − Δ/(B − B_0)), where a_bg is the background scattering length, B_0 is the magnetic field strength at which the resonance occurs, and Δ is the resonance width. This allows for manipulation of the scattering length to 0 or arbitrarily high values. As the magnetic field is swept through the resonance, the states in the open and closed channel can also mix, and a large fraction of the atoms, sometimes with near 100% efficiency, converts to Feshbach molecules. These molecules occupy high vibrational states, so they then need to be transferred to lower, more stable states to prevent dissociation. This can be done through stimulated emission or other optical techniques such as STIRAP. Other methods include inducing stimulated emission through an oscillating magnetic field and atom-molecule thermalization. Feshbach resonances in avoided crossings In molecules, the nonadiabatic couplings between two adiabatic potentials form the avoided crossing (AC) region. The rovibronic resonances in the AC region of two coupled potentials are very special, since they are not in the bound-state region of the adiabatic potentials; they usually do not play important roles in scattering and are less discussed. Yu Kun Yang et al. studied this problem in New J. Phys. 22 (2020). Taking particle scattering as an example, resonances in the AC region are comprehensively investigated. The effects of resonances in the AC region on the scattering cross sections strongly depend on the nonadiabatic couplings of the system; they can be very significant, appearing as sharp peaks, or inconspicuous, buried in the background. More importantly, the work shows that a simple quantity proposed by Zhu and Nakamura to classify the coupling strength of nonadiabatic interactions can be applied to quantitatively estimate the importance of resonances in the AC region. Unstable state A virtual state, or unstable state, is a bound or transient state which can decay into a free state or relax at some finite rate. This state may be the metastable state of a certain class of Feshbach resonance: a special case of a Feshbach-type resonance occurs when the energy level lies near the very top of the potential well. Such a state is called 'virtual' and may be further contrasted with a shape resonance, depending on the angular momentum. Because of their transient existence, such states can require special techniques for analysis and measurement. See also Resonance (particle physics) Fano resonance Feshbach–Fano partitioning References Atomic physics
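As a worked illustration of the magnetic tuning relation a(B) = a_bg (1 − Δ/(B − B_0)) given above, here is a minimal Python sketch that evaluates the scattering length across a field sweep; the resonance parameters are hypothetical round numbers chosen for illustration, not measured values from this article.

def scattering_length(b_field, a_bg, b0, width):
    # field-dependent s-wave scattering length near a magnetic Feshbach resonance
    return a_bg * (1.0 - width / (b_field - b0))

# hypothetical resonance: a_bg = 100 (in units of the Bohr radius), B0 = 200 G, width = 10 G
for b in (150.0, 195.0, 205.0, 210.0, 250.0):
    print(f"B = {b:6.1f} G -> a = {scattering_length(b, 100.0, 200.0, 10.0):8.1f} a0")
# the scattering length diverges as B approaches B0 and crosses zero at B = B0 + width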
Feshbach resonance
[ "Physics", "Chemistry" ]
1,313
[ "Quantum mechanics", "Atomic physics", "Atomic, molecular, and optical physics" ]
2,133,133
https://en.wikipedia.org/wiki/Visible%20light%20communication
In telecommunications, visible light communication (VLC) is the use of visible light (light with a frequency of 400–800 THz/wavelength of 780–375 nm) as a transmission medium. VLC is a subset of optical wireless communications technologies. The technology uses fluorescent lamps (ordinary lamps, not special communications devices) to transmit signals at 10 kbit/s, or LEDs for up to 500 Mbit/s over short distances. Systems such as RONJA can transmit at full Ethernet speed (10 Mbit/s) over distances of 1.4 km. Specially designed electronic devices generally containing a photodiode receive signals from light sources, although in some cases a cell phone camera or a digital camera will be sufficient. The image sensor used in these devices is in fact an array of photodiodes (pixels) and in some applications its use may be preferred over a single photodiode. Such a sensor may provide either multi-channel reception (down to 1 pixel = 1 channel) or spatial awareness of multiple light sources. VLC can be used as a communications medium for ubiquitous computing, because light-producing devices (such as indoor/outdoor lamps, TVs, traffic signs, commercial displays and car headlights/taillights) are used everywhere. Uses One of the main characteristics of VLC is the inability of light to pass through opaque physical barriers. This characteristic can be considered a weak point of VLC, due to its susceptibility to interference from physical objects, but it is also one of its many strengths: unlike radio waves, light waves are confined to the enclosed spaces in which they are transmitted, which enforces a physical security barrier requiring a receiver of the signal to have physical access to the place where the transmission is occurring. A promising application of VLC is the Indoor Positioning System (IPS), an analogue to GPS built to operate in enclosed spaces where GPS satellite transmissions cannot reach. For instance, commercial buildings, shopping malls, parking garages, as well as subways and tunnel systems are all possible applications for VLC-based indoor positioning systems. Additionally, since VLC lamps can provide lighting at the same time as data transmission, they can simply take over the installations of traditional single-function lamps. Other applications for VLC involve communication between appliances of a smart home or office. With an increasing number of IoT-capable devices, connectivity through traditional radio waves might be subject to interference. Light bulbs with VLC capabilities can transmit data and commands for such devices. History The history of visible light communications dates back to the 1880s in Washington, D.C., when the Scottish-born scientist Alexander Graham Bell invented the photophone, which transmitted speech on modulated sunlight over several hundred meters. This pre-dates the transmission of speech by radio. More recent work began in 2003 at Nakagawa Laboratory at Keio University, Japan, using LEDs to transmit data by visible light. Since then there have been numerous research activities focussed on VLC. In 2006, researchers from CICTR at Penn State proposed a combination of power line communication (PLC) and white light LED to provide broadband access for indoor applications. This research suggested that VLC could be deployed as a perfect last-mile solution in the future. 
In January 2010 a team of researchers from Siemens and the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, in Berlin, demonstrated transmission at 500 Mbit/s with a white LED over a short distance, and 100 Mbit/s over a longer distance using five LEDs. The VLC standardization process is conducted within the IEEE 802.15.7 working group. In December 2010 St. Cloud, Minnesota, signed a contract with LVX Minnesota and became the first to commercially deploy this technology. In July 2011 a presentation at TED Global gave a live demonstration of high-definition video being transmitted from a standard LED lamp, and proposed the term Li-Fi to refer to a subset of VLC technology. Recently, VLC-based indoor positioning systems have become an attractive topic. ABI Research forecasts that they could be a key solution to unlocking the $5 billion "indoor location market". Publications have been coming from Nakagawa Laboratory, COWA at Penn State, and other researchers around the world; ByteLight filed a patent on a light positioning system using LED digital pulse recognition in March 2012. Another recent application is in the world of toys, thanks to cost-efficient and low-complexity implementation, which only requires one microcontroller and one LED as optical front-end. VLC can be used to provide security; it is especially useful in body sensor networks and personal area networks. Recently, organic LEDs (OLEDs) have been used as optical transceivers to build VLC communication links of up to 10 Mbit/s. In October 2014, Axrtek launched a commercial bidirectional RGB LED VLC system called MOMO that transmits down and up at speeds of 300 Mbit/s and with a range of 25 feet. In May 2015, Philips collaborated with supermarket company Carrefour to deliver VLC location-based services to shoppers' smartphones in a hypermarket in Lille, France. In June 2015, two Chinese companies, Kuang-Chi and Ping An Bank, partnered to introduce a payment card that communicates information through a unique visible light signal. In March 2017, Philips set up the first VLC location-based service for shoppers' smartphones in Germany. The installation was presented at EuroShop in Düsseldorf (5–9 March). The first supermarket in Germany to use the system is an Edeka supermarket in Düsseldorf-Bilk; the system achieves a positioning accuracy of 30 centimetres, which meets the special demands of food retail. Indoor positioning systems based on VLC can be used in places such as hospitals, eldercare homes, warehouses, and large, open offices to locate people and control indoor robotic vehicles. There is also a wireless network concept that uses visible light for data transmission but does not rely on intensity modulation of optical sources; the idea is to use a vibration generator instead of modulated optical sources for data transmission. Modulation Techniques In order to send data, modulation of the light is required. A modulation is the form in which the light signal varies in order to represent the different symbols that are to be decoded. Unlike radio transmission, a VLC modulation requires the light signal to be modulated around a positive dc value, responsible for the lighting aspect of the lamp. The modulation will thus be an alternating signal around the positive dc level, with a high enough frequency to be imperceptible to the human eye. 
Due to this superposition of signals, implementations of VLC transmitters usually require a high-efficiency, higher-power, slower-response DC converter responsible for the LED bias that provides lighting, alongside a lower-efficiency, lower-power, but faster-responding amplifier in order to synthesize the required AC current modulation. There are several modulation techniques available, forming three main groups: Single-Carrier Modulated Transmission (SCMT), Multi-Carrier Modulated Transmission (MCMT) and Pulse-Based Transmission (PBT). Single-Carrier Modulated Transmission Single-Carrier Modulated Transmission comprises modulation techniques established for traditional forms of transmission, such as radio. A sinusoidal wave is added to the lighting dc level, allowing digital information to be coded in the characteristics of the wave. By keying between two or several different values of a given characteristic, symbols attributed to each value are transmitted on the light link. Possible techniques are Amplitude Shift Keying (ASK), Phase Shift Keying (PSK) and Frequency Shift Keying (FSK). Out of these three, FSK is capable of higher bitrate transmission, since it allows more symbols to be easily differentiated by frequency. An additional technique called Quadrature Amplitude Modulation (QAM) has also been proposed, where both amplitude and phase of the sinusoidal voltage are keyed simultaneously in order to increase the possible number of symbols. Multi-Carrier Modulated Transmission Multi-Carrier Modulated Transmission works in the same way as Single-Carrier Modulated Transmission methods, but embeds two or more modulated sinusoidal waves for data transmission. This type of modulation is among the hardest and most complex to synthesize and decode. However, it has the advantage of excelling in multipath transmission, where the receiver is not in direct view of the transmitter and the transmission therefore depends on reflection of the light off other surfaces. Pulse-Based Transmission Pulse-Based Transmission encompasses modulation techniques in which the data is encoded not on a sinusoidal wave, but on a pulsed wave. Unlike sinusoidal alternating signals, in which the periodic average will always be null, pulsed waves based on high-low states present inherent nonzero average values. This brings two main advantages for Pulse-Based Transmission modulations: they can be implemented with a single high-power, high-efficiency, slow-response dc converter and an additional power switch operating at high speed to deliver current to the LED at determined instants. Since the average value depends on the pulse width of the data signal, the same switch that performs the data transmission can provide dimming control, greatly simplifying the dc converter. Due to these important implementation advantages, these dimming-capable modulations have been standardized in IEEE 802.15.7, which describes three modulation techniques: On-Off Keying (OOK), Variable Pulse Position Modulation (VPPM) and Color Shift Keying (CSK). On-Off Keying In the On-Off Keying technique, the LED is switched on and off repeatedly, and the symbols are differentiated by the pulse width, with a wider pulse representing the logical high '1' and narrower pulses representing the logical low '0'. 
Because the data is encoded in the pulse width, the information sent will affect the dimming level if not corrected: for instance, a bitstream with several high values '1' will appear brighter than a bitstream with several low values '0'. In order to fix this problem, the modulation requires a compensation pulse that is inserted into the data period whenever necessary to equalize the overall brightness. The lack of this compensation symbol could introduce perceived flickering, which is undesirable. Because of the additional compensation pulse, modulating this wave is slightly more complex than modulating the VPPM. However, the information encoded in the pulse width is easy to differentiate and decode, so the complexity of the transmitter is balanced by the simplicity of the receiver. Variable Pulse Position Modulation Variable Pulse Position Modulation also switches the LED on and off repeatedly, but encodes the symbols in the pulse position inside the data period. Whenever the pulse is located at the immediate beginning of the data period, the transmitted symbol is standardized as logical low '0', with logical high '1' being composed of pulses that end with the data period. Because the information is encoded in the location of the pulse inside the data period, both pulses can and will have the same width, and thus no compensation symbol is required. Dimming is performed by the transmitting algorithm, which selects the width of the data pulses accordingly. The lack of a compensation pulse makes VPPM marginally simpler to encode when compared to OOK. However, a slightly more complex demodulation offsets that simplicity in the VPPM technique. This decoding complexity mostly comes from the information being encoded at different rising edges for each symbol, which makes the sampling harder in a microcontroller. Additionally, in order to decode the location of a pulse within the data period, the receiver must be synchronized with the transmitter, knowing exactly when a data period starts and how long it lasts. These characteristics make the demodulation of a VPPM signal slightly more difficult to implement. Color Shift Keying Color shift keying (CSK), outlined in IEEE 802.15.7, is an intensity-modulation-based modulation scheme for VLC. CSK is intensity-based, as the modulated signal takes on an instantaneous color equal to the physical sum of three (red/green/blue) LED instantaneous intensities. This modulated signal jumps instantaneously, from symbol to symbol, across different visible colors; hence, CSK can be construed as a form of frequency shifting. However, this instantaneous variation in the transmitted color is not humanly perceptible, because of the limited temporal sensitivity of human vision: the "critical flicker fusion threshold" (CFF) and the "critical color fusion threshold" (CCF), both of which cannot resolve temporal changes shorter than 0.01 second. The LEDs' transmissions are, therefore, preset to time-average (over the CFF and the CCF) to a specific time-constant color. Humans can thus perceive only this preset color that seems constant over time, but cannot perceive the instantaneous color that varies rapidly in time. In other words, CSK transmission maintains a constant time-averaged luminous flux, even as its symbol sequence varies rapidly in chromaticity. 
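To make the pulse-based schemes above concrete, the following minimal Python sketch encodes a bit string in an OOK-like fashion, mapping each bit to a wide or narrow pulse within a fixed data period and appending all-on or all-off compensation periods so that the average on-time stays near a requested dimming level; the slot count, duty cycles and dimming target are hypothetical parameters chosen for illustration and are not taken from IEEE 802.15.7.

SLOTS_PER_PERIOD = 10          # time resolution of one data period (hypothetical)
ONE_DUTY, ZERO_DUTY = 7, 3     # wide pulse encodes '1', narrow pulse encodes '0'

def encode_period(duty):
    # one data period as on/off slots: 'duty' slots on, the remainder off
    return [1] * duty + [0] * (SLOTS_PER_PERIOD - duty)

def ook_encode(bits, target_duty=5):
    # encode the bits, then pad with compensation periods to steady the brightness
    frame = []
    for bit in bits:
        frame += encode_period(ONE_DUTY if bit == "1" else ZERO_DUTY)
    def mean_duty():
        return sum(frame) / len(frame) * SLOTS_PER_PERIOD
    while mean_duty() > target_duty + 0.5:
        frame += encode_period(0)                  # all-off compensation period
    while mean_duty() < target_duty - 0.5:
        frame += encode_period(SLOTS_PER_PERIOD)   # all-on compensation period
    return frame

frame = ook_encode("1101", target_duty=5)
print(len(frame), "slots, mean duty:", round(sum(frame) / len(frame) * SLOTS_PER_PERIOD, 2))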
See also Electric beacon Fiber-optic communication Free space optics Free-space optical communication IrDA—Same principle as VLC but uses infrared light instead of visible light Li-Fi Optical wireless communications RONJA References Further reading David G. Aviv (2006): Laser Space Communications, ARTECH HOUSE. External links IEEE 802.15 WPAN Task Group 7 (TG7) Visible Light Communication Optical communications Telecommunications Light
Visible light communication
[ "Physics", "Technology", "Engineering" ]
2,764
[ "Information and communications technology", "Optical communications", "Physical phenomena", "Telecommunications engineering", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Telecommunications", "Waves", "Light" ]
2,133,253
https://en.wikipedia.org/wiki/Space%E2%80%93time%20trellis%20code
Space–time trellis codes (STTCs) are a type of space–time code used in multiple-antenna wireless communications. This scheme transmits multiple, redundant copies of a generalised TCM signal distributed over time and a number of antennas ('space'). These multiple, 'diverse' copies of the data are used by the receiver to attempt to reconstruct the actual transmitted data. For an STC to be used, there must necessarily be multiple transmit antennas, but only a single receive antenna is required; nevertheless multiple receive antennas are often used since the performance of the system is improved by the resulting spatial diversity. In contrast to space–time block codes (STBCs), they are able to provide both coding gain and diversity gain and have a better bit-error rate performance. In essence they marry single-channel continuous-time coding with the signaling protocol being used, and extend that with a multi-antenna framework. However, that also means they are more complex than STBCs to encode and decode; they rely on a Viterbi decoder at the receiver where STBCs need only linear processing. Also, whereas in a single-transmitter, single-receiver framework the Viterbi algorithm (or one of the sequential decoding algorithms) only has to proceed over a trellis in a single time dimension, here the optimal decoding also has to take into consideration the number of antennas, leading to an additional polynomial complexity term. STTCs were discovered by Vahid Tarokh et al. in 1998. References Wireless
Space–time trellis code
[ "Engineering" ]
318
[ "Wireless", "Telecommunications engineering" ]
2,133,279
https://en.wikipedia.org/wiki/Space%E2%80%93time%20block%20code
Space–time block coding is a technique used in wireless communications to transmit multiple copies of a data stream across a number of antennas and to exploit the various received versions of the data to improve the reliability of data transfer. The fact that the transmitted signal must traverse a potentially difficult environment with scattering, reflection, refraction and so on and may then be further corrupted by thermal noise in the receiver means that some of the received copies of the data may be closer to the original signal than others. This redundancy results in a higher chance of being able to use one or more of the received copies to correctly decode the received signal. In fact, space–time coding combines all the copies of the received signal in an optimal way to extract as much information from each of them as possible. Introduction Most work on wireless communications until the early 1990s had focused on having an antenna array at only one end of the wireless link — usually at the receiver. Seminal papers by Gerard J. Foschini and Michael J. Gans, Foschini and Emre Telatar enlarged the scope of wireless communication possibilities by showing that for the highly scattering environment, substantial capacity gains are enabled when antenna arrays are used at both ends of a link. An alternative approach to utilizing multiple antennas relies on having multiple transmit antennas and only optionally multiple receive antennas. Proposed by Vahid Tarokh, Nambi Seshadri and Robert Calderbank, these space–time codes (STCs) achieve significant error rate improvements over single-antenna systems. Their original scheme was based on trellis codes but the simpler block codes were utilised by Siavash Alamouti, and later Vahid Tarokh, Hamid Jafarkhani and Robert Calderbank to develop space–time block-codes (STBCs). STC involves the transmission of multiple redundant copies of data to compensate for fading and thermal noise in the hope that some of them may arrive at the receiver in a better state than others. In the case of STBC in particular, the data stream to be transmitted is encoded in blocks, which are distributed among spaced antennas and across time. While it is necessary to have multiple transmit antennas, it is not necessary to have multiple receive antennas, although to do so improves performance. This process of receiving diverse copies of the data is known as diversity reception and is what was largely studied until Foschini's 1998 paper. An STBC is usually represented by a matrix. Each row represents a time slot and each column represents one antenna's transmissions over time. Here, s_ij is the modulated symbol to be transmitted in time slot i from antenna j. There are to be T time slots and n_T transmit antennas as well as n_R receive antennas. This block is usually considered to be of 'length' T. The code rate of an STBC measures how many symbols per time slot it transmits on average over the course of one block. If a block encodes k symbols, the code rate is r = k/T. Only one standard STBC can achieve full rate (rate 1): Alamouti's code. Orthogonality STBCs as originally introduced, and as usually studied, are orthogonal. This means that the STBC is designed such that any two distinct columns taken from the coding matrix are orthogonal. The result of this is simple, linear, optimal decoding at the receiver. Its most serious disadvantage is that all but one of the codes that satisfy this criterion must sacrifice some proportion of their data rate (see Alamouti's code). 
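To illustrate the orthogonality property just described, here is a minimal Python sketch that builds the 2x2 Alamouti coding matrix (introduced below in the Encoding section) for a pair of example symbols and checks numerically that its columns are mutually orthogonal; the symbol values are arbitrary placeholders and the helper names are hypothetical.

def alamouti_block(s1, s2):
    # Alamouti 2x2 space-time block: rows are time slots, columns are antennas
    return [[s1, s2],
            [-s2.conjugate(), s1.conjugate()]]

def column_inner_product(block, col_a, col_b):
    # Hermitian inner product of two columns of the coding matrix
    return sum(block[t][col_a].conjugate() * block[t][col_b] for t in range(len(block)))

block = alamouti_block(complex(1, 1), complex(-1, 1))   # arbitrary QPSK-like symbols
print(column_inner_product(block, 0, 1))   # 0j, so the two columns are orthogonal
print(column_inner_product(block, 0, 0))   # total symbol energy carried by column 0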
Moreover, there exist quasi-orthogonal STBCs that achieve higher data rates at the cost of inter-symbol interference (ISI). Thus, their error-rate performance is lower-bounded by that of orthogonal rate-1 STBCs, which provide ISI-free transmission due to orthogonality. Design of STBCs The design of STBCs is based on the so-called diversity criterion derived by Tarokh et al. in their earlier paper on space–time trellis codes. Orthogonal STBCs can be shown to achieve the maximum diversity allowed by this criterion. Diversity criterion Call c a codeword and call e an erroneously decoded received codeword. Then the difference matrix B(c, e), built from the element-wise differences between c and e, has to be full rank for any pair of distinct codewords c and e to give the maximum possible diversity order of n_T n_R. If instead B(c, e) has minimum rank b over the set of pairs of distinct codewords, then the space–time code offers diversity order b n_R. An examination of the example STBCs shown below reveals that they all satisfy this criterion for maximum diversity. STBCs offer only diversity gain (compared to single-antenna schemes) and not coding gain. There is no coding scheme included here — the redundancy purely provides diversity in space and time. This is in contrast with space–time trellis codes, which provide both diversity and coding gain since they spread a conventional trellis code over space and time. Encoding Alamouti's code Siavash Alamouti invented the simplest of all the STBCs in 1998, although he did not coin the term "space–time block code" himself. It was designed for a two-transmit-antenna system and has the coding matrix with first row (s_1, s_2) and second row (−s_2*, s_1*), where * denotes the complex conjugate. It is readily apparent that this is a rate-1 code. It takes two time slots to transmit two symbols. Using the optimal decoding scheme discussed below, the bit-error rate (BER) of this STBC is equivalent to 2n_R-branch maximal ratio combining (MRC). This is a result of the perfect orthogonality between the symbols after receive processing — there are two copies of each symbol transmitted and 2n_R copies received. This is a very special STBC. It is the only orthogonal STBC that achieves rate 1. That is to say that it is the only STBC that can achieve its full diversity gain without needing to sacrifice its data rate. Strictly, this is only true for complex modulation symbols. Since almost all constellation diagrams rely on complex numbers however, this property usually gives Alamouti's code a significant advantage over the higher-order STBCs even though they achieve a better error-rate performance. See 'Rate limits' for more detail. The significance of Alamouti's proposal in 1998 is that it was the first demonstration of a method of encoding which enables full diversity with linear processing at the receiver. Earlier proposals for transmit diversity required processing schemes which scaled exponentially with the number of transmit antennas. Furthermore, it was the first open-loop transmit diversity technique which had this capability. Subsequent generalizations of Alamouti's concept have led to a tremendous impact on the wireless communications industry. Higher order STBCs Tarokh et al. discovered a set of STBCs that are particularly straightforward, and coined the scheme's name. They also proved that no code for more than 2 transmit antennas could achieve full rate. Their codes have since been improved upon (both by the original authors and by many others). Nevertheless, they serve as clear examples of why the rate cannot reach 1, and what other problems must be solved to produce 'good' STBCs. 
They also demonstrated the simple, linear decoding scheme that goes with their codes under perfect channel state information assumption. 3 transmit antennas Two straightforward codes for 3 transmit antennas are: These codes achieve rate-1/2 and rate-3/4 respectively. These two matrices give examples of why codes for more than two antennas must sacrifice rate — it is the only way to achieve orthogonality. One particular problem with is that it has uneven power among the symbols it transmits. This means that the signal does not have a constant envelope and that the power each antenna must transmit has to vary, both of which are undesirable. Modified versions of this code that overcome this problem have since been designed. 4 transmit antennas Two straightforward codes for 4 transmit antennas are: These codes achieve rate-1/2 and rate-3/4 respectively, as for their 3-antenna counterparts. exhibits the same uneven power problems as . An improved version of is which has equal power from all antennas in all time-slots. Decoding One particularly attractive feature of orthogonal STBCs is that maximum likelihood decoding can be achieved at the receiver with only linear processing. In order to consider a decoding method, a model of the wireless communications system is needed. At time , the signal received at antenna is: where is the path gain from transmit antenna to receive antenna , is the signal transmitted by transmit antenna and is a sample of additive white Gaussian noise (AWGN). The maximum-likelihood detection rule is to form the decision variables where is the sign of in the th row of the coding matrix, denotes that is (up to a sign difference), the element of the coding matrix, for and then decide on constellation symbol that satisfies with the constellation alphabet. Despite its appearance, this is a simple, linear decoding scheme that provides maximal diversity. Rate limits Apart from there being no full-rate, complex, orthogonal STBC for more than 2 antennas, it has been further shown that, for more than two antennas, the maximum possible rate is 3/4. Codes have been designed which achieve a good proportion of this, but they have very long block-length. This makes them unsuitable for practical use, because decoding cannot proceed until all transmissions in a block have been received, and so a longer block-length, , results in a longer decoding delay. One particular example, for 16 transmit antennas, has rate-9/16 and a block length of 22 880 time-slots! It has been proven that the highest rate any -antenna code can achieve is where or , if no linear processing is allowed in the code matrix (the above maximal rate proved in only applies to the original definition of orthogonal designs, i.e., any entry in the matrix is , or , which forces that any variable can not be repeated in any column of the matrix). This rate limit is conjectured to hold for any complex orthogonal space–time block codes even when any linear processing is allowed among the complex variables. Closed-form recursive designs have been found. Quasi-orthogonal STBCs These codes exhibit partial orthogonality and provide only part of the diversity gain mentioned above. An example reported by Hamid Jafarkhani is: The orthogonality criterion only holds for columns (1 and 2), (1 and 3), (2 and 4) and (3 and 4). Crucially, however, the code is full-rate and still only requires linear processing at the receiver, although decoding is slightly more complex than for orthogonal STBCs. 
Results show that this Q-STBC outperforms (in a bit-error rate sense) the fully orthogonal 4-antenna STBC over a good range of signal-to-noise ratios (SNRs). At high SNRs, though (above about 22 dB in this particular case), the increased diversity offered by orthogonal STBCs yields a better BER. Beyond this point, the relative merits of the schemes have to be considered in terms of useful data throughput. Q-STBCs have also been developed considerably from the basic example shown. See also Multiple-input and multiple-output (MIMO) Space–time block coding based transmit diversity (STTD) Space–time code Space–time trellis code Differential space–time code References Wireless
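Relating to Alamouti's code and the linear decoding discussion above, the following minimal Python sketch simulates one Alamouti block over a flat-fading channel with a single receive antenna and recovers both symbols with the standard linear combining step; the channel gains, noise level and symbols are hypothetical values chosen for illustration.

import random

def alamouti_demo(s1, s2, h1, h2, noise=0.05):
    # transmit (s1, s2) with the Alamouti scheme over channel gains h1, h2
    n = lambda: complex(random.gauss(0, noise), random.gauss(0, noise))
    r1 = h1 * s1 + h2 * s2 + n()                            # slot 1 sends ( s1,  s2 )
    r2 = -h1 * s2.conjugate() + h2 * s1.conjugate() + n()   # slot 2 sends (-s2*, s1*)
    gain = abs(h1) ** 2 + abs(h2) ** 2
    # linear combining, assuming the receiver knows the channel gains
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / gain
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / gain
    return s1_hat, s2_hat

estimates = alamouti_demo(complex(1, 1), complex(1, -1), complex(0.8, 0.3), complex(-0.2, 0.9))
print(estimates)   # close to (1+1j) and (1-1j) when the noise is small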
Space–time block code
[ "Engineering" ]
2,320
[ "Wireless", "Telecommunications engineering" ]
2,133,700
https://en.wikipedia.org/wiki/Coulomb%20blockade
In mesoscopic physics, a Coulomb blockade (CB), named after Charles-Augustin de Coulomb's electrical force, is the decrease in electrical conductance at small bias voltages of a small electronic device comprising at least one low-capacitance tunnel junction. Because of the CB, the conductance of a device may not be constant at low bias voltages, but disappear for biases under a certain threshold, i.e. no current flows. Coulomb blockade can be observed by making a device very small, like a quantum dot. When the device is small enough, electrons inside the device will create a strong Coulomb repulsion preventing other electrons from flowing. Thus, the device will no longer follow Ohm's law and the current-voltage relation of the Coulomb blockade looks like a staircase. Even though the Coulomb blockade can be used to demonstrate the quantization of the electric charge, it remains a classical effect and its main description does not require quantum mechanics. However, when few electrons are involved and an external static magnetic field is applied, Coulomb blockade provides the ground for a spin blockade (like Pauli spin blockade) and valley blockade, which include quantum mechanical effects due to spin and orbital interactions respectively between the electrons. The devices can comprise either metallic or superconducting electrodes. If the electrodes are superconducting, Cooper pairs (with a charge of minus two elementary charges, −2e) carry the current. In the case that the electrodes are metallic or normal-conducting, i.e. neither superconducting nor semiconducting, electrons (with a charge of −e) carry the current. In a tunnel junction The following section is for the case of tunnel junctions with an insulating barrier between two normal conducting electrodes (NIN junctions). The tunnel junction is, in its simplest form, a thin insulating barrier between two conducting electrodes. According to the laws of classical electrodynamics, no current can flow through an insulating barrier. According to the laws of quantum mechanics, however, there is a nonvanishing (larger than zero) probability for an electron on one side of the barrier to reach the other side (see quantum tunnelling). When a bias voltage is applied, this means that there will be a current, and, neglecting additional effects, the tunnelling current will be proportional to the bias voltage. In electrical terms, the tunnel junction behaves as a resistor with a constant resistance, also known as an ohmic resistor. The resistance depends exponentially on the barrier thickness. Typically, the barrier thickness is on the order of one to several nanometers. An arrangement of two conductors with an insulating layer in between not only has a resistance, but also a finite capacitance. The insulator is also called dielectric in this context; the tunnel junction behaves as a capacitor. Due to the discreteness of electrical charge, current through a tunnel junction is a series of events in which exactly one electron passes (tunnels) through the tunnel barrier (we neglect cotunneling, in which two electrons tunnel simultaneously). The tunnel junction capacitor is charged with one elementary charge by the tunnelling electron, causing a voltage build up U = e/C, where C is the capacitance of the junction. If the capacitance is very small, the voltage build up can be large enough to prevent another electron from tunnelling. The electric current is then suppressed at low bias voltages and the resistance of the device is no longer constant. 
The increase of the differential resistance around zero bias is called the Coulomb blockade. Observation In order for the Coulomb blockade to be observable, the temperature has to be low enough so that the characteristic charging energy (the energy that is required to charge the junction with one elementary charge) is larger than the thermal energy of the charge carriers. In the past, for capacitances above 1 femtofarad (10^−15 farad), this implied that the temperature has to be below about 1 kelvin. This temperature range is routinely reached for example by helium-3 refrigerators. Thanks to small-sized quantum dots of only a few nanometres, Coulomb blockade has been observed at temperatures from just above liquid helium temperature up to room temperature. To make a tunnel junction in plate condenser geometry with a capacitance of 1 femtofarad, using an oxide layer with a relative permittivity of 10 and a thickness of one nanometer, one has to create electrodes with dimensions of approximately 100 by 100 nanometers. This range of dimensions is routinely reached for example by electron beam lithography and appropriate pattern transfer technologies, like the Niemeyer–Dolan technique, also known as the shadow evaporation technique. The integration of quantum dot fabrication with standard industrial technology has been achieved for silicon: a CMOS process for the mass production of single-electron quantum dot transistors with channel size down to 20 nm x 20 nm has been implemented. Single-electron transistor The simplest device in which the effect of Coulomb blockade can be observed is the so-called single-electron transistor. It consists of two electrodes known as the drain and the source, connected through tunnel junctions to one common electrode with a low self-capacitance, known as the island. The electrical potential of the island can be tuned by a third electrode, known as the gate, which is capacitively coupled to the island. In the blocking state no accessible energy levels are within tunneling range of an electron (in red) on the source contact. All energy levels on the island electrode with lower energies are occupied. When a positive voltage is applied to the gate electrode the energy levels of the island electrode are lowered. The electron (green 1.) can tunnel onto the island (2.), occupying a previously vacant energy level. From there it can tunnel onto the drain electrode (3.) where it inelastically scatters and reaches the drain electrode Fermi level (4.). The energy levels of the island electrode are evenly spaced with a separation of ΔE. This gives rise to a self-capacitance C of the island, defined as C = e^2/ΔE. To achieve the Coulomb blockade, three criteria have to be met: The bias voltage must be lower than the elementary charge divided by the self-capacitance of the island: V_bias < e/C; The thermal energy in the source contact plus the thermal energy in the island, i.e. k_B T, must be below the charging energy: k_B T < e^2/C, or else the electron will be able to pass the QD via thermal excitation; and The tunneling resistance R_t should be greater than h/e^2, which is derived from Heisenberg's uncertainty principle. Coulomb blockade thermometer A typical Coulomb blockade thermometer (CBT) is made from an array of metallic islands, connected to each other through a thin insulating layer. A tunnel junction forms between the islands, and as voltage is applied, electrons may tunnel across this junction. The tunneling rates and hence the conductance vary according to the charging energy of the islands as well as the thermal energy of the system. 
The Coulomb blockade thermometer is a primary thermometer based on the electric conductance characteristics of tunnel junction arrays. The parameter V½ ≈ 5.439 N k_BT/e, the full width at half minimum of the measured differential conductance dip over an array of N junctions, together with the physical constants, provides the absolute temperature. Ionic Coulomb blockade Ionic Coulomb blockade (ICB) is a special case of CB, appearing in the electro-diffusive transport of charged ions through sub-nanometer artificial nanopores or biological ion channels. ICB is widely similar to its electronic counterpart in quantum dots, but presents some specific features defined by the possibly different valence z of the charge carriers (permeating ions vs electrons) and by the different origin of the transport mechanism (classical electrodiffusion vs quantum tunnelling). In the case of ICB, the Coulomb gap is defined by the dielectric self-energy of the incoming ion inside the pore/channel and hence depends on the ion valence z. ICB appears strongly, even at room temperature, for ions with z ≥ 2, e.g. for Ca^2+ ions. ICB has recently been experimentally observed in sub-nanometer MoS2 pores. In biological ion channels ICB typically manifests itself in such valence selectivity phenomena as conduction bands (as the fixed charge is varied) and concentration-dependent divalent blockade of sodium current. See also Ionic Coulomb blockade Quantisation of charge Elementary charge References General Single Charge Tunneling: Coulomb Blockade Phenomena in Nanostructures, eds. H. Grabert and M. H. Devoret (Plenum Press, New York, 1992) D. V. Averin and K. K. Likharev, in Mesoscopic Phenomena in Solids, eds. B. L. Altshuler, P. A. Lee, and R. A. Webb (Elsevier, Amsterdam, 1991) External links Computational Single-Electronics book Coulomb blockade online lecture Nanoelectronics Quantum electronics Mesoscopic physics
Coulomb blockade
[ "Physics", "Materials_science" ]
1,890
[ "Quantum electronics", "Quantum mechanics", "Condensed matter physics", "Nanoelectronics", "Nanotechnology", "Mesoscopic physics" ]
41,543,593
https://en.wikipedia.org/wiki/Gastrointestinal%20wall
The gastrointestinal wall of the gastrointestinal tract is made up of four layers of specialised tissue. From the inner cavity of the gut (the lumen) outwards, these are the mucosa, the submucosa, the muscular layer and the serosa or adventitia. The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the lumen of the tract and comes into direct contact with digested food (chyme). The mucosa itself is made up of three layers: the epithelium, where most digestive, absorptive and secretory processes occur; the lamina propria, a layer of connective tissue, and the muscularis mucosae, a thin layer of smooth muscle. The submucosa contains nerves including the submucous plexus (also called Meissner's plexus), blood vessels and elastic fibres with collagen that stretch with increased capacity but maintain the shape of the intestine. The muscular layer surrounds the submucosa. It comprises layers of smooth muscle in longitudinal and circular orientation that also help with continued bowel movements (peristalsis) and the movement of digested material out of and along the gut. In between the two layers of muscle lies the myenteric plexus (also called Auerbach's plexus). The serosa/adventitia are the final layers. These are made up of loose connective tissue and coated in mucus so as to prevent any friction damage from the intestine rubbing against other tissue. The serosa is present if the tissue is within the peritoneum, and the adventitia if the tissue is retroperitoneal. Structure When viewed under the microscope, the gastrointestinal wall has a consistent general form, but with certain parts differing along its course. Mucosa The mucosa is the innermost layer of the gastrointestinal tract. It surrounds the cavity (lumen) of the tract and comes into direct contact with digested food (chyme). The mucosa is made up of three layers: The epithelium is the innermost layer. It is where most digestive, absorptive and secretory processes occur. The lamina propria, a layer of connective tissue within the mucosa. The muscularis mucosae, a thin layer of smooth muscle. The epithelium, the most exposed part of the mucosa, is a glandular epithelium with many goblet cells. Goblet cells secrete mucus, which lubricates the passage of food along and protects the intestinal wall from digestive enzymes. In the small intestine, villi are folds of the mucosa that increase the surface area of the intestine. The villi contain a lacteal, a vessel connected to the lymph system that aids in the removal of lipids and tissue fluids. Microvilli are present on the epithelium of a villus and further increase the surface area over which absorption can take place. Numerous intestinal glands, as pocket-like invaginations, are present in the underlying tissue. In the large intestines, villi are absent and a flat surface with thousands of glands is observed. Underlying the epithelium is the lamina propria, which contains myofibroblasts, blood vessels, nerves, and several different immune cells, and the muscularis mucosa which is a layer of smooth muscle that aids in the action of continued peristalsis and catastalsis along the gut. Cells of the small intestinal mucosa Epithelium The epithelial lining of the mucosa differs along the gastrointestinal tract. The epithelium is described as stratified if it consists of multiple layers of cells, and simple if it is made up of one layer of cells. Terms used to describe the shape of the cells are columnar if column-shaped, and squamous if flat. 
In the oesophagus, pharynx and external anal canal the epithelium is stratified, squamous and non-keratinising, for protective purposes. In the stomach, the epithelium is simple columnar, and is organised into gastric pits and glands to deal with secretion. In the small intestine, epithelium is simple columnar and specialised for absorption. It is organised into plicae circulares and villi, and the enterocytes have microvilli. The microvilli create a brush border that increases the area for absorption. In the ileum there are occasionally Peyer's patches in the lamina propria. Brunner's glands are found in the duodenum but not in other parts of the small intestine. In the colon, epithelium is simple columnar and without villi. Goblet cells, which secrete mucus, are also present. The appendix has a mucosa resembling the colon but is heavily infiltrated with lymphocytes. Transition between the different types of epithelium occurs at the junction between the oesophagus and stomach; between the stomach and duodenum, between the ileum and caecum, and at the pectinate line of the anus. Submucosa The submucosa consists of a dense and irregular layer of connective tissue with blood vessels, lymphatics, and nerves branching into the mucosa and muscular layer. It contains the submucous plexus, and enteric nervous plexus, situated on the inner surface of the muscular layer. Muscular layer The muscular layer consists of two layers of muscle, the inner and outer layer. The muscle of the inner layer is arranged in circular rings around the tract, whereas the muscle of the outer layer is arranged longitudinally. The stomach has an extra layer, an inner oblique muscular layer. Between the two muscle layers is the myenteric plexus (Auerbach's plexus). This controls peristalsis. Activity is initiated by the pacemaker cells (interstitial cells of Cajal). The gut has intrinsic peristaltic activity (basal electrical rhythm) due to its self-contained enteric nervous system. The rate can, of course, be modulated by the rest of the autonomic nervous system. The layers are not truly longitudinal or circular, rather the layers of muscle are helical with different pitches. The inner circular is helical with a steep pitch and the outer longitudinal is helical with a much shallower pitch. The coordinated contractions of these layers is called peristalsis and propels the food through the tract. Food in the GI tract is called a bolus (ball of food) from the mouth down to the stomach. After the stomach, the food is partially digested and semi-liquid, and is referred to as chyme. In the large intestine the remaining semi-solid substance is referred to as faeces. The circular muscle layer prevents food from travelling backward and the longitudinal layer shortens the tract. The thickness of the muscular layer varies in each part of the tract: In the colon, for example, the muscular layer is much thicker because the faeces are large and heavy and require more force to push along. The outer longitudinal layer of the colon thins out into 3 discontinuous longitudinal bands, known as taeniae coli (bands of the colon). This is one of the 3 features helping to distinguish between the large and small intestine. Occasionally in the large intestine (2-3 times a day), there will be mass contraction of certain segments, moving a lot of faeces along. This is generally when one gets the urge to defecate. The pylorus of the stomach has a thickened portion of the inner circular layer: the pyloric sphincter. 
Uniquely in the GI tract, the stomach has a third muscular layer. This is the inner oblique layer and helps churn the chyme in the stomach. Serosa and adventitia The outermost layer of the gastrointestinal wall consists of several layers of connective tissue and is either serosa (below the diaphragm) or adventitia (above the diaphragm). Regions of the gastrointestinal tract within the peritoneum (called intraperitoneal) are covered with serosa. This structure consists of connective tissue covered by a simple squamous epithelium, called the mesothelium, which reduces frictional forces during digestive movements. The intraperitoneal regions include most of the stomach, first part of the duodenum, all of the small intestine, caecum and appendix, transverse colon, sigmoid colon and rectum. In these sections of the gut there is a clear boundary between the gut and the surrounding tissue. These parts of the tract have a mesentery. Regions of the gastrointestinal tract behind the peritoneum (called retroperitoneal) are covered with adventitia. They blend into the surrounding tissue and are fixed in position (for example, the retroperitoneal section of the duodenum usually passes through the transpyloric plane). The retroperitoneal regions include the oral cavity, esophagus, pylorus of the stomach, distal duodenum, ascending colon, descending colon and anal canal. Clinical significance The gastrointestinal wall can be affected in a number of conditions. An ulcer is an erosion through the epithelium of the wall. Ulcers that affect the tract include peptic ulcers; a perforated ulcer is one that has eroded completely through the layers. The gastrointestinal wall is inflamed in a number of conditions. This is called esophagitis, gastritis, duodenitis, ileitis, and colitis depending on the parts affected. It can be due to infections or other conditions, including coeliac disease. Inflammatory bowel disease affects the layers of the gastrointestinal tract in different ways. Ulcerative colitis involves the colonic mucosa. Crohn's disease may produce inflammation in all layers in any part of the gastrointestinal tract and so can result in transmural fistulae. Invasion of tumours through the layers of the gastrointestinal wall is used in staging of tumour spread. This affects treatment and prognosis. The normal thickness of the small intestinal wall is 3–5 mm, and 1–5 mm in the large intestine. Focal, irregular and asymmetrical gastrointestinal wall thickening suggests a malignancy. Segmental or diffuse gastrointestinal wall thickening is most often due to ischemic, inflammatory or infectious disease. Additional images References Membrane biology Digestive system
Gastrointestinal wall
[ "Chemistry", "Biology" ]
2,297
[ "Digestive system", "Organ systems", "Membrane biology", "Molecular biology" ]
41,546,457
https://en.wikipedia.org/wiki/AC%20power%20plugs%20and%20sockets%3A%20British%20and%20related%20types
Plugs and sockets for electrical appliances not hardwired to mains electricity originated in the United Kingdom in the 1870s and were initially two-pin designs. These were usually sold as a mating pair, but gradually de facto and then official standards arose to enable the interchange of compatible devices. British standards have proliferated throughout large parts of the former British Empire. BS 1363, 13 A plugs socket-outlets adaptors and connection units is a British Standard which specifies the most common type of single-phase AC power plugs and sockets that are used in the United Kingdom. Distinctive characteristics of the system are shutters on the neutral and line (see below) socket holes, and a fuse in the plug. It has been adopted in many former British colonies and protectorates. BS 1363 was introduced in 1947 as one of the new standards for electrical wiring in the United Kingdom used for post-war reconstruction. The plug and socket replaced the BS 546 plug and socket, which are still found in old installations or in special applications. BS 1363 plugs have been designated as Type G in the IEC 60083 plugs and sockets standard. In the United Kingdom and in Ireland, this system is usually referred to simply as a "13 amp plug" or a "13 amp socket". BS 546, Two-pole and earthing-pin plugs, socket-outlets and socket-outlet adaptors for AC (50–60 Hz) circuits up to 250 V is an older British Standard for three-pin AC power plugs and sockets. Originally published in April 1934, it was updated by a 1950 edition which is still current, with eight amendments up to 1999. BS 546 is also the precursor of current Indian and South African plug standards. The 5 A version has been designated as Type D and the 15 A as Type M in the IEC 60083 plugs and sockets standard. BS 546 plugs and sockets are still permitted in the UK, provided the socket has shutters. In the United Kingdom and in Ireland this system is usually referred to by its pin shape and is simply known as "round pin plugs" or "round pin sockets". It is often associated with obsolete wiring installations; where it is found in modern wiring, it is confined to special use cases, particularly for switch-controlled lamps and stage lighting. Concepts and terminology The International Electrotechnical Commission publishes IEC 60050, the International Electrotechnical Vocabulary. Generally the plug is the movable connector attached to an electrically operated device's mains cable, and the socket is fixed on equipment or a building structure and connected to an energised electrical circuit. The plug has protruding pins (referred to as male) that fit into matching apertures (called female) in the sockets. A plug is defined in IEC 60050 as an "accessory having pins designed to engage with the contacts of a socket-outlet, also incorporating means for the electrical connection and mechanical retention of flexible cables or cords". A plug therefore does not contain components which modify the electrical output from the electrical input (except where a switch or fuse is provided as a means of disconnecting the output from input). There is an erroneous tendency to refer to power conversion devices with incorporated plug pins as plugs, but IEC 60050 refers to these as 'direct plug-in equipment' defined as "equipment in which the mains plug forms an integral part of the equipment enclosure so that the equipment is supported by the mains socket-outlet". In this article, the term 'plug' is used in the sense defined by IEC 60050. 
Sockets are designed to prevent exposure of bare energised contacts. To reduce the risk of users accidentally touching energized conductors and thereby experiencing electric shock, plug and socket systems often incorporate safety features in addition to the recessed contacts of the energized socket. These include plugs with insulated sleeves, sockets with blocking shutters, and sockets designed to accept only compatible plugs inserted in the correct orientation. The term plug is in general and technical use in all forms of English, common alternatives being power plug, electric plug, and plug top. The normal technical term for an AC power socket is socket-outlet, but in non-technical common use a number of other terms are used. The general term is socket, but there are numerous common alternatives, including power point, plug socket, wall socket, and wall plug. Modern British sockets for domestic use are normally manufactured as single or double units with an integral face plate and are designed to fit standard mounting boxes. Electrical sockets for single phase domestic, commercial and light industrial purposes generally provide three electrical connections to the supply conductors. These are termed neutral, line and earth. Both neutral and line carry current and are defined as live parts. Neutral is usually at or very near to earth potential, being earthed either at the substation or at the service entrance (neutral-to-earth bonding is not permitted in the distribution board/consumer unit). Line (commonly, but technically incorrectly, called live) carries the full supply voltage relative to the neutral. The protective earth connection allows the exposed metal parts of the appliance to be connected to earth, providing protection to the user should those exposed parts inadvertently come into contact with any live parts within the appliance. Historically, two-pin sockets without earth were used in Britain, but their use is now restricted to sockets specifically designated for shavers and toothbrushes. An adaptor (in the context of plugs and sockets) is defined in IEC 60050 as "a portable accessory constructed as an integral unit incorporating both a plug portion and one or more socket-outlet portions". (There is an alternative spelling, 'adapter', but adaptor is the form usually used in standards and official documents.) Common characteristics There are certain characteristics common to British mains plugs and sockets intended for domestic use. The brass pins appear relatively solid and large compared to others. British Standards for plugs (with the exception of BS 4573) have always specified side entry flex (entry in other types is usually parallel to the axes of the pins). Since 1934, the contacts of a socket have been specified in terms of the pins of the plug, rather than by specifying the contact dimensions. The pins of both round pin and rectangular pin plugs are arranged in a triangular fashion, the earth pin being the larger and longer pin at the apex. Earthed sockets are designed to be incompatible with two-pin plugs. Both BS 546 and BS 1363 sockets, when viewed from the front with the earth uppermost, have the line aperture at the lower right. British plugs and sockets regulatory system A Statutory Instrument, the Plugs and Sockets etc. (Safety) Regulations 1987, was introduced to specifically regulate plugs and sockets in the United Kingdom. This was revised by the Plugs and Sockets etc. (Safety) Regulations 1994. 
The guidance notes to the 1994 regulations state: The regulations include a requirement that all plug types must be tested and certified by a nominated approval body (normally BSI, ASTA-Intertek or NEMKO). They also require that all mains appliances for domestic use in the UK be supplied with approved BS 1363 plugs, but there is an exception for plugs fitted to shavers and toothbrushes which are normally a UK shaver plug (BS 4573) but may also be a Europlug (BS EN 50075). The regulations also contain a provision for the approval of non-BS 1363 conforming plugs when "the plugs are constructed using an alternative method of construction which provides an equivalent level of safety in respect of any risk of death or personal injury to plugs which conform to BS 1363 and is such that plugs of that type may reasonably be expected to be safe in use". Certifying bodies have used this provision by developing their own standards for novel devices, thus allowing the introduction of innovative developments; an example is the plastic ISOD (insulated shutter opening device) which was originally approved against either an ASTA Standard or the BSI PAS 003 (See NOTE) before becoming incorporated into BS 1363-1:1995 at the second amendment (AMD 14539) in 2003. NOTE: despite having a reference beginning 'PAS', PAS 003 was not a Publicly Available Specification but a BSI Product Approval Specification. There is no European Union regulation of domestic mains plugs and sockets; the Low Voltage Directive specifically excludes domestic plugs and sockets. As such, no relevant retained EU law applies. EU countries each have their own regulations and national standards and CE marking is neither applicable nor permitted on plugs and sockets. Despite this CE marking is sometimes fraudulently used, especially on universal sockets. Early history When electricity was first introduced into houses, it was primarily used for lighting. As electricity became a common method of operating labour-saving appliances, a safe means of connection to the electric system other than using a light socket was needed. According to British author John Mellanby the first plug and socket in England was introduced by T. T. Smith in 1883, and there were two-pin designs by 1885, one of which appears in the (British) General Electric Company catalogue of 1889. Gustav Binswanger, a German Jewish immigrant who founded the General Electric Company, obtained a patent (GB189516898) in 1895 for a plug and socket using a concentric (co-axial) contact system. The earthed consumer plug has several claimants to its invention. A 1911 book dealing with the electrical products of A. P. Lundberg & Sons of London describes the Tripin earthed plug available in 2.5 A and 5 A models. The pin configuration of the Tripin appears virtually identical to modern BS 546 plugs. In her 1914 book Electric cooking, heating, cleaning, etc Maud Lucas Lancaster mentions an earthed iron-clad plug and socket by the English firm of A. Reyrolle & Company. The 1911 General Electric Company (GEC) Catalogue included several earthed sockets intended for industrial use. British two-pin plugs and sockets The earliest domestic plug and socket is believed to be that patented by T. T. Smith in 1883. This was shortly followed by patents from W. B. Sayers and G. Hookham; these early designs had rectangular plugs with contact plates on either side. In 1885, two-pin plug designs appeared and in 1889 there were two-pin plugs and sockets in the GEC catalogue. 
The 1893 GEC Catalogue included three sizes of what was described as Double plug Sockets with capacities described not in amps, but as "1 to 5 lights", "5 to 10 lights" and "10 to 20 lights". These were clearly recognisable as two-pin plugs and sockets, but with no indication as to pin size or spacing, they were sold as pairs. The same catalogue included lampholder plugs for both BC and ES lampholders (capacity unspecified), and also a type of two-pole concentric plug and socket (similar to a very large version of the concentric connectors used for laptop PC power connections) in the "1 to 5 lights" and "5 to 10 lights" capacities. Crompton and Company introduced the first two-pin socket with protective shutters in 1893, and the Edison & Swan Company was also manufacturing two-pin plug and sockets in the 1890s. By the time the 1911 GEC Catalogue was published two-pin plugs and sockets were being offered with specifications in amps, but still with no indication as to pin size or spacing. The Midget Gauge was rated at 3 A, the Standard Gauge rated at 5 A, and the Union Gauge rated at 10 A. Also offered were two-way and three-way "T pieces" or multi-way adaptors for the 3 A and 5 A plugs, two-way only for the 10 A. Versions of the concentric plug and socket were now offered rated at 5 A and 10 A. At the same time Lundberg were offering the 2.5 A Dot, 5 A Universal, and 15 A Magnum, and Tucker were offering a range of 5 A, 10 A and 20 A plugs and sockets. BS 73 Wall plugs and sockets (five ampere two-pin without earthing connection) was first published in 1915, and revised in 1919 with the addition of 15 A and 30 A sizes. By the 1927 revision of BS 73 four sizes of two-pin plugs and sockets were standardized: 2 A, 5 A, 15 A and 30 A. This was later superseded by BS 372:1930 part 1 Two-pin Side-entry Wall Plugs And Sockets for Domestic Purposes. Following the introduction of BS 4573 in 1970 there were no longer any UK domestic uses for two-pin sockets except for shavers, so BS 372 was renamed "Two-pin Side-entry Wall Plugs And Sockets For Special Circuits" and subsequently withdrawn. BS 4573 (UK shaver) BS 4573 British Standard Specification for two-pin reversible plugs and shaver socket-outlets defines a plug for use with electric shavers. The pin dimensions are the same as those of the 5 A plug specified in the obsolete BS 372:1930 part 1 (as shown in the table above). Unlike the original, the plug has insulated sleeves on the pins. Electric toothbrushes in the UK are normally supplied with the same plug. The sockets for this plug are rated at (and limited to) 200 mA. BS 4573 has no explicit specification for the plug rating, but Sheet GB6 of IEC 60083 states that a rating of 0.2 A applies to all BS 4573 accessories. The BS 4573 socket is for use in rooms other than bathrooms. When installed in wet areas (e.g. bathrooms), for safety reasons it is normally found incorporated into a shaver supply unit which includes an isolation transformer and meets various mechanical and electrical characteristics specified by the BS EN 61558-2-5 safety standard to protect against shock in wet areas. Shaver supply units also typically accept a variety of 230 V two-pin plug types including BS 4573 and Australian two-pin plugs. The isolation transformer often includes a 115 V output that supplies a two-pin US Type A socket. Shaver supply units must also be current-limited; BS EN 61558-2-5 specifies a minimum rating of 20 VA and maximum of 50 VA. 
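To put the 20 VA to 50 VA limits in context, a short illustrative calculation (assuming a purely resistive load on the nominal 230 V supply) shows that the maximum permitted rating corresponds roughly to the 0.2 A figure quoted above for BS 4573 accessories:

    SUPPLY_VOLTS = 230.0      # nominal UK supply voltage
    MIN_RATING_VA = 20.0      # minimum shaver supply unit rating per BS EN 61558-2-5
    MAX_RATING_VA = 50.0      # maximum shaver supply unit rating per BS EN 61558-2-5

    # Current corresponding to each rating for a resistive load: I = S / V
    print(f"minimum rating: {MIN_RATING_VA / SUPPLY_VOLTS * 1000:.0f} mA")  # about 87 mA
    print(f"maximum rating: {MAX_RATING_VA / SUPPLY_VOLTS * 1000:.0f} mA")  # about 217 mA, close to the 0.2 A socket rating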
BS 4573 and BS EN 61558-2-5 both require sockets to be marked with the shaver symbol defined in the IEC Standard 60417-5225; the words "shavers only" are also often used but not required. The BS 4573 plug differs from the Europlug (Type C): the BS 4573 plug has round 5.1 mm contacts with a spacing of 16.7 mm, while the Europlug has 4 mm contacts with a spacing of 19 mm. In order to plug a Europlug into a BS 4573 socket, an adaptor is required; however, it may also be possible to force a Europlug into British sockets. British three-pin (round) plugs and sockets In the early 20th century, A. P. Lundberg & Sons of London manufactured the Tripin earthed plug available in 2.5 A and 5 A models. The Tripin is described in a 1911 book dealing with the electrical products of A. P. Lundberg & Sons and its pin configuration appears virtually identical to modern BS 546 plugs. The first British standard for domestic three-pin plugs was BS 317 Hand-Shield and Side Entry Pattern Three-Pin Wall Plugs and Sockets (Two Pin and Earth Type) published in 1928. This was superseded in 1930 by BS 372 Side-Entry Wall Plugs and Sockets for Domestic Purposes Part II which states that there are only minor alterations from BS 317. In 1934, BS 372 Part II was in turn superseded by the first edition of BS 546 Two-Pole and Earthing-Pin Plugs and Socket Outlets. BS 546:1934 clause 2 specifies interchangeability with BS 372 Part II which includes the same four plug and socket sizes. (BS 372 Part I was a standard for two-pin non-earthed plugs which were never included in BS 546 and which were incompatible due to different pin spacings.) Also in 1934 the 10th Edition of the IEE's "Regulations for the Electrical Equipment of Buildings" introduced the requirement for all sockets to have an earth contact. Prior to BS 546, British Standards for domestic plugs and sockets included dimensional specifications for the socket contact tubes. In BS 546 there are no dimensions for socket contacts; instead they are required to make good contact with the specified plug pins. Before the introduction of BS 317, GH Scholes (Wylex) introduced (in 1926) an alternative three-pin plug in three sizes, 5 A, 10 A and 15 A with a round earth pin and rectangular live and neutral pins. A fused 13 A version of this continued to be available after the introduction of BS 1363, illustrating that BS 546 was not used exclusively at any time. Although still permitted by the UK wiring regulations, BS 546 sockets are no longer used for general purposes. The type D 2 amp and 5 amp plugs are mainly used for lighting such as table lamps. Some of the type D and M varieties remain in use in other countries and in specialist applications in the UK such as stage lighting for the type M. When BS 546 was in common use domestically in the UK the standard did not require sockets to be shuttered, although many were. The current revision of the standard allows optional shutters similar to those of BS 1363. Current UK wiring regulations require socket outlets installed in homes to be shuttered. BS 546 There are four ratings of plug and socket in BS 546 (2 A, 5 A, 15 A and 30 A). Each has the same general appearance but they are different physical sizes to prevent interchangeability. They also have pin spacing which is different from the two-pin plugs specified in BS 372, so earthed plugs will not fit into unearthed sockets, and vice versa. 
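The dimensional differences described above — the BS 4573 and Europlug pin sizes, and the deliberately different BS 546 sizes — can be expressed as a simple compatibility check. The helper below is purely illustrative; the tolerance value is an assumption and does not come from any of the standards.

    # Nominal pin dimensions taken from the text above (millimetres)
    PLUG_DIMENSIONS = {
        "BS 4573 shaver plug": {"pin_diameter": 5.1, "pin_spacing": 16.7},
        "Europlug": {"pin_diameter": 4.0, "pin_spacing": 19.0},
    }

    def mates_properly(plug, socket, tolerance_mm=0.3):
        """Rough check: a plug mates properly only if its pin diameter and spacing match
        the values the socket was designed for, within a small (assumed) tolerance."""
        return (abs(plug["pin_diameter"] - socket["pin_diameter"]) <= tolerance_mm
                and abs(plug["pin_spacing"] - socket["pin_spacing"]) <= tolerance_mm)

    europlug = PLUG_DIMENSIONS["Europlug"]
    shaver_socket = PLUG_DIMENSIONS["BS 4573 shaver plug"]  # socket contacts sized for the BS 4573 plug
    print(mates_properly(europlug, shaver_socket))          # False: an adaptor is needed, as noted above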
Plugs fitted with BS 546 fuses have been optional since the original BS 546:1934, with maximum fuse ratings of 2 A in the 2 A plug, and 5 A in the 5 A, 15 A and 30 A plugs. In practice most BS 546 plugs are unfused, with fused versions being unusual and expensive. The 15 ampere (A) sockets were generally given a dedicated 15 A circuit. Multiple 5 A sockets might be on a 15 A circuit, or each on a dedicated 5 A circuit. Lighting circuits fused at 5 A were generally used to feed the 2 A sockets. Adaptors were available from 15 A down to 5 A and from 5 A down to 2 A, so in practice it was possible for an appliance with the smallest size of flex to be protected only by a 15 A fuse. This is a similar level of protection to that seen for portable appliances in other countries, but less than the protection offered by the BS 1363 fused plug. The larger top pin is the earth connection, the left-hand pin is neutral and the right-hand pin is line when looking at a socket or at the rear of a plug. 2-ampere This plug was used to connect low-power appliances (and to adaptors from the larger socket types). It is sometimes still used to connect lamps to a lighting circuit. 5-ampere This plug corresponds to Type D in the IEC table. In the UK it was used for moderate sized appliances, either on its own 5 A circuit or on a multi socket 15 A circuit, and also on many adaptors (both multi socket 5 A adaptors and adaptors that also had 15 A pins). This 5 A plug, along with its 2 A cousin, is sometimes used in the UK for centrally switched domestic lighting circuits, in order to distinguish them from normal power circuits; this is quite common in hotel rooms. This plug was also once used in theatrical installations for the same reasons as the 15 A model below. 15-ampere This plug corresponds to Type M in the IEC table. It is the largest in domestic use and is commonly used in the UK for indoor dimmable theatre and architectural lighting installations. 30-ampere The 30 A plug is the largest of the family. This was used for high power industrial equipment up to 7.2 kW, such as industrial kitchen appliances, or dimmer racks for stage lighting. Plugs and sockets were usually of an industrial waterproof design with a screw locking ring on the plug to hold it in the socket against waterproof seals, and sockets often had a screw cap chained to them to be used when no plug was inserted to keep them waterproof. Use of the BS 546 30 A plugs and sockets diminished through the 1970s as they were replaced with BS 4343 (which later became IEC 60309) industrial plugs and sockets. Characteristics of BS 546 three-pin plugs BS 546:1950 (current version confirmed October 2012) specifies pin dimensions only in decimal fractions of an inch; metric values are conversions. Note that the original lengths of the line and neutral pins on the 15 A and 5 A versions were slightly longer. BS 1363 three-pin (rectangular) plugs and sockets BS 1363 is a British Standard which specifies the common single-phase AC power plugs and sockets that are used in the United Kingdom. Distinctive characteristics of the system are shutters on the line and neutral socket holes, and a fuse in the plug. It has been adopted in many former British overseas territories. BS 1363 was introduced in 1947 as one of the new standards for electrical wiring in the United Kingdom used for post-war reconstruction. This plug corresponds to Type G in the IEC table. 
BS 1363 replaced the BS 546 plug and socket (which are still found in old installations or in special applications such as remotely switched lighting). Other exceptions to the use of BS 1363 plugs and sockets include equipment requiring more than 13 A, low-power portable equipment (such as shavers and toothbrushes) and mains-operated clocks. History In 1941 Lord Reith, then the minister of Works and Planning, established committees to investigate problems likely to affect the post-war rebuilding of Britain. One of these, the Electrical Installations Committee, was charged with the study of all aspects of electrical installations in buildings. Amongst its members was Dame Caroline Haslett, President of the Women's Engineering Society, Director of the Electrical Association for Women and an expert on safety in the home. Convened in 1942, the committee reported in 1944, producing one of a set of Post War Building Studies that guided reconstruction. The plug and socket-outlet system defined in BS 1363 is a result of one of the report's recommendations. Britain pre-war had used a combination of 2A, 5A, and 15A round pin sockets. In an appendix to the main report (July 1944), the committee proposed that a completely new socket-outlet with a fuse in the plug to protect an appliance's flexible cord should be adopted as the "all-purpose" one socket and plug domestic standard. The main report listed eight points to consider in deciding the design of the new standard. The first of these was stated as, "To ensure the safety of young children it is of considerable importance that the contacts of the socket-outlet should be protected by shutters or other like means, or by the inherent design of the socket-outlet." Others included flush-fitting as opposed to the 2A, 5A and 15A sockets which mainly protruded from the wall being fitted on a patress, a switch being optional, requirements for terminals, bottom entry for the cable, and contact design. The appendix added five further "points of technical detail" including requirements that plugs could not be inserted incorrectly, should be easy to withdraw, and should include a fuse. This requirement for a new system of plugs and sockets led to the publishing in 1947 of "British Standard 1363:1947 Fused-Plugs and Shuttered Socket-Outlets". One of the other recommendations in the report was the introduction of the final ring circuit system (often informally called a "ring main"). In this arrangement a cable connected to a fuse, or circuit breaker, in the distribution board was wired in sequence to a number of sockets before being terminated back at the distribution board, thus forming a final ring circuit. In the final ring circuit, each socket-outlet was supplied with current by conductors on both sides of the 'loop.' This contrasts with the radial circuit system (which is also used in the UK, often in the same installation) wherein a single cable runs out radially, like a spoke, from the distribution board to serve a number of sockets. Since the fuse or circuit breaker for a final ring circuit has to be rated for the maximum current the final ring could carry (30A or 32A for a breaker), additional protection is required at each socket-plug connection. Theoretically, such protection could have been designated either within the socket or within the plug. However, to ensure that this protection has a rating matched to the appliance flexible cord fitted to the plug, a fuse rated between 1A and 13A is incorporated into each plug. 
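The reasoning above — that the ring protection (30 A or 32 A) is far above what a light flexible cord can safely carry, so the plug fuse must be matched to the cord — can be sketched as a small selection routine. The flex ratings, example wattages and the restriction to the two preferred fuse values are illustrative assumptions, not requirements of the report or of BS 1363 itself.

    PREFERRED_FUSES_A = [3, 13]   # preferred BS 1362 ratings (other ratings up to 13 A also exist)

    def choose_plug_fuse(appliance_watts, flex_rating_amps, supply_volts=230.0):
        """Pick the smallest preferred fuse that carries the appliance current without
        exceeding the current-carrying capacity of the flexible cord (illustrative only)."""
        load_current = appliance_watts / supply_volts
        for fuse in PREFERRED_FUSES_A:
            if load_current <= fuse <= flex_rating_amps:
                return fuse
        raise ValueError("no suitable preferred fuse for this cord and load")

    print(choose_plug_fuse(500, 3))     # lamp on a light flex assumed rated 3 A   -> 3 A fuse
    print(choose_plug_fuse(2900, 13))   # 2.9 kW heater on a flex assumed rated 13 A -> 13 A fuse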
The fuse in the plug would be sized to protect the flexible cord for over-current. Wired connections may also be connected to the final ring, requiring to include a suitably rated fuse and switch. The final ring circuit in the UK requires the use of BS 1363 plugs and sockets. However, the BS 1363 system is not limited to use with final ring circuits being suitable for radial circuits. Chronology BS 1363 is periodically revised and with supplements and amendments issued between major revisions. BS 1363:1984 and earlier versions dealt only with 13 A plugs and sockets. From 1989 onwards the standard was rearranged into five parts as follows: Part 1: Rewirable and non-rewirable 13 A fused plugs Part 2: 13 A Switched and unswitched socket-outlets Part 3: Adaptors Part 4: 13 A fused connection units: switched and unswitched Part 5: 13 A fused conversion plugs The following chronology shows revisions, supplements and significant amendments. June 1947: BS 1363:1947 "Fused-Plugs and Shuttered Socket-Outlets" published. May 1950: BS 1363:1947 Amendment 3, title changed to "Specification for two-pole and earthing-pin fused-plugs and shuttered socket-outlets for A.C. circuits up to 250 Volts (not intended for use on D.C. circuits)". January 1957: BS 1363:1947 Amendment 5, added clause permitting operation of shutters by simultaneous insertion of two or more pins (in addition to original method using only earth pin). January 1957: BS 1363:1947 Supplement No. 1 added specification for surface mounted socket-outlets. 1957: Complementary standard published, BS 2814:1957 "Two-pole and earthing-pin flush-mounted 13-Amp switch socket-outlets for A.C. circuits up to 250 Volts". A separate standard specifying a switched version of the BS 1363 socket-outlet for use with BS 1363 plugs. December 1960: BS 1363:1947 Supplement No. 2, added specification for Resilient Plugs. December 1961: BS 2814:1957 Amendment 2, title simplified to "13 Ampere Switch Socket-Outlets". 1962: BS 2814:1957 Supplement No. 1 added specification for surface mounted switch outlets. September 1967: BS 1363:1967 "Specification for 13A plugs, switched and unswitched socket-outlets and boxes" published. This standard superseded both BS 1363:1947 and BS 2814:1957. Only 3 A and 13 A fuses are specified. Resilient Plugs are included. August 1984: BS 1363:1984 "Specification for 13 A fused plugs switched and unswitched socket-outlets" published. This standard superseded BS 1363:1967. Changes include the introduction of sleeved pins on Line and Neutral, metric dimensions replacing inches, specifications added for non-rewirable plugs and portable socket-outlets. The standard was aligned, where possible, with the proposed IEC standard for domestic plugs and socket-outlets. February 1989: BS 1363-3:1989 "13 A plugs socket-outlets and adaptors - Part 3: Specification for adaptors" published. This new standard covers adaptors for use with BS 1363 socket-outlets and includes conversion adaptors (those which accept plugs of a different type), multiway adaptors (those which accept more than one plug, which may or may not be of a different type) and shaver adaptors. All adaptors (except for those accepting not more than two BS 1363 plugs) require to be fused. All sockets, including those to other standards, must be shuttered. 1994: A Product Approval Specification, PAS 003:1994, "Non-Rewirable 13 A Plugs with Plastic Socket Shutter Opening Pins" published. 
PAS 003 allowed for the design and approval of plugs without earthing intended for class II applications only. This was superseded by BS 1363-1:1995 but the PAS was not withdrawn until 23 July 2013. February 1995: BS 1363-1:1995 "13 A plugs socket-outlets adaptors and connection units - Part 1: Specification for rewirable and non-rewirable 13 A fused plugs" published. This standard, together with BS 1363-2:1995, supersedes BS 1363:1984. The provisions of PAS 003 are incorporated, but the plastic pin is redesignated as an "ISOD". September 1995: BS 1363-2:1995 "13 A plugs socket-outlets adaptors and connection units - Part 2: Specification for 13 A switched and unswitched socket-outlets" published. September 1995: BS 1363-3:1995 "13 A plugs socket-outlets adaptors and connection units - Part 3: Specification for adaptors" published. Supersedes BS 1363-3:1989. November 1995: BS 1363-4:1995 "13 A plugs socket-outlets adaptors and connection units - Part 4: Specification for 13 A fused connection units switched and unswitched" published. A new standard. August 2008: BS 1363-5:2008 "13 A plugs socket-outlets adaptors and connection units - Part 5: Specification for 13 A fused conversion plugs" published. A new standard. May 2012: BS 1363-1:1995 +A4:2012 (Title unchanged) published. This amended standard allows switches to be incorporated into plugs, and introduced new overload tests amongst others. BS 1363-1:1995 remained current until 31 May 2015. May 2012: BS 1363-2:1995 +A4:2012 (Title unchanged) published. This amended standard adds a requirement that it shall not be possible to operate a shutter by the insertion of a two-pin Europlug, and introduced new temperature rise tests amongst others. BS 1363-2:1995 remained current until 31 May 2015. May 2012: BS 1363-4:1995 +A4:2012 (Title unchanged) published. Minor changes to BS 1363-4:1995 which remained current until 31 May 2015. November 2012: BS 1363-3:1995 +A4:2012 (Title unchanged) published. This amended standard adds a requirement that it shall not be possible to operate a shutter by the insertion of a two-pin Europlug, and added specifications for switched adaptors amongst others. BS 1363-3:1995 remained current until 31 December 2015. August 2016: BS 1363-1:2016 (Title unchanged) published. Added requirements for incorporated electronic components and for electric vehicle charging. BS 1363-1:1995 +A4:2012 remained current until 31 August 2019. August 2016: BS 1363-2:2016 (Title unchanged) published. Added requirements for incorporated electronic components and for electric vehicle charging. BS 1363-2:1995 +A4:2012 remained current until 31 August 2019. August 2016: BS 1363-3:2016 (Title unchanged) published. Added requirements for incorporated electronic components. BS 1363-3:1995 +A4:2012 remained current until 31 August 2019. August 2016: BS 1363-4:2016 (Title unchanged) published. Minor changes only. BS 1363-4:1995 +A4:2012 remained current until 31 August 2019. August 2016: BS 1363-5:2016 (Title unchanged) published. Minor changes only. BS 1363-5:2008 remained current until 31 August 2019. February 2018: BS 1363-1:2016 +A1:2018 (Title unchanged) published. BS 1363‑1:2016 is withdrawn and BS 1363‑1:1995+A4:2012 remained current until 31 August 2019. BS 1363-1 Rewirable and non-rewirable 13 A fused plugs A BS 1363 plug has two horizontal, rectangular pins for line and neutral, and above these pins, a larger, vertical pin for an earth connection. Both line and neutral carry current and are defined as live parts. 
The earth pin also serves to operate the basic shutter mechanism used in many sockets. Correct polarity is established by the position of the earth pin relative to the other two pins, ensuring that each pin is connected to the correct terminal in the socket-outlet. Moulded plugs for unearthed, double-insulated appliances may instead have a non-conductive plastic pin (an Insulated Shutter Opening Device or ISOD) the same size and shape as an earth pin, to open the shutters. When looking at the plug pins with the earth uppermost the lower left pin is line, and the lower right is neutral. UK consumer protection legislation requires that most domestic electrical goods sold must be provided with fitted plugs to BS 1363-1. These are usually, but not necessarily, non-rewirable. Rewirable plugs for hand-wiring with a screwdriver are commonly available and must be provided with instructions. Nominal dimensions BS 1363-1 specifies the dimensions of plug pins and their disposition with respect to each other in precise, absolute terms. The line and neutral pins have a rectangular cross section 6.4 mm by 4.0 mm, 17.7 mm long and with centres 22.2 mm apart. The protective-earth pin has a rectangular cross section 8.0 mm by 4.0 mm, 22.3 mm long and with a centre line 22.2 mm from the line/neutral pin centre line. The dimensions were originally specified in decimal inches with asymmetric tolerances and redefined as minimum and maximum metric dimensions in BS 1363:1984. Dimensions are chosen to provide safe clearance to live parts. The distance from any part of the line and neutral pins to the periphery of the plug base must be not less than 9.5 mm. This ensures that nothing can be inserted alongside a pin when the plug is in use and helps keep fingers away from the pins. The longer earth pin ensures that the earth path is connected before the live pins, and that it remains connected until after the live pins are disconnected. The earth pin is too large to be inserted into the line or neutral sockets by mistake. Pin insulation Initially, BS 1363 did not require the line and neutral pins to have insulating sleeves. Plugs made to the recent revisions of the standard have insulated sleeves to prevent finger contact with pins, and also to stop metal objects (for example, fallen window blind slats) from becoming live if lodged between the wall and a partly pulled out plug. The length of the sleeves prevents any live contacts from being exposed while the plug is being inserted or removed. An early method of sleeving the pins involving spring-loaded sleeves is described in the 1967 British Patent GB1067870. The method actually adopted is described in the 1972 British Patent GB1292991. Plugs with such pins were available in the 1970s; a Southern Electricity/RoSPA safety pamphlet from 1978 encourages their use. Sleeved pins became required by the standard in 1984. Fuses (BS 1362) There are two common misconceptions about the purpose of the fuse in a BS 1363 plug: one is that it protects the appliance connected to the plug, and the other is that it protects against overloading. In fact the fuse is there to protect the flexible cord between the plug and the appliance under fault conditions (typical British ring circuits can deliver more current than appliance flexible power cords can handle). BS 1363 plugs are required to carry a cartridge fuse, which must conform to BS 1362. 
Post-War Building Studies No. 11, Electrical Installations included the recommendation that "Provision should be made in the plug for the accommodation of a cartridge type of fuse for 13 amps., and alternatively, for 3 amps. Fuses of these ratings should be interchangeable and be readily identified." The original BS 1363:1947 specified fuse ratings of 3 A, 7 A and 13 A. The current version of the fuse standard, BS 1362:1973, allows any fuse rating up to 13 A, with 3 A (coloured red) and 13 A (coloured brown) as the preferred (but not mandated) values when used in a plug. All other ratings are to be coloured black. Most common in consumer retail outlets are fuses rated 3, 5 and 13 A; professional suppliers also commonly stock fuses rated 1, 2, 7, and 10 A. Fuses are mechanically interchangeable; it is up to the end-user or appliance manufacturer to install a fuse of the appropriate rating. More appropriate lower-capacity fuses are now supplied with some plugs instead. BS 1362 specifies sand-filled ceramic-bodied cylindrical fuses, nominally 1 in (25.4 mm) long and 1/4 in (6.3 mm) in diameter, with a metallic end cap at each end. The standard specifies breaking time versus current characteristics only for 3 A or 13 A fuses. For 3 A fuses: 0.02–80 s at 9 A, < 0.1 s at 20 A and < 0.03 s at 30 A. For 13 A fuses: 1–400 s at 30 A, 0.1–20 s at 50 A and 0.01–0.2 s at 100 A. Other safety features The plug sides are shaped to improve grip and make it easier to remove the plug from a socket-outlet. The plug is polarised, so that the fuse is in the line side of the supply. The flexible cord always enters the plug from the bottom, discouraging removal by tugging on the cable, which can damage the cable. Rewirable plugs must be designed so that they can be wired in a manner which prevents strain to the earth connection before the line and neutral connection in the event of failure of the cord anchorage. Ratings BS 1363 plugs and sockets are rated for use at a maximum of 250 V ac and 13 A, with the exception of non-rewirable plugs which have a current rating according to the type of cable connected to them and the fuse fitted. The rating must be marked on the plug, and in the case of non-rewirable plugs the marking must be the value of the fuse fitted by the plug manufacturer in accordance with table 2 of the standard. Typical ratings for non-rewirable plugs are 3 A, 5 A, 10 A and 13 A. Counterfeits and non-standard plugs Plugs which do not meet BS 1363 often find their way into the UK. Some of these are legal in the country they are manufactured in, but do not meet BS 1363 – these can be brought into the UK by unsuspecting travellers, or people purchasing electrical goods online. They can also be purchased through many UK electrical component distributors. There are also counterfeit plugs which appear to meet the standards (and are marked as such) but do not in fact comply. Legislation was introduced, with the last revision in 1994, to require plugs sold to meet the technical standard. Counterfeit products are regularly seized when found, to enforce the safety standards and to protect the approval marks and trademarks of imitated manufacturers. The pressure group PlugSafe reported in March 2014 that since August 2011 "thousands" of listings of products including illegal plugs had been removed from the UK sections of the websites eBay and Amazon Marketplace. The UK Electrical Safety Council expressed shock at the magnitude of the problem and published a video showing a plug exploding due to a counterfeit BS 1362 fuse. 
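One symptom of a counterfeit fuse is that it does not respect the BS 1362 breaking-time limits quoted above. The sketch below simply encodes those published limits and compares a measured disconnection time against them; the test routine itself is an illustrative assumption, and only the numeric limits are taken from the text.

    # Breaking-time limits in seconds at the stated test currents in amps, as quoted above.
    # (None, x) means "must break in under x seconds"; (a, b) means "must break within a to b seconds".
    BS1362_LIMITS = {
        3:  {9: (0.02, 80), 20: (None, 0.1), 30: (None, 0.03)},
        13: {30: (1, 400), 50: (0.1, 20), 100: (0.01, 0.2)},
    }

    def within_limits(rating_amps, test_current_amps, measured_seconds):
        low, high = BS1362_LIMITS[rating_amps][test_current_amps]
        if low is not None and measured_seconds < low:
            return False
        return measured_seconds <= high

    print(within_limits(13, 30, 120.0))  # True: 120 s lies inside the 1-400 s window
    print(within_limits(3, 30, 0.5))     # False: a 3 A fuse must break in under 0.03 s at 30 A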
The Institution of Engineering and Technology also published information on the extent of the problem with on-line retailers, many advertising replacement cord sets, mobile device chargers, and travel adaptors fraudulently marked BS 1363, and mentioning the same sites. BS 1363-2 13 A switched and unswitched socket-outlets BS 1363 sockets are commonly supplied with integral switches as a convenience, but switches are optional and did not form part of BS 1363 until 1967. Sockets are required to mate correctly with BS 1363 plugs (as opposed to the dimensions of the socket contacts being specified). This is checked by means of the use of various gauges which are specified in the standard; these gauges ensure that the socket contacts are correctly positioned and make effective and secure contact with the plug pins. There is no provision for establishing the interchangeability with any other device having plug pins incorporated, but which is not covered by BS 1363 (for example a charger or socket cover), unless that device conforms precisely to the plug pin dimensions specified. The insertion of non-compliant plugs may damage sockets. The important socket dimensions which the standard does specify are: a minimum insertion depth of 9.6 mm from the face of the socket-outlet to the first point of contact with a live part, a minimum distance of 9.5 mm from the line and neutral apertures to the periphery of the socket face, and maximum aperture dimensions of 7.2 mm × 4.8 mm (line and neutral) and 8.8 mm × 4.8 mm (earth). When looking at the front of the socket with the earth aperture uppermost (as normally mounted) the lower left aperture is for the neutral contact, and the lower right is for the line contact. Shutters BS 1363 sockets must have shutters on the line and neutral contacts to prevent the insertion of a foreign object into the socket. Many sockets use the original method of shutters opened by the earth pin (or plastic ISOD) alone, which is longer than the other pins and hence opens the shutters before the other pins engage. Alternatively, shutters may be opened by simultaneous insertion of line and neutral pins. Some later designs require all three pins to be inserted simultaneously. The use of automatic shutters for protection dates back to at least 1927. Other countries, for example the USA, are gradually requiring their sockets to be protected by shutters. There is a specific requirement in the standard to ensure that Europlugs and other two-pin plugs may not be used with BS 1363 sockets: "It shall not be possible to operate a shutter by inserting a 2-pin plug into a 3-pin socket-outlet." However, many extension sockets will allow a plug to be inserted upside down, i.e. with only the earth pin inserted, defeating the shutter mechanism. This method is sometimes used to allow a Europlug (with two small round pins and no earth pin) to be forced into the open line and neutral ports. The UK Electrical Safety Council has drawn attention to the fire risk associated with forcing Europlugs into BS 1363 sockets. Socket covers In countries using un-shuttered socket-outlets, socket covers are sometimes sold to prevent children inserting objects into otherwise unprotected sockets. Such covers are also sometimes sold in the UK, but the shutters of the socket-outlet make these unnecessary. 
There has been publicity about the dangers of poor quality covers, most of which open the shutters when plugged in, but some of which then break apart on removal in a way that leaves the shutters open and the contact holes exposed, or some with poorly formed pins that can strain the contact springs and damage the socket. A 2012 article in the Institution of Engineering and Technology journal Wiring Matters concludes that "Socket protectors are not regulated for safety, therefore, using a non-standard system to protect a long established safe system is not sensible." In 2016 the use of socket covers was banned in premises controlled by the National Health Service (NHS) in the United Kingdom. BEAMA (British Electrotechnical and Allied Manufacturers Association) published the following statement in June 2017: "BEAMA strongly advises against the use of socket-outlet 'protective' covers." BS 1363-3 Adaptors Plug adaptors permit two or more plugs to share one socket-outlet, or allow the use of a plug of different type. There are several common types, including double- and triple-socket blocks, shaver adaptors, and multi-socket strips. Adaptors which allow the use of non-BS 1363 plugs, or more than two BS 1363 plugs, must be fused. Appliances are designed not to draw more power than their plug is rated for; the use of such adaptors, and also multi-socketed extension leads, makes it possible for several appliances to be connected through a single outlet, with the potential to cause dangerous overloads. Shaver adaptors The purpose of these adaptors is to accept the two-pin plugs of shavers; they are required to be marked as such. Shaver adaptors must have a 1 A BS 646 fuse. They must accept UK shaver plugs complying with BS 4573 and also Europlugs and American two-pin plugs. BS 1363-4 13 A fused connection units switched and unswitched Switched and unswitched fused connection units, without sockets, use BS 1362 fuses for connection of permanently wired appliances to a socket-outlet circuit. They are also used in other situations where a fuse or switch (or both) is required, such as when feeding lighting off a socket-outlet circuit, to protect spurs off a ring circuit with more than one socket-outlet, and sometimes to switch feeds to otherwise concealed sockets for kitchen appliances. BS 1363-5 13 A fused conversion plugs A conversion plug is a special type of plug suitable for the connection of non-BS 1363 type plugs (to a recognized standard) to BS 1363 sockets. An example would be Class II appliances from mainland Europe which are fitted with moulded europlugs. Similar converters are available for a variety of other plug types. Unlike a temporary travel adaptor, conversion plugs, when closed, resemble normal plugs, although larger and squarer. The non-BS 1363 plug is inserted into the contacts, and the hinged body of the conversion plug is closed and fixed shut to grip the plug. There must be an accessible fuse. Conversion plugs may be non-reusable (permanently closed) or reusable, in which case it must be impossible to open the conversion plug without using a tool. The Plugs and Sockets, etc. (Safety) Regulations 1994 permit domestic appliances fitted with non-BS 1363 plugs to be supplied in the UK with conversion plugs fitted, but not with conversion plugs supplied for fitting by the consumer. BS 1363 variations Folding plugs Due to the size of the BS 1363 plug, attempts have been made to develop a compatible folding plug. 
As of July 2014 two folding plugs have been certified under specially developed ASTA standards: SlimPlug, which complies with ASTA AS153, and ThinPlug, which complies with ASTA AS158. SlimPlug is available only as part of a complete power lead terminating in an IEC 60320 C7 unpolarized (figure-of-eight) connector. In 2009 the ThinPlug received a "Red Dot" award for product design. The first product, also a power lead terminating in an IEC 60320 C7 unpolarized (figure-of-eight) connector, became available in 2011. Variant pin configurations Several manufacturers have made deliberately incompatible variants for use where connection with standard plugs is not acceptable. Common uses include filtered supplies for computer equipment and cleaners' supplies in public buildings and areas (to prevent visitors plugging in unauthorised equipment). Examples are one design made by MK which has a T-shaped earth pin, and the Walsall Gauge 13 A plug, which has each pin rotated 90°, the latter being in use on parts of the London Underground for 110 V AC supply, and also in some British Rail offices for filtered computer supplies. BS 8546 travel adaptors compatible with UK plug and socket system BS 8546 applies to travel adaptors having at least one plug or socket-outlet portion compatible with BS 1363 plugs and socket-outlets. It was first published in April 2016 to provide a standard for travel adaptors suitable for the connection of a non-BS 1363 plug, or to a non-BS 1363 socket-outlet. It provides for an overall rating of 250 V AC, minimum current rating of 5 A, and a maximum of 13 A. Adaptors with BS 1363 plug pins must incorporate a BS 1362 fuse. BS 8546 travel adaptors may also include USB charging ports. UK electric clock connector Fused plugs and sockets of various proprietary and non-interchangeable types are found in older public buildings in the UK, where they are used to feed AC electric wall clocks. They are smaller than conventional sockets, commonly being made to fit BESA junction boxes, and are often of very low profile. Early types were available fused in both poles; later types fused in the line only and provided an earth pin. Most are equipped with a retaining screw or clip to prevent accidental disconnection. The prevalence of battery-powered quartz-controlled wall clocks has meant that this connector is rarely seen in new installations for clock use. However, it has found use where a low profile fused connector is required and is still available. A relatively common example of such a use is to supply power to an illuminated mirror that has limited clearance from the wall. Obsolete non-BS types Wylex plug Prior to the first British Standard for earthed plugs, George H. Scholes of Manchester introduced plugs with a hollow round earth pin between rectangular current-carrying pins in 1926 under the Wylex brand name. The Wylex plugs were initially made in three ratings, 5 A, 10 A and 15 A and were unpolarized (the current-carrying pins were on the same centre line as the earth pin). In 1933 an asymmetric polarized version was introduced, with line pin slightly offset from the centre line. In 1934 the dual plug system was introduced with the socket rated at 15 A and three sizes of plug, fused 2 A and 5 A plugs and a 15 A plug. The 15 A "dual plug" incorporated a socket with narrower apertures than a standard Wylex 15 A socket, that accepted only the narrow rectangular pins of the lower-rated plugs. 
The introduction of a 13 A fused plug, rated as 3 kW, enabled Scholes to propose their system as a possible solution for the new standard competing with the Dorman & Smith round pin solution, but it was not selected and the completely new BS 1363 design prevailed. Wylex sockets were used in council housing and public sector buildings and, for a short time, in private housing. They were particularly popular in the Manchester area, although they were installed throughout England, mainly in schools, university accommodation, and government laboratories. In some London schools built in the 1960s they were used as low-voltage AC sockets, typically 12 V, 5 A from a transformer serving one or more laboratories, for microscope lamps etc. Wylex plugs and sockets continued to be manufactured for several years after BS 1363 sockets became standard and were commonly used by banks and in computer rooms during the 1960s and 1970s for uninterruptible power supplies or "clean" filtered mains supplies. Dorman & Smith (D&S) Made by Dorman & Smith (using patents applied for in 1943) the plugs and sockets were rated at 13 A and were one of the competing types for use on ring final circuits. They were never popular in private houses but were widely deployed in prefabricated houses, council housing and LCC schools. The BBC also used them. Some local authorities continued to use them in new installations until the late 1950s. Many D&S sockets were still in use until the early 1980s, although the difficulty in obtaining plugs for them after around 1970 often forced their users to replace them with BS 1363 sockets. The D&S plug suffered from a serious design fault: the line pin was a fuse which screwed into the plug body and tended to come unscrewed on its own in use. A fuse that worked loose could end up protruding from the socket, electrically live and posing a shock hazard, when the plug was removed. International usage of BS types Standards derived from BS 546 Indian IS 1293 Indian standard IS 1293:2005 Plugs and Socket-Outlets of Rated Voltage up to and including 250 Volts and Rated Current up to and including 16 Amperes includes versions of the 5 A and 15 A BS 546 connectors, but they are rated at 6 A and 16 A respectively. Some 6 A 3 pin sockets also have two extra holes above the line and neutral holes to allow a 5 A 2-pin plug to be connected. Malaysian Standard MS 1577 MS 1577:2003 15 A plugs and socket-outlets for domestic and similar purposes Russian GOST 7396 The 2 A, 5 A, and 15 A connectors of BS 546 are duplicated by Group B1 of the GOST 7396 standard. Singapore Standard SS 472 SS 472:1999 15 A plugs and switched socket-outlets for domestic and similar purposes. Also used in the Indonesian Riau Islands. South African SANS 164 The South African standard SANS 164 Plug and socket-outlet systems for household and similar purposes for use in South Africa defines a number of derivatives of BS 546. A household plug and socket is defined in SANS 164-1, and is essentially a modernised version of the BS 546 15 A (the essential differences are that pins can be hollowed to reduce the amount of metal used, the dimensions are metricated, and it is rated 16 A). SANS 164-3 defines a 6 A plug and socket based on the BS 546 5 A. The South African Wiring Code now defines the plug and socket system defined in SANS 164-2 (IEC 60906-1) as the preferred standard, and it is expected that SANS 164-1 and SANS 164-3 devices will be phased out by around 2035. 
SANS 164-4 defines three variants of the 16 A plug and socket intended for specialist (known as "dedicated") applications. The variants use a flattened earth pin, each at a different specified rotational position. This arrangement ensures that the dedicated plugs can all plug into an ordinary ("non-dedicated") socket, but that the various dedicated plug and socket combinations are not interchangeable (nor can a non-dedicated plug be inserted into a dedicated socket). The dedicated versions have specific colours assigned to them, depending on the rotational position of the flattened portion. These are black (−53°), red (0°), and blue (+53°). The red (0°) version is by far the most common, and is widely used on computer and telecommunication equipment (although this is not required in the standard). In this application the "dedicated" socket refers to one that is not connected to a residual current circuit breaker, which is otherwise mandated for all normal power sockets. International usage of Type D The IEC World Plugs lists Type D as being used in the following locations: Bangladesh, Bhutan, Botswana, Chad, DR Congo, Dominica, French Guiana, Ghana, Guadeloupe, Guyana, Hong Kong, India, Iraq, Jordan, Lebanon, Libya, Macau, Madagascar, Maldives, Martinique, Monaco, Myanmar, Namibia, Nepal, Niger, Nigeria, Pakistan, Qatar, Saint Kitts and Nevis, Senegal, Sierra Leone, South Africa, Sri Lanka, Sudan, Tanzania, United Arab Emirates, Yemen, Zambia, Zimbabwe. International usage of Type M This plug is often used for air conditioners and washing machines. The IEC World Plugs lists Type M as being used in the following locations: Bhutan, Botswana, Eswatini, India, Israel, Lesotho, Macau, Malaysia, Mozambique, Namibia, Nepal, Pakistan, Singapore, South Africa, Sri Lanka. Standards derived from BS 1363 Irish I.S. 401 Irish Standard 401:1997 Safety requirements for rewirable and non-rewirable 13 A fused plugs for normal and rough use having insulating sleeves on live and neutral pins is the equivalent of BS 1363 in Ireland. The use of this standard is enforced by consumer protection legislation which requires that most domestic electrical goods sold in Ireland be fitted with an I.S. 401 plug. Malaysian Standard MS 589 MS 589 parts 1,2,3 and 4 correspond to BS 1363-1, BS 1363-2, BS 1363-3 and BS 1363-4. Russian GOST 7396 Group B2 of the GOST 7396 standard describes BS 1363 plugs and sockets. Saudi Arabian Standard SASO 2203:2003 SASO 2203:2003 Plugs and socket-outlets for household and similar general use 220 V Singapore Standard SS 145 SS 145-1:2010 Specification for 13 A plugs and socket-outlets - Part 1 : Rewirable and non-rewirable 13 A fused plugs SS 145-2:2010 Specification for 13 A plugs and socket-outlets - Part 2 : 13 A switched and unswitched socket-outlets International usage of Type G The IEC World Plugs lists Type G as being used in the following locations (outside the UK): Bahrain Bangladesh Belize Bhutan Botswana Brunei Darussalam Cambodia Cyprus Dominica Falkland Islands Gambia Ghana Gibraltar Grenada Guyana Hong Kong Iraq Ireland Isle of Man Jordan Kenya Kuwait Lebanon Macau Malawi Malaysia Maldives Malta Mauritius Myanmar Nigeria Oman Pakistan Qatar Saint Kitts and Nevis Saint Lucia Saint Vincent and the Grenadines Saudi Arabia Seychelles Sierra Leone Singapore Solomon Islands Sri Lanka Tanzania Uganda United Arab Emirates Vanuatu Yemen Zambia Zimbabwe Although not listed, this type of plug is also used in the Channel Islands of Guernsey and Jersey. 
See also AC power plugs and sockets Mains electricity by country References Electric power in the United Kingdom Electrical safety in the United Kingdom Electrical standards Electrical wiring Mains power connectors
AC power plugs and sockets: British and related types
[ "Physics", "Engineering" ]
12,212
[ "Electrical standards", "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
41,551,965
https://en.wikipedia.org/wiki/C16H26N2O4
The molecular formula C16H26N2O4 (molar mass: 310.39 g/mol, exact mass: 310.1893 u) may refer to: Cetamolol Pamatolol Molecular formulas
C16H26N2O4
[ "Physics", "Chemistry" ]
64
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
30,764,139
https://en.wikipedia.org/wiki/Affine%20plane
In geometry, an affine plane is a two-dimensional affine space. Definitions There are two ways to formally define affine planes, which are equivalent for affine planes over a field. The first way consists in defining an affine plane as a set on which a vector space of dimension two acts simply transitively. Intuitively, this means that an affine plane is a vector space of dimension two in which one has "forgotten" where the origin is. The second way occurs in incidence geometry, where an affine plane is defined as an abstract system of points and lines satisfying a system of axioms. Coordinates and isomorphism All the affine planes defined over a field are isomorphic. More precisely, the choice of an affine coordinate system (or, in the real case, a Cartesian coordinate system) for an affine plane over a field induces an isomorphism of affine planes between and . In the more general situation, where the affine planes are not defined over a field, they will in general not be isomorphic. Two affine planes arising from the same non-Desarguesian projective plane by the removal of different lines may not be isomorphic. Examples Typical examples of affine planes are Euclidean planes, which are affine planes over the reals equipped with a metric, the Euclidean distance. In other words, an affine plane over the reals is a Euclidean plane in which one has "forgotten" the metric (that is, one does not talk of lengths nor of angle measures). Vector spaces of dimension two, in which the zero vector is not considered as different from the other elements. For every field or division ring , the set of the pairs of elements of . The result of removing any single line (and all the points on this line) from any projective plane. Applications In the applications of mathematics, there are often situations where an affine plane without the Euclidean metric is used instead of the Euclidean plane. For example, in a graph, which can be drawn on paper, and in which the position of a particle is plotted against time, the Euclidean metric is not adequate for its interpretation, since the distances between its points or the measures of the angles between its lines have, in general, no physical importance (in the affine plane the axes can use different units, which are not comparable, and the measures also vary with different units and scales). Notes References Sources Planes (geometry) Mathematical physics
Affine plane
[ "Physics", "Mathematics" ]
498
[ "Applied mathematics", "Theoretical physics", "Mathematical objects", "Infinity", "Planes (geometry)", "Mathematical physics" ]
30,766,907
https://en.wikipedia.org/wiki/Reversible%20cellular%20automaton
A reversible cellular automaton is a cellular automaton in which every configuration has a unique predecessor. That is, it is a regular grid of cells, each containing a state drawn from a finite set of states, with a rule for updating all cells simultaneously based on the states of their neighbors, such that the previous state of any cell before an update can be determined uniquely from the updated states of all the cells. The time-reversed dynamics of a reversible cellular automaton can always be described by another cellular automaton rule, possibly on a much larger neighborhood. Several methods are known for defining cellular automata rules that are reversible; these include the block cellular automaton method, in which each update partitions the cells into blocks and applies an invertible function separately to each block, and the second-order cellular automaton method, in which the update rule combines states from two previous steps of the automaton. When an automaton is not defined by one of these methods, but is instead given as a rule table, the problem of testing whether it is reversible is solvable for block cellular automata and for one-dimensional cellular automata, but is undecidable for other types of cellular automata. Reversible cellular automata form a natural model of reversible computing, a technology that could lead to ultra-low-power computing devices. Quantum cellular automata, one way of performing computations using the principles of quantum mechanics, are often required to be reversible. Additionally, many problems in physical modeling, such as the motion of particles in an ideal gas or the Ising model of alignment of magnetic charges, are naturally reversible and can be simulated by reversible cellular automata. Properties related to reversibility may also be used to study cellular automata that are not reversible on their entire configuration space, but that have a subset of the configuration space as an attractor that all initially random configurations converge towards. As Stephen Wolfram writes, "once on an attractor, any system—even if it does not have reversible underlying rules—must in some sense show approximate reversibility." Examples One-dimensional automata A cellular automaton is defined by its cells (often a one- or two-dimensional array), a finite set of values or states that can go into each cell, a neighborhood associating each cell with a finite set of nearby cells, and an update rule according to which the values of all cells are updated, simultaneously, as a function of the values of their neighboring cells. The simplest possible cellular automata have a one-dimensional array of cells, each of which can hold a binary value (either 0 or 1), with each cell having a neighborhood consisting only of it and its two nearest cells on either side; these are called the elementary cellular automata. If the update rule for such an automaton causes each cell to always remain in the same state, then the automaton is reversible: the previous state of all cells can be recovered from their current states, because for each cell the previous and current states are the same. Similarly, if the update rule causes every cell to change its state from 0 to 1 and vice versa, or if it causes a cell to copy the state from a fixed neighboring cell, or if it causes it to copy a state and then reverse its value, it is necessarily reversible. 
These types of reversible cellular automata, in which the state of each cell depends only on the previous state of one neighboring cell, have been called "trivial". Despite its simplicity, the update rule that causes each cell to copy the state of a neighboring cell is important in the theory of symbolic dynamics, where it is known as the shift map. A little less trivially, suppose that the cells again form a one-dimensional array, but that each state is an ordered pair (l, r) consisting of a left part l and a right part r, each drawn from a finite set of possible values. Define a transition function that sets the left part of a cell to be the left part of its left neighbor and the right part of a cell to be the right part of its right neighbor. That is, if the left neighbor's state is (a, b) and the right neighbor's state is (c, d), the new state of a cell is the result of combining these states using a pairwise operation • defined by the equation (a, b) • (c, d) = (a, d). This construction can be pictured by representing the left part of each pair graphically as a shape and the right part as a color; each cell is then updated with the shape of its left neighbor and the color of its right neighbor. Then this automaton is reversible: the values on the left side of each pair migrate rightwards and the values on the right side migrate leftwards, so the prior state of each cell can be recovered by looking for these values in neighboring cells. The operation • used to combine pairs of states in this automaton forms an algebraic structure known as a rectangular band. Multiplication of decimal numbers by two or by five can be performed by a one-dimensional reversible cellular automaton with ten states per cell (the ten decimal digits). Each digit of the product depends only on a neighborhood of two digits in the given number: the digit in the same position and the digit one position to the right. More generally, multiplication or division of doubly infinite digit sequences in any radix b, by a multiplier or divisor all of whose prime factors are also prime factors of b, is an operation that forms a cellular automaton because it depends only on a bounded number of nearby digits, and is reversible because of the existence of multiplicative inverses. Multiplication by other values (for instance, multiplication of decimal numbers by three) remains reversible, but does not define a cellular automaton, because there is no fixed bound on the number of digits in the initial value that are needed to determine a single digit in the result. There are no nontrivial reversible elementary cellular automata. However, a near-miss is provided by Rule 90 and other elementary cellular automata based on the exclusive or function. In Rule 90, the state of each cell is the exclusive or of the previous states of its two neighbors. This use of the exclusive or makes the transition rule locally invertible, in the sense that any contiguous subsequence of states can be generated by this rule. Rule 90 is not a reversible cellular automaton rule, because in Rule 90 every assignment of states to the complete array of cells has exactly four possible predecessors, whereas reversible rules are required to have exactly one predecessor per configuration. 
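The paired-state construction just described can be made concrete in a few lines of code. The following sketch is illustrative only: the particular cell values, the array length and the use of periodic boundary conditions are assumptions of the sketch rather than part of the construction itself.

```python
# Sketch of the paired-state ("rectangular band") reversible automaton described above.
# Each cell holds a pair (left, right); the update takes the left part from the left
# neighbour and the right part from the right neighbour.  Periodic boundaries are assumed.
import random

def step(cells):
    n = len(cells)
    return [(cells[(i - 1) % n][0], cells[(i + 1) % n][1]) for i in range(n)]

def inverse_step(cells):
    # Left parts migrated rightwards and right parts leftwards, so the inverse
    # simply gathers them back from the opposite neighbours.
    n = len(cells)
    return [(cells[(i + 1) % n][0], cells[(i - 1) % n][1]) for i in range(n)]

random.seed(1)
config = [(random.choice("ABC"), random.choice("xyz")) for _ in range(10)]
assert inverse_step(step(config)) == config   # every configuration has a unique predecessor
assert step(inverse_step(config)) == config
```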
Critters' rule Conway's Game of Life, one of the most famous cellular automaton rules, is not reversible: for instance, it has many patterns that die out completely, so the configuration in which all cells are dead has many predecessors, and it also has Garden of Eden patterns with no predecessors. However, another rule called "Critters" by its inventors, Tommaso Toffoli and Norman Margolus, is reversible and has similar dynamic behavior to Life. The Critters rule is a block cellular automaton in which, at each step, the cells of the automaton are partitioned into 2×2 blocks and each block is updated independently of the other blocks. Its transition function flips the state of every cell in a block that does not have exactly two live cells, and in addition rotates by 180° blocks with exactly three live cells. Because this function is invertible, the automaton defined by these rules is a reversible cellular automaton. When started with a smaller field of random cells centered within a larger region of dead cells, many small patterns similar to Life's glider escape from the central random area and interact with each other. The Critters rule can also support more complex spaceships of varying speeds as well as oscillators with infinitely many different periods. Constructions Several general methods are known for constructing cellular automaton rules that are automatically reversible. Block cellular automata A block cellular automaton is an automaton in which, in each time step, the cells of the automaton are partitioned into congruent subsets (called blocks), and the same transformation is applied independently to each block. Typically, such an automaton will use more than one partition into blocks, and will rotate between these partitions at different time steps of the system. In a frequently used form of this design, called the Margolus neighborhood, the cells of the automaton form a square grid and are partitioned into larger 2 × 2 square blocks at each step. The center of a block at one time step becomes the corner of four blocks at the next time step, and vice versa; in this way, the four cells in each 2 × 2 block belong to four different 2 × 2 squares of the previous partition. The Critters rule discussed above is an example of this type of automaton. Designing reversible rules for block cellular automata, and determining whether a given rule is reversible, is easy: for a block cellular automaton to be reversible it is necessary and sufficient that the transformation applied to the individual blocks at each step of the automaton is itself reversible. When a block cellular automaton is reversible, the time-reversed version of its dynamics can also be described as a block cellular automaton with the same block structure, using a time-reversed sequence of partitions of cells into blocks, and with the transition function for each block being the inverse function of the original rule. 
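Because reversibility of a block cellular automaton reduces to invertibility of its block transformation, the Critters rule described above can be verified mechanically. The following sketch encodes a 2 × 2 block as a tuple and checks that the transformation, as described in words here (flip every cell of a block without exactly two live cells, and additionally rotate a block with exactly three live cells by 180°), is a bijection on the sixteen possible blocks; the encoding and the check are assumptions of the sketch.

```python
from itertools import product

def critters_block(block):
    # block is a tuple (a, b, c, d) giving the 2x2 cells in reading order:
    #   a b
    #   c d
    a, b, c, d = block
    live = a + b + c + d
    if live != 2:
        a, b, c, d = 1 - a, 1 - b, 1 - c, 1 - d   # flip every cell
    if live == 3:
        a, b, c, d = d, c, b, a                   # rotate the block by 180 degrees
    return (a, b, c, d)

all_blocks = list(product((0, 1), repeat=4))
images = [critters_block(b) for b in all_blocks]

# The block transformation is a bijection on the 16 possible blocks,
# so the block automaton built from it is reversible.
assert len(set(images)) == 16
inverse = {img: b for b, img in zip(all_blocks, images)}   # lookup table for the time-reversed rule
```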
Simulation of irreversible automata Toffoli showed how to embed any irreversible d-dimensional cellular automaton rule into a reversible (d + 1)-dimensional rule. Each d-dimensional slice of the new reversible rule simulates a single time step of the original rule. In this way, Toffoli showed that many features of irreversible cellular automata, such as the ability to simulate arbitrary Turing machines, could also be extended to reversible cellular automata. As Toffoli conjectured and Hertling later proved, the increase in dimension incurred by Toffoli's method is a necessary payment for its generality: under mild assumptions (such as the translation-invariance of the embedding), any embedding of a cellular automaton that has a Garden of Eden into a reversible cellular automaton must increase the dimension. Morita describes another type of simulation that does not obey Hertling's assumptions and does not change the dimension. Morita's method can simulate the finite configurations of any irreversible automaton in which there is a "quiescent" or "dead" state, such that if a cell and all its neighbors are quiescent then the cell remains quiescent in the next step. The simulation uses a reversible block cellular automaton of the same dimension as the original irreversible automaton. The information that would be destroyed by the irreversible steps of the simulated automaton is instead sent away from the configuration into the infinite quiescent region of the simulating automaton. This simulation does not update all cells of the simulated automaton simultaneously; rather, the time to simulate a single step is proportional to the size of the configuration being simulated. Nevertheless, the simulation accurately preserves the behavior of the simulated automaton, as if all of its cells were being updated simultaneously. Using this method it is possible to show that even one-dimensional reversible cellular automata are capable of universal computation. Second-order cellular automata The second-order cellular automaton technique is a method of transforming any cellular automaton into a reversible cellular automaton, invented by Edward Fredkin and first published by several other authors in 1984. In this technique, the state of each cell in the automaton at time t + 1 is a function both of its neighborhood at time t and of its own state at time t − 1. Specifically, the transition function of the automaton maps each neighborhood at time t to a permutation on the set of states, and then applies that permutation to the state at time t − 1. The reverse dynamics of the automaton may be computed by mapping each neighborhood to the inverse permutation and proceeding in the same way. In the case of automata with binary-valued states (zero or one), there are only two possible permutations on the states (the identity permutation and the permutation that swaps the two states), which may themselves be represented as the exclusive or of a state with a binary value. In this way, any conventional two-valued cellular automaton may be converted to a second-order cellular automaton rule by using the conventional automaton's transition function on the states at time t, and then computing the exclusive or of these states with the states at time t − 1 to determine the states at time t + 1. However, the behavior of the reversible cellular automaton determined in this way may not bear any resemblance to the behavior of the cellular automaton from which it was defined. Any second-order automaton may be transformed into a conventional cellular automaton, in which the transition function depends only on the single previous time step, by combining pairs of states from consecutive time steps of the second-order automaton into single states of a conventional cellular automaton. 
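For two-state rules, the exclusive-or construction described above is simple enough to demonstrate directly. The sketch below applies it to elementary Rule 90 on a small periodic array; the choice of rule, the array size and the random starting rows are assumptions made for illustration.

```python
import random

def rule90(row):
    # Each cell becomes the exclusive or of its two neighbours (periodic boundaries).
    n = len(row)
    return [row[(i - 1) % n] ^ row[(i + 1) % n] for i in range(n)]

def forward(a, b):
    # rows at times t-1 and t  ->  rows at times t and t+1
    return b, [x ^ y for x, y in zip(rule90(b), a)]

def backward(b, c):
    # rows at times t and t+1  ->  rows at times t-1 and t
    return [x ^ y for x, y in zip(rule90(b), c)], b

random.seed(0)
a = [random.randint(0, 1) for _ in range(16)]
b = [random.randint(0, 1) for _ in range(16)]
state = (a, b)
for _ in range(25):
    state = forward(*state)
for _ in range(25):
    state = backward(*state)
assert state == (a, b)        # the second-order ("Rule 90R") dynamics are exactly reversible
```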
Conserved landscape A one-dimensional cellular automaton found by Patt uses a neighborhood consisting of four contiguous cells. In this automaton, a cell flips its state whenever it occupies the "?" position in the pattern "0?10". No two such patterns can overlap, so the same "landscape" surrounding the flipped cell continues to be present after the transition. In the next step, the cell in the same "?" position will flip again, back to its original state. Therefore, this automaton is its own inverse, and is reversible. Patt performed a brute force search of all two-state one-dimensional cellular automata with small neighborhoods; this search led to the discovery of this automaton, and showed that it was the simplest possible nontrivial one-dimensional two-state reversible cellular automaton. There are no nontrivial reversible two-state automata with three-cell neighborhoods, and all two-state reversible automata with four-cell neighborhoods are simple variants on Patt's automaton. Patt's automaton can be viewed in retrospect as an instance of the "conserved landscape" technique for designing reversible cellular automata. In this technique, a change to the state of a cell is triggered by a pattern among a set of neighbors that do not themselves change states. In this way, the existence of the same pattern can be used to trigger the inverse change in the time-reversed dynamics of the automaton. Patt's automaton has very simple dynamics (all cyclic sequences of configurations have length two), but automata using the same conserved landscape technique with more than one triggering pattern are capable of more complex behavior. In particular they can simulate any second-order cellular automaton. The SALT model is a special case of the conserved landscape technique. In this model, the cells of an integer grid are split into even and odd subsets. In each time step certain pairs of cells of one parity are swapped, based on the configuration of nearby cells of the other parity. Rules using this model can simulate the billiard ball computer, or support long strings of live cells that can move at many different speeds or vibrate at many different frequencies. Theory A cellular automaton consists of an array of cells, each one of which has a finite number of possible states, together with a rule for updating all cells simultaneously based only on the states of neighboring cells. A configuration of a cellular automaton is an assignment of a state to every cell of the automaton; the update rule of a cellular automaton forms a function from configurations to configurations, with the requirement that the updated value of any cell depends only on some finite neighborhood of the cell, and that the function is invariant under translations of the input array. With these definitions, a cellular automaton is reversible when it satisfies any one of the following conditions, all of which are mathematically equivalent to each other: Every configuration of the automaton has a unique predecessor that is mapped to it by the update rule. The update rule of the automaton is a bijection; that is, a function that is both one-to-one and onto. The update rule is an injective function, that is, there are no two configurations that both map to the same common configuration. This condition is obviously implied by the assumption that the update rule is a bijection. In the other direction, the Garden of Eden theorem for cellular automata implies that every injective update rule is bijective. The time-reversed dynamics of the automaton can be described by another cellular automaton. Clearly, for this to be possible, the update rule must be bijective. 
In the other direction, if the update rule is bijective, then it has an inverse function that is also bijective. This inverse function must be a cellular automaton rule. The proof of this fact uses the Curtis–Hedlund–Lyndon theorem, a topological characterization of cellular automata rules as the translation-invariant functions that are continuous with respect to the Cantor topology on the space of configurations. The update rule of the automaton is an automorphism of the shift dynamical system defined by the state space and the translations of the lattice of cells. That is, it is a homeomorphism that commutes with the shift map, as the Curtis–Hedlund–Lyndon theorem implies. Several alternative definitions of reversibility for cellular automata have also been analyzed. Most of these turn out to be equivalent either to injectivity or to surjectivity of the transition function of the automaton; however, there is one more alternative that does not match either of these two definitions. It applies to automata such as the Game of Life that have a quiescent or dead state. In such an automaton, one can define a configuration to be "finite" if it has only finitely many non-quiescent cells, and one can consider the class of automata for which every finite configuration has at least one finite predecessor. This class turns out to be distinct from both the surjective and injective automata, and in some subsequent research, automata with this property have been called invertible finite automata. Testing reversibility It was first shown that the problem of testing reversibility of a given one-dimensional cellular automaton has an algorithmic solution. Alternative algorithms based on automata theory and de Bruijn graphs were later given by Culik and by Sutner, respectively. Culik begins with the observation that a cellular automaton has an injective transition function if and only if the transition function is injective on the subsets of configurations that are periodic (repeating the same substring infinitely often in both directions). He defines a nondeterministic finite-state transducer that performs the transition rule of the automaton on periodic strings. This transducer works by remembering the neighborhood of the automaton at the start of the string and entering an accepting state when that neighborhood concatenated to the end of the input would cause its nondeterministically chosen transitions to be correct. Culik then swaps the input and output of the transducer. The transducer resulting from this swap simulates the inverse dynamics of the given automaton. Finally, Culik applies previously known algorithms to test whether the resulting swapped transducer maps each input to a single output. Sutner defines a directed graph (a type of de Bruijn graph) in which each vertex represents a pair of assignments of states for the cells in a contiguous sequence of cells. The length of this sequence is chosen to be one less than the neighborhood size of the automaton. An edge in Sutner's graph represents a pair of sequences of cells that overlap in all but one cell, so that the union of the sequences is a full neighborhood in the cellular automaton. Each such edge is directed from the overlapping subsequence on the left to the subsequence on the right. Edges are only included in the graph when they represent compatible state assignments on the overlapping parts of their cell sequences, and when the automaton rule (applied to the neighborhood determined by the potential edge) would give the same results for both assignments of states. 
By performing a linear-time strong connectivity analysis of this graph, it is possible to determine which of its vertices belong to cycles. The transition rule is non-injective if and only if this graph contains a directed cycle in which at least one vertex has two differing state assignments. These methods take polynomial time, proportional to the square of the size of the state transition table of the input automaton. A related algorithm determines whether a given rule is surjective when applied to finite-length arrays of cells with periodic boundary conditions, and if so, for which lengths. For a block cellular automaton, testing reversibility is also easy: the automaton is reversible if and only if the transition function on the blocks of the automaton is invertible, and in this case the reverse automaton has the same block structure with the inverse transition function. However, for cellular automata with other neighborhoods in two or more dimensions, the problem of testing reversibility is undecidable, meaning that there cannot exist an algorithm that always halts and always correctly answers the problem. The proof of this fact, due to Kari, is based on the previously known undecidability of tiling the plane by Wang tiles, sets of square tiles with markings on their edges that constrain which pairs of tiles can fit edge-to-edge. Kari defines a cellular automaton from a set of Wang tiles, such that the automaton fails to be injective if and only if the given tile set can tile the entire plane. His construction uses the von Neumann neighborhood, and cells with large numbers of states. In the same paper, Kari also showed that it is undecidable to test whether a given cellular automaton rule of two or more dimensions is surjective (that is, whether it has a Garden of Eden). Reverse neighborhood size In a one-dimensional reversible cellular automaton with n states per cell, in which the neighborhood of a cell is an interval of m cells, the automaton representing the reverse dynamics has neighborhoods that consist of at most n^(m−1) − m + 1 cells. This bound is known to be tight for m = 2: there exist n-state reversible cellular automata with two-cell neighborhoods whose time-reversed dynamics forms a cellular automaton with neighborhood size exactly n − 1. For any integer n there are only finitely many two-dimensional reversible n-state cellular automata with the von Neumann neighborhood. Therefore, there is a well-defined function f(n) such that all reverses of n-state cellular automata with the von Neumann neighborhood use a neighborhood with radius at most f(n): simply let f(n) be the maximum, among all of the finitely many reversible n-state cellular automata, of the neighborhood size needed to represent the time-reversed dynamics of the automaton. However, because of Kari's undecidability result, there is no algorithm for computing f(n), and the values of this function must grow very quickly, more quickly than any computable function. Wolfram's classification A well-known classification of cellular automata by Stephen Wolfram studies their behavior on random initial conditions. For a reversible cellular automaton, if the initial configuration is chosen uniformly at random among all possible configurations, then that same uniform randomness continues to hold for all subsequent states. Thus it would appear that most reversible cellular automata are of Wolfram's Class 3: automata in which almost all initial configurations evolve pseudo-randomly or chaotically. 
However, it is still possible to distinguish among different reversible cellular automata by analyzing the effect of local perturbations on the behavior of the automaton. Making a change to the initial state of a reversible cellular automaton may cause changes to later states to remain only within a bounded region, to propagate irregularly but unboundedly, or to spread quickly; one-dimensional reversible cellular automaton rules exhibiting all three of these types of behavior are known. Later work by Wolfram identifies the one-dimensional Rule 37R automaton as being particularly interesting in this respect. When run on a finite array of cells with periodic boundary conditions, starting from a small seed of random cells centered within a larger empty neighborhood, it tends to fluctuate between ordered and chaotic states. However, with the same initial conditions on an unbounded set of cells its configurations tend to organize themselves into several types of simple moving particles. Abstract algebra Another way to formalize reversible cellular automata involves abstract algebra, and this formalization has been useful in developing computerized searches for reversible cellular automaton rules. Boykett defines a semicentral bigroupoid to be an algebraic structure consisting of a set S of elements and two operations • and ∘ on pairs of elements, satisfying the two equational axioms: for all elements x, y, and z in S, (x • y) ∘ (y • z) = y, and for all elements x, y, and z in S, (x ∘ y) • (y ∘ z) = y. For instance, this is true for the two operations in which operation • returns its right argument and operation ∘ returns its left argument. These axioms generalize the defining axiom (for a single binary operation) of a central groupoid. As Boykett argues, any one-dimensional reversible cellular automaton is equivalent to an automaton in rectangular form, in which the cells are offset a half unit at each time step, and in which both the forward and reverse evolution of the automaton have neighborhoods with just two cells, the cells a half unit away in each direction. If a reversible automaton has neighborhoods larger than two cells, it can be simulated by a reversible automaton with smaller neighborhoods and more states per cell, in which each cell of the simulating automaton simulates a contiguous block of cells in the simulated automaton. The two axioms of a semicentral bigroupoid are exactly the conditions required on the forward and reverse transition functions of these two-cell neighborhoods to be the reverses of each other. That is, every semicentral bigroupoid defines a reversible cellular automaton in rectangular form, in which the transition function of the automaton uses the • operation to combine the two cells of its neighborhood, and in which the ∘ operation similarly defines the reverse dynamics of the automaton. Every one-dimensional reversible cellular automaton is equivalent to one in this form. Boykett used this algebraic formulation as the basis for algorithms that exhaustively list all possible inequivalent reversible cellular automata. 
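The bigroupoid axioms can be checked by brute force over a finite set. In the sketch below, the two axioms are written as given above (their exact form is a reconstruction from the surrounding description and the central-groupoid analogy, so they should be read as an assumption), and the projection operations mentioned as an example are verified.

```python
from itertools import product

def is_semicentral_bigroupoid(elements, bullet, circ):
    # Axioms as described above (reconstructed):
    #   (x • y) ∘ (y • z) = y   and   (x ∘ y) • (y ∘ z) = y   for all x, y, z.
    for x, y, z in product(elements, repeat=3):
        if circ(bullet(x, y), bullet(y, z)) != y:
            return False
        if bullet(circ(x, y), circ(y, z)) != y:
            return False
    return True

# Example from the text: the first operation returns its right argument,
# the second returns its left argument.
right = lambda x, y: y
left = lambda x, y: x
print(is_semicentral_bigroupoid(range(4), right, left))   # True
```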
Conservation laws When researchers design reversible cellular automata to simulate physical systems, they typically incorporate into the design the conservation laws of the system; for instance, a cellular automaton that simulates an ideal gas should conserve the number of gas particles and their total momentum, for otherwise it would not provide an accurate simulation. However, there has also been some research on the conservation laws that reversible cellular automata can have, independent of any intentional design. The typical type of conserved quantity measured in these studies takes the form of a sum, over all contiguous subsets of k cells of the automaton, of some numerical function of the states of the cells in each subset. Such a quantity is conserved if, whenever it takes a finite value, that value automatically remains constant through each time step of the automaton, and in this case it is called a kth-order invariant of the automaton. For instance, recall the one-dimensional cellular automaton defined as an example from a rectangular band, in which the cell states are pairs of values (l, r) drawn from sets L and R of left values and right values, the left value of each cell moves rightwards at each time step, and the right value of each cell moves leftwards. In this case, for each left or right value of the band, one can define a conserved quantity, the total number of cells that have that value. These counts form a family of independent first-order invariants, and any first-order invariant can be represented as a linear combination of these fundamental ones. The conserved quantities associated with left values flow uniformly to the right at a constant rate: that is, if the number of cells whose left value equals a given value within some region of the line takes a certain value at time t, then it will take the same value for the correspondingly shifted region at time t + 1. Similarly, the conserved quantities associated with right values flow uniformly to the left. Any one-dimensional reversible cellular automaton may be placed into rectangular form, after which its transition rule may be factored into the action of an idempotent semicentral bigroupoid (a reversible rule for which regions of cells with a single state value change only at their boundaries) together with a permutation on the set of states. The first-order invariants for the idempotent lifting of the automaton rule (the modified rule formed by omitting the permutation) necessarily behave like the ones for a rectangular band: they have a basis of invariants that flow either leftwards or rightwards at a constant rate without interaction. The first-order invariants for the overall automaton are then exactly the invariants for the idempotent lifting that give equal weight to every pair of states that belong to the same orbit of the permutation. However, the permutation of states in the rule may cause these invariants to behave differently from in the idempotent lifting, flowing non-uniformly and with interactions. In physical systems, Noether's theorem provides an equivalence between conservation laws and symmetries of the system. However, for cellular automata this theorem does not directly apply, because instead of being governed by the energy of the system the behavior of the automaton is encoded into its rules, and the automaton is guaranteed to obey certain symmetries (translation invariance in both space and time) regardless of any conservation laws it might obey. Nevertheless, the conserved quantities of certain reversible systems behave similarly to energy in some respects. For instance, if different regions of the automaton have different average values of some conserved quantity, the automaton's rules may cause this quantity to dissipate, so that the distribution of the quantity is more uniform in later states. 
Using these conserved quantities as a stand-in for the energy of the system can allow it to be analyzed using methods from classical physics. Applications Lattice gas automata A lattice gas automaton is a cellular automaton designed to simulate the motion of particles in a fluid or an ideal gas. In such a system, gas particles move on straight lines with constant velocity, until undergoing elastic collision with other particles. Lattice gas automata simplify these models by only allowing a constant number of velocities (typically, only one speed and either four or six directions of motion) and by simplifying the types of collision that are possible. Specifically, the HPP lattice gas model consists of particles moving at unit velocity in the four axis-parallel directions. When two particles meet on the same line in opposite directions, they collide and are sent outwards from the collision point on the perpendicular line. This system obeys the conservation laws of physical gases, and produces simulations whose appearance resembles the behavior of physical gases. However, it was found to obey unrealistic additional conservation laws. For instance, the total momentum within any single line is conserved. In addition, the difference in behavior between axis-parallel and non-axis-parallel directions in this model (its anisotropy) is undesirably high. The FHP lattice gas model improves the HPP model by having particles moving in six different directions, at 60 degree angles to each other, instead of only four directions. In any head-on collision, the two outgoing particles are deflected at 60 degree angles from the two incoming particles. Three-way collisions are also possible in the FHP model and are handled in a way that both preserves total momentum and avoids the unphysical added conservation laws of the HPP model. Because the motion of the particles in these systems is reversible, they are typically implemented with reversible cellular automata. In particular, both the HPP and FHP lattice gas automata can be implemented with a two-state block cellular automaton using the Margolus neighborhood. Ising model The Ising model is used to model the behavior of magnetic systems. It consists of an array of cells, the state of each of which represents a spin, either up or down. The energy of the system is measured by a function that depends on the number of neighboring pairs of cells that have the same spin as each other. Therefore, if a cell has equal numbers of neighbors in the two states, it may flip its own state without changing the total energy. However, such a flip is energy-conserving only if no two adjacent cells flip at the same time. Cellular automaton models of this system divide the square lattice into two alternating subsets, and perform updates on one of the two subsets at a time. In each update, every cell that can flip does so. This defines a reversible cellular automaton which can be used to investigate the Ising model. 
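The checkerboard update of the Ising model described above can be expressed directly, and applying the same sub-lattice update twice returns the original configuration, which exhibits its reversibility. The grid size, the periodic boundaries and the ±1 spin encoding in the following sketch are illustrative assumptions.

```python
import random

def substep(spins, parity):
    """Flip every cell of the given checkerboard parity whose four neighbours
    are evenly split between up and down spins (an energy-neutral flip)."""
    n = len(spins)
    out = [row[:] for row in spins]
    for i in range(n):
        for j in range(n):
            if (i + j) % 2 != parity:
                continue
            s = (spins[(i - 1) % n][j] + spins[(i + 1) % n][j] +
                 spins[i][(j - 1) % n] + spins[i][(j + 1) % n])
            if s == 0:                       # two up and two down neighbours
                out[i][j] = -spins[i][j]
    return out

random.seed(2)
grid = [[random.choice((-1, 1)) for _ in range(8)] for _ in range(8)]
once = substep(grid, 0)
twice = substep(once, 0)
assert twice == grid    # the sub-lattice update is its own inverse
```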
Billiard-ball computation and low-power computing The billiard-ball computer was proposed by Fredkin and Toffoli as part of their investigations into reversible computing. A billiard-ball computer consists of a system of synchronized particles (the billiard balls) moving in tracks and guided by a fixed set of obstacles. When the particles collide with each other or with the obstacles, they undergo an elastic collision much as real billiard balls would do. The input to the computer is encoded using the presence or absence of particles on certain input tracks, and its output is similarly encoded using the presence or absence of particles on output tracks. The tracks themselves may be envisioned as wires, and the particles as being Boolean signals transported on those wires. When a particle hits an obstacle, it reflects from it. This reflection may be interpreted as a change in direction of the wire the particle is following. Two particles on different tracks may collide, forming a logic gate at their collision point. As Margolus showed, billiard-ball computers may be simulated using a two-state reversible block cellular automaton with the Margolus neighborhood. In this automaton's update rule, blocks with exactly one live cell rotate by 180°, blocks with two diagonally opposite live cells rotate by 90°, and all other blocks remain unchanged. These rules cause isolated live cells to behave like billiard balls, moving on diagonal trajectories. Connected groups of more than one live cell behave instead like the fixed obstacles of the billiard-ball computer. In an appendix, Margolus also showed that a three-state second-order cellular automaton using the two-dimensional Moore neighborhood could simulate billiard-ball computers. One reason to study reversible universal models of computation such as the billiard-ball model is that they could theoretically lead to actual computer systems that consume very low quantities of energy. According to Landauer's principle, irreversible computational steps require a certain minimal amount of energy per step, but reversible steps can be performed with an amount of energy per step that is arbitrarily close to zero. However, in order to perform computation using less energy than Landauer's bound, it is not good enough for a cellular automaton to have a transition function that is globally reversible: what is required is that the local computation of the transition function also be done in a reversible way. For instance, reversible block cellular automata are always locally reversible: the behavior of each individual block involves the application of an invertible function with finitely many inputs and outputs. The question of whether every reversible cellular automaton has a locally reversible update rule was raised early in this line of research; for one- and two-dimensional automata the answer is known to be positive, and it has also been shown that any reversible cellular automaton can be simulated by a (possibly different) locally reversible cellular automaton. However, the question of whether every reversible transition function is locally reversible remains open for dimensions higher than two. 
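The Margolus block rule for simulating billiard balls, described above, can likewise be checked for local reversibility by confirming that its block transformation is a bijection. The block encoding in the following sketch is an assumption; the rule itself follows the verbal description given above.

```python
from itertools import product

def bbm_block(block):
    # 2x2 block (a, b, c, d) laid out as:
    #   a b
    #   c d
    a, b, c, d = block
    live = a + b + c + d
    if live == 1:
        return (d, c, b, a)                       # rotate the block by 180 degrees
    if live == 2 and ((a and d) or (b and c)):    # two diagonally opposite live cells
        return (b, d, a, c)                       # rotate the block by 90 degrees
    return block                                  # all other blocks remain unchanged

blocks = list(product((0, 1), repeat=4))
images = [bbm_block(b) for b in blocks]
assert len(set(images)) == 16   # the block transformation is invertible, so the rule is reversible
```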
Synchronization The "Tron" rule of Toffoli and Margolus is a reversible block cellular automaton rule with the Margolus neighborhood. When a 2 × 2 block of cells all have the same state, all cells of the block change state; in all other cases, the cells of the block remain unchanged. As Toffoli and Margolus argue, the evolution of patterns generated by this rule can be used as a clock to synchronize any other rule on the Margolus neighborhood. A cellular automaton synchronized in this way obeys the same dynamics as the standard Margolus-neighborhood rule while running on an asynchronous cellular automaton. Encryption Kari proposed using multidimensional reversible cellular automata as an encryption system. In Kari's proposal, the cellular automaton rule would be the encryption key. Encryption would be performed by running the rule forward one step, and decryption would be performed by running it backward one step. Kari suggests that a system such as this may be used as a public-key cryptosystem. In principle, an attacker could not algorithmically determine the decryption key (the reverse rule) from a given encryption key (forward rule) because of the undecidability of testing reversibility, so the forward rule could be made public without compromising the security of the system. However, Kari did not specify which types of reversible cellular automaton should be used for such a system, or show how a cryptosystem using this approach would be able to generate encryption/decryption key pairs. An alternative encryption system has also been proposed, in which the encryption key determines the local rule for each cell of a one-dimensional cellular automaton. A second-order automaton based on that rule is run for several rounds on an input to transform it into an encrypted output. The reversibility property of the automaton ensures that any encrypted message can be decrypted by running the same system in reverse. In this system, keys must be kept secret, because the same key is used both for encryption and decryption. Quantum computing Quantum cellular automata are arrays of automata whose states and state transitions obey the laws of quantum dynamics. Quantum cellular automata were first suggested as a model of computation and have since been formalized in several ways. Several competing notions of these automata remain under research, many of which require that the automata constructed in this way be reversible. Physical universality It has been asked whether it is possible for a cellular automaton to be physically universal, meaning that, for any bounded region of the automaton's cells, it should be possible to surround that region with cells whose states form an appropriate support scaffolding that causes the automaton to implement any arbitrary transformation on sets of states within the region. Such an automaton must be reversible, or at least locally injective, because automata without this property have Garden of Eden patterns, and it is not possible to implement a transformation that creates a Garden of Eden. Schaeffer constructed a reversible cellular automaton that is physically universal in this sense. Schaeffer's automaton is a block cellular automaton with two states and the Margolus neighborhood, closely related to the automata for the billiard ball model and for the HPP lattice gas. However, the billiard ball model is not physically universal, as it can be used to construct impenetrable walls preventing the state within some region from being read and transformed. In Schaeffer's model, every pattern eventually decomposes into particles moving diagonally in four directions. Thus, his automaton is not Turing complete. However, Schaeffer showed that it is possible to surround any finite configuration by scaffolding that decays more slowly than it. After the configuration decomposes into particles, the scaffolding intercepts those particles, and uses them as the input to a system of Boolean circuits constructed within the scaffolding. These circuits can be used to compute arbitrary functions of the initial configuration. The scaffolding then translates the output of the circuits back into a system of moving particles, which converge on the initial region and collide with each other to build a copy of the transformed state. 
In this way, Schaeffer's system can be used to apply any function to any bounded region of the state space, showing that this automaton rule is physically universal. Notes References Cellular automata Reversible computing
Reversible cellular automaton
[ "Physics", "Mathematics" ]
8,676
[ "Physical quantities", "Time", "Recreational mathematics", "Reversible computing", "Cellular automata", "Spacetime" ]
21,674,054
https://en.wikipedia.org/wiki/FEE%20method
In mathematics, the FEE method, or fast E-function evaluation method, is a method of fast summation of series of a special form. It was constructed in 1990 by Ekaterina Karatsuba and is so named because it makes fast computations of the Siegel E-functions possible, in particular of the exponential function. A class of functions which are "similar to the exponential function" was given the name "E-functions" by Carl Ludwig Siegel. Among these functions are such special functions as the hypergeometric function, cylinder functions, spherical functions and so on. Using the FEE, it is possible to prove the following theorem: Theorem: Let f be an elementary transcendental function, that is, the exponential function, a trigonometric function, an elementary algebraic function, or a superposition of these, or their inverse, or a superposition of the inverses. Then the bit complexity s_f(n) of computing f with accuracy up to n digits satisfies s_f(n) = O(M(n) log² n), where M(n) is the complexity of multiplication of two n-digit integers. The algorithms based on the FEE method include algorithms for fast calculation of any elementary transcendental function for any value of the argument, of the classical constants e and π, the Euler constant γ, the Catalan and the Apéry constants, of such higher transcendental functions as the Euler gamma function and its derivatives, the hypergeometric, spherical and cylinder (including the Bessel) functions and some other functions for algebraic values of the argument and parameters, of the Riemann zeta function for integer values of the argument and the Hurwitz zeta function for integer argument and algebraic values of the parameter, and also of such special integrals as the integral of probability, the Fresnel integrals, the integral exponential function and the trigonometric integrals, and some other integrals for algebraic values of the argument, with a complexity bound which is close to the optimal one, namely O(M(n) log² n). The FEE makes it possible to calculate fast the values of the functions from the class of higher transcendental functions, certain special integrals of mathematical physics and such classical constants as Euler's, Catalan's and Apéry's constants. An additional advantage of the FEE method is the possibility of parallelizing the algorithms based on it. FEE computation of classical constants For fast evaluation of the constant π one can use the Euler formula π/4 = arctan(1/2) + arctan(1/3) and apply the FEE to sum the Taylor series for arctan(1/2) and arctan(1/3), whose remainder terms decrease geometrically and can be bounded explicitly. To calculate π by the FEE it is also possible to use other approximations. In all cases the complexity is O(M(n) log² n). To compute the Euler constant γ with accuracy up to n digits, it is necessary to sum two series by the FEE; the complexity is again O(M(n) log² n). To evaluate the constant γ fast it is also possible to apply the FEE to other approximations. FEE computation of certain power series By the FEE, two kinds of power series of hypergeometric type are calculated fast, under the assumption that the coefficients of the series are built from integers, the parameters occurring in them are constants, and the argument is an algebraic number. The complexity of the evaluation of these series is O(M(n) log² n). 
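The flavor of these algorithms can be seen in the evaluation of the constant e, which is worked through step by step in the next section: the partial sum of the series for e is evaluated by a binary-splitting process that combines summands in pairs and manipulates only integers until a single final division. The following sketch is a minimal illustration of that idea; the target precision handling, the choice of the number of terms and the use of arbitrary-precision Python integers are assumptions of the sketch rather than part of the method's formal description.

```python
def e_digits(n):
    """Return floor(e * 10**n), i.e. e truncated to n decimal digits,
    by binary splitting of the partial sum of 1/j!."""
    # Number of terms: N! > 10**(n+10) makes the truncated tail negligible.
    N, fact = 1, 1
    while fact <= 10 ** (n + 10):
        N += 1
        fact *= N

    def split(a, b):
        # Returns integers (p, q) with p/q = sum_{k=a+1..b} 1/((a+1)(a+2)...k)
        # and q = (a+1)(a+2)...b; only integers are handled until the final division.
        if b - a == 1:
            return 1, b
        m = (a + b) // 2
        p1, q1 = split(a, m)
        p2, q2 = split(m, b)
        return p1 * q2 + p2, q1 * q2

    p, q = split(0, N)            # p/q = sum_{k=1}^{N} 1/k!, with q = N!
    return (q + p) * 10 ** n // q  # e = 1 + p/q; one final division to n digits

print(e_digits(50))   # 2718281828459045235360287471352662497757...
```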
Combining in the summands sequentially in pairs we carry out of the brackets the "obvious" common factor and obtain We shall compute only integer values of the expressions in the parentheses, that is the values Thus, at the first step the sum is into At the first step integers of the form are calculated. After that we act in a similar way: combining on each step the summands of the sum sequentially in pairs, we take out of the brackets the 'obvious' common factor and compute only the integer values of the expressions in the brackets. Assume that the first steps of this process are completed. Step (). we compute only integers of the form Here is the product of integers. Etc. Step , the last one. We compute one integer value we compute, using the fast algorithm described above the value and make one division of the integer by the integer with accuracy up to digits. The obtained result is the sum or the constant up to digits. The complexity of all computations is See also Fast algorithms AGM method Computational complexity References External links http://www.ccas.ru/personal/karatsuba/divcen.htm http://www.ccas.ru/personal/karatsuba/algen.htm Numerical analysis Computer arithmetic algorithms Pi algorithms
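A minimal sketch of the pairwise (binary splitting) summation described above, written in Python; the function names and the way the precision is handled are ours rather than Karatsuba's, and Python's built-in big-integer arithmetic stands in for an asymptotically fast multiplication routine.

def split(a, b):
    # returns integers (P, Q) with P/Q = sum over k = a+1..b of 1/((a+1)(a+2)...k);
    # with a = 0 this is 1/1! + 1/2! + ... + 1/b!
    if b - a == 1:
        return 1, b
    m = (a + b) // 2
    p_left, q_left = split(a, m)
    p_right, q_right = split(m, b)
    # only integer values are formed, as in the step-by-step process above
    return p_left * q_right + p_right, q_left * q_right

def e_digits(n):
    # choose enough terms that the truncated tail of the series is below 10**(-n)
    terms, factorial = 1, 1
    while factorial < 10 ** (n + 1):
        terms += 1
        factorial *= terms
    p, q = split(0, terms)
    # one final division of large integers gives e to n digits after the point
    return (q + p) * 10 ** n // q

print(e_digits(30))   # 2718281828459045235360287471352, the leading digits of e

As in the process above, all intermediate quantities are integers, and the single division at the end is the only operation performed to the full target precision.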
FEE method
[ "Mathematics" ]
943
[ "Pi algorithms", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Pi", "Approximations" ]
21,676,935
https://en.wikipedia.org/wiki/State%20space%20enumeration
In computer science, state space enumeration refers to methods that consider each reachable program state to determine whether a program satisfies a given property. As programs increase in size and complexity, the state space grows exponentially. The state space used by these methods can be reduced by maintaining only the parts of the state space that are relevant to the analysis. However, the use of state and memory reduction techniques makes runtime a major limiting factor. See also Formal methods Model checking References Formal methods Logic in computer science Programming language implementation
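A minimal sketch, in Python, of the explicit enumeration idea described above: every reachable state is visited by breadth-first search and checked against the property. The function and parameter names are illustrative, not taken from any particular model checker.

from collections import deque

def check_property(initial_state, successors, holds):
    # successors(s) yields the states reachable from s in one step;
    # holds(s) is the property that must be true in every reachable state
    seen = {initial_state}
    frontier = deque([initial_state])
    while frontier:
        state = frontier.popleft()
        if not holds(state):
            return False, state          # counterexample: a reachable bad state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None                    # the property holds in all reachable states

The set of visited states is exactly what must be kept in memory, which is why the state and memory reduction techniques mentioned above matter in practice.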
State space enumeration
[ "Mathematics", "Engineering" ]
107
[ "Software engineering", "Mathematical logic", "Logic in computer science", "Formal methods" ]
21,677,743
https://en.wikipedia.org/wiki/Polytempo
The term polytempo or polytempic is used to describe music in which two or more tempi occur simultaneously. In the Western world, the practice of polytempic music has its roots in the music theory of Henry Cowell and the early practices of Charles Ives. Later, in the 1950s, the composer Elliott Carter began polymetric experiments in his string quartets that amounted to polytempic behavior, with several competing lines moving at different surface speeds. At around the same time, composer Henry Brant expanded on Ives's The Unanswered Question to create a spatial music in which entire ensembles, separated by vast distances, play in distinct simultaneous tempi. Some types of African drumming exhibit this phenomenon. Today's composers employ polytempi as a compositional strategy to create total and complete independence of line in polyphonic music. Composers such as Conlon Nancarrow, David A. Jaffe, Evgeni Kostitsyn, Kyle Gann, Kenneth Jonsson, John Arrigo-Nelson, Brian Ferneyhough, Karlheinz Stockhausen, Frank Zappa, and Peter Thoegersen have used various methods to achieve polytempic effects in their music. Polytempic music also harks back to the rhythmic practices of some Renaissance and medieval composers (see hemiola). Multitemporal music Multitemporal music is composed using sound streams that have different internal tempi or pulse speeds, for example one part at 115 bpm and another at 105 bpm at the same time. Multitemporal music was first heard in the work of the US-Mexican composer Conlon Nancarrow, which was discovered by the Hungarian composer György Ligeti, who undertook the task of bringing Nancarrow's music to the fore. To overcome the limits posed by a human performer in playing a multitemporal score, Nancarrow used two modified player pianos, punching the rolls by hand. One of the few recordings of this composer's work is found in Wergo's "Studies for Player Piano" series. The idea was then proposed by Iannis Xenakis in the early seventies and more recently by the Italian-born composer Valerio Camporini Faggioni, using synthetic and software devices. A similar technique, with tempi close to each other, is rhythm phasing – a technique introduced by Steve Reich and used especially in minimalist and post-minimalist music. See also Polyrhythm and polymeter Tuplet for Zappa's and Ferneyhough's preferred notation for this concept. Rhythm phasing References Rhythm and meter
Polytempo
[ "Physics" ]
537
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
37,333,575
https://en.wikipedia.org/wiki/Bituminous%20limestone
Bituminous limestone is limestone impregnated and sometimes deeply colored with bituminous matter derived from the decomposition of animal and plant remains entombed within the mass or in its vicinity. Uses The amount of bituminous matter or asphalt in the pores of the rock is sometimes sufficient to permit the material to be used for asphalt pavements after simply powdering and heating it. Still better results are obtained by mixing it with bituminous sandstone. Sources In the United States, bituminous limestone has been found in Oklahoma, Texas, and Utah. Much bituminous limestone was also mined in Germany, Switzerland, and France, from where large quantities of it were exported to the United States. Notes References Limestone Bitumen-impregnated rocks
Bituminous limestone
[ "Chemistry" ]
154
[ "Asphalt", "Bitumen-impregnated rocks" ]
37,338,762
https://en.wikipedia.org/wiki/Mathematical%20physiology
Mathematical physiology is an interdisciplinary science. Primarily, it investigates ways in which mathematics may be used to give insight into physiological questions. In turn, it also describes how physiological questions can lead to new mathematical problems. The field may be broadly grouped into two physiological application areas: cell physiology – including mathematical treatments of biochemical reactions, ionic flow and regulation of function – and systems physiology – including electrocardiology, circulation and digestion. References Mathematical and theoretical biology Physiology Systems biology
Mathematical physiology
[ "Mathematics", "Biology" ]
95
[ "Physiology", "Mathematical and theoretical biology", "Applied mathematics", "Applied mathematics stubs", "Systems biology" ]
37,338,879
https://en.wikipedia.org/wiki/Thick-skinned%20deformation
Thick-skinned deformation is a geological term which refers to crustal shortening that involves basement rocks and deep-seated faults, as opposed to only the upper units of cover rocks above the basement, which is known as thin-skinned deformation. While thin-skinned deformation is common in many different localities, thick-skinned deformation requires much more strain to occur and is a rarer type of deformation. Definition Different processes can deform rocks; the deformation is almost always the result of stress. This stress leads to the formation of fault and fold structures, both of which can either extend or shorten the Earth's crust. Thick-skinned deformation specifically affects deep crystalline rock of the basement and may extend deeper into the lower crust. Thin-skinned deformation affects the upper crustal layers and does not deform the deeper basement. Causes Thick-skinned deformation is most commonly a result of crustal shortening and occurs when the region is undergoing horizontal compression. This frequently occurs at the sites of continental collisions where orogenesis, or mountain building, is taking place and during which the crust is shortened horizontally and thickened vertically. The massive compressional forces involved in such a collision cause the basement rock and all of the units above it to deform. Deformation occurs in the form of both folds and thrust faults and may produce a fold and thrust belt along the collisional zone, or it may be accommodated as crustal flow. At convergent plate boundaries two plates move towards each other and one is subducted downwards beneath the other, but when the crusts of two continents meet at a convergent zone neither of them will be subducted, due to their low density. As the two continents are pushed together by tectonic processes, a large amount of stress is put on the rock. Eventually deformation will occur in one or multiple ways in order to relieve the stress. Folds Folding usually occurs in areas with a very slow strain rate or when the rock being deformed is relatively weak and ductile. As folding occurs the units of rock bend, forming anticlines (ridges) and synclines (valleys). While the true thickness of the underlying crust may not be equal to the elevation changes of the resulting mountains and hills, the average crustal thickness is greater than before the deformation occurred. One way in which folding can occur in such a setting is by a small amount of subduction of one plate. One continent may be partially overridden by the other, but since the plate is far too light to sink it will uplift the overriding plate, creating very large folds that deform the entire crust. Faults Thrust faults are another common form of deformation to occur in these areas. Faulting is generally the result of greater strain rates and stronger or more brittle rocks. These faults have a high angle and cause thickening by uplifting the rock onto itself. These types of faults are identified by the vertically repeating stratigraphy that they produce. During a collision, when the strain reaches the breaking point of the rock, a fracture will form. This fracture cuts across layers of rock to form a ramp which allows movement to dissipate the accumulated strain. Under compression the upper hanging wall rises and overrides the lower foot wall. Crustal flow The final type of deformation is crustal flow. This type of deformation can only occur when the crustal material is heated to a very high temperature, approximately 2/3 of its melting temperature.
When this occurs in a collisional zone, the rock can be deformed by creep and will behave similarly to a fluid over long periods of geologic time. Examples Himalayan Mountains - The Himalayas of Tibet are at the location of a continental collision between the Indian and Asian continents and contain many examples of thick-skinned deformation. Andes Mountains - The Andes of South America are at the location of a subduction zone and are formed by thick-skinned deformation. References Sarkarinejad, K., and Goftari F., 2019. Thick-skinned and thin-skinned tectonics of the Zagros orogen, Iran: Constraints from structural, microstructural and kinematics analyses. Journal of Asian Earth Sciences 170, 249-273. External links Thick-Skinned and Thin-Skinned Tectonics: A Global Perspective by O. Adrian Pfiffner (2017) Deformation (mechanics) Plate tectonics
Thick-skinned deformation
[ "Materials_science", "Engineering" ]
875
[ "Deformation (mechanics)", "Materials science" ]
37,339,579
https://en.wikipedia.org/wiki/C20H23N3O2
The molecular formula C20H23N3O2 (molar mass: 337.41 g/mol, exact mass: 337.1790 u) may refer to: LSM-775, or N-Morpholinyllysergamide E-52862, or S1RA
C20H23N3O2
[ "Chemistry" ]
81
[ "Isomerism", "Set index articles on molecular formulas" ]
37,339,934
https://en.wikipedia.org/wiki/Pasteurella%20virus%20F108
Pasteurella virus F108 is a temperate bacteriophage (a virus that infects bacteria) of the family Myoviridae, genus Hpunavirus. Its morphology is complex, with a hexagonal head and a long contractile tail. References Myoviridae
Pasteurella virus F108
[ "Biology" ]
61
[ "Virus stubs", "Viruses" ]
37,340,338
https://en.wikipedia.org/wiki/Mu%20to%20E%20Gamma
The Mu to E Gamma (MEG) is a particle physics experiment dedicated to measuring the decay of the muon into an electron and a photon, a decay mode which is heavily suppressed in the Standard Model by lepton flavour conservation, but enhanced in supersymmetry and grand unified theories. It is located at the Paul Scherrer Institute and began taking data in September 2008. Results In May 2016 the MEG experiment published the world's leading upper limit on the branching ratio of this decay: 4.2 × 10⁻¹³ at 90% confidence level, based on data collected in 2009–2013. This improved on the limit from the prior MEGA experiment by a factor of about 28. Apparatus MEG uses a continuous muon beam (3 × 10⁷/s) incident on a plastic target. The decay is reconstructed to look for a back-to-back positron and monochromatic photon (52.8 MeV). A liquid xenon scintillator with photomultiplier tubes measures the photon energy, and a drift chamber in a magnetic field detects the positrons. The MEG collaboration presented upgrade plans for MEG-II at the Particles and Nuclei International Conference 2014, aiming at one order of magnitude greater sensitivity and increased muon production, with data taking to begin in 2017. More experiments are planned to explore rare muon transitions, such as COMET, Mu2e and Mu3e. References External links MEG-II experiment record on INSPIRE-HEP Particle experiments
Mu to E Gamma
[ "Physics" ]
299
[ "Particle physics stubs", "Particle physics" ]
37,342,010
https://en.wikipedia.org/wiki/Canon%20arithmeticus
The Canon arithmeticus is a set of mathematical tables of indices and powers with respect to primitive roots for prime powers less than 1000, originally published by Carl Gustav Jacob Jacobi in 1839. The tables were at one time used for arithmetical calculations modulo prime powers, though like many mathematical tables, they have now been replaced by digital computers. Jacobi also reproduced Burkhardt's table of the periods of decimal fractions of 1/p and Ostrogradsky's tables of primitive roots of primes less than 200, and gave tables of indices of some odd numbers modulo powers of 2 with respect to the base 3. Although the second edition of 1956 has Jacobi's name on the title page, it has little in common with the first edition apart from the topic: the tables were completely recalculated, usually with a different choice of primitive root, by Wilhelm Patz. Jacobi's original tables use 10 or −10 or a number with a small power of this form as the primitive root whenever possible, while the second edition uses the smallest possible positive primitive root. The term "canon arithmeticus" is occasionally used to mean any table of indices and powers of primitive roots. References See also A. W. Faber Model 366, a discrete slide rule incorporating similar concepts to the Canon arithmeticus Mathematics books History of mathematics Modular arithmetic Mathematical tables
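A small Python sketch of the kind of table the Canon provides: for a prime p and a primitive root g it pairs each nonzero residue with its index (discrete logarithm) relative to g, so that a multiplication modulo p reduces to an addition of indices, much as logarithm tables reduce multiplication to addition. The prime, the root and the function name below are illustrative choices, not Jacobi's.

def index_table(p, g):
    # powers[i] = g**i mod p and indices[a] = i such that g**i = a (mod p)
    powers, indices = {}, {}
    value = 1
    for i in range(p - 1):
        powers[i] = value
        indices[value] = i
        value = value * g % p
    return powers, indices

powers, indices = index_table(11, 2)   # 2 is a primitive root modulo 11
# multiply 7 * 8 mod 11 by adding indices, as one would with the printed tables
i = (indices[7] + indices[8]) % 10     # indices add modulo p - 1
print(powers[i], (7 * 8) % 11)         # both print 1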
Canon arithmeticus
[ "Mathematics" ]
270
[ "Arithmetic", "Mathematical tables", "Modular arithmetic", "Number theory" ]
23,169,183
https://en.wikipedia.org/wiki/Connected%20ring
In mathematics, especially in the field of commutative algebra, a connected ring is a commutative ring A that satisfies one of the following equivalent conditions: A possesses no non-trivial (that is, not equal to 1 or 0) idempotent elements; the spectrum of A with the Zariski topology is a connected space. Examples and non-examples Connectedness defines a fairly general class of commutative rings. For example, all local rings and all (meet-)irreducible rings are connected. In particular, all integral domains are connected. Non-examples are given by product rings such as Z × Z; here the element (1, 0) is a non-trivial idempotent. Generalizations In algebraic geometry, connectedness is generalized to the concept of a connected scheme. References Commutative algebra Ring theory
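A small computational illustration of the idempotent criterion, using Python as an assumed convenience: Z/4Z, a local ring, has only the trivial idempotents 0 and 1 and is therefore connected, while Z/6Z, which is isomorphic to the product Z/2Z × Z/3Z, has the non-trivial idempotents 3 and 4 and is not.

def idempotents(n):
    # elements e of Z/nZ satisfying e*e = e
    return [e for e in range(n) if (e * e) % n == e]

print(idempotents(4))   # [0, 1]       -> Z/4Z is connected
print(idempotents(6))   # [0, 1, 3, 4] -> Z/6Z is not connected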
Connected ring
[ "Mathematics" ]
177
[ "Fields of abstract algebra", "Commutative algebra", "Ring theory" ]
23,169,189
https://en.wikipedia.org/wiki/Irreducible%20ring
In mathematics, especially in the field of ring theory, the term irreducible ring is used in a few different ways. A (meet-)irreducible ring is a ring in which the intersection of two non-zero ideals is always non-zero. A directly irreducible ring is a ring which cannot be written as the direct sum of two non-zero rings. A subdirectly irreducible ring is a ring with a unique, non-zero minimum two-sided ideal. A ring with an irreducible spectrum is a ring whose spectrum is irreducible as a topological space. "Meet-irreducible" rings are referred to as "irreducible rings" in commutative algebra. This article adopts the term "meet-irreducible" in order to distinguish between the several types being discussed. Meet-irreducible rings play an important part in commutative algebra, and directly irreducible and subdirectly irreducible rings play a role in the general theory of structure for rings. Subdirectly irreducible algebras have also found use in number theory. This article follows the convention that rings have multiplicative identity, but are not necessarily commutative. Definitions The terms "meet-reducible", "directly reducible" and "subdirectly reducible" are used when a ring is not meet-irreducible, or not directly irreducible, or not subdirectly irreducible, respectively. The following conditions are equivalent for a commutative ring R: R is meet-irreducible; the zero ideal in R is irreducible, i.e. the intersection of two non-zero ideals of R is always non-zero. The following conditions are equivalent for a ring R: R is directly irreducible; R has no central idempotents except for 0 and 1. The following conditions are equivalent for a ring R: R is subdirectly irreducible; whenever R is written as a subdirect product of rings, one of the projections of R onto a ring in the subdirect product is an isomorphism; the intersection of all non-zero ideals of R is non-zero. The following conditions are equivalent for a commutative ring R: the spectrum of R is irreducible; R possesses exactly one minimal prime ideal (this prime ideal may be the zero ideal). Examples and properties If R is subdirectly irreducible or meet-irreducible, then it is also directly irreducible, but the converses are not true. All integral domains are meet-irreducible, but not all integral domains are subdirectly irreducible (e.g. Z). In fact, a commutative ring is a domain if and only if it is both meet-irreducible and reduced. A commutative ring is a domain if and only if its spectrum is irreducible and it is reduced. The quotient ring Z/4Z is a ring which has all three senses of irreducibility, but it is not a domain. Its only non-trivial proper ideal is 2Z/4Z, which is maximal, hence prime. This ideal is also minimal. The direct product of two non-zero rings is never directly irreducible, and hence is never meet-irreducible or subdirectly irreducible. For example, in Z × Z the intersection of the non-zero ideals {0} × Z and Z × {0} is equal to the zero ideal {0} × {0}. Commutative directly irreducible rings are connected rings; that is, their only idempotent elements are 0 and 1. Generalizations Commutative meet-irreducible rings play an elementary role in algebraic geometry, where this concept is generalized to the concept of an irreducible scheme. Notes Commutative algebra Ring theory
Irreducible ring
[ "Mathematics" ]
825
[ "Fields of abstract algebra", "Commutative algebra", "Ring theory" ]
23,172,405
https://en.wikipedia.org/wiki/HD%2073267
HD 73267 is a star in the southern constellation Pyxis, near the western constellation border with Puppis. It has an apparent visual magnitude of 8.889 and can be viewed with a small telescope. The distance to HD 73267 is 164 light years based on parallax, and it is drifting further away with a radial velocity of +51.8 km/s. It has an absolute magnitude of 5.24. This object is a G-type main-sequence star with a stellar classification of G5V. It is roughly eight billion years old with a near-solar metallicity and is spinning with a projected rotational velocity of 1.65 km/s, giving it a rotation period of around 33 days. The star has 90% of the mass and size of the Sun. It is radiating 78% of the Sun's luminosity from its photosphere at an effective temperature of 5387 K. Planetary system In October 2008, a candidate planet was discovered orbiting this star. This object was detected using the radial velocity method by search programs conducted using the HARPS spectrograph. Subsequent analysis of collected data suggests the presence of an additional long-period planet in the system with at least 83% of the mass of Jupiter. In 2022, the inclination and true mass of HD 73267 b were measured, and the presence of a second planet was confirmed using a combination of radial velocity and astrometry. See also List of extrasolar planets References External links G-type main-sequence stars Planetary systems with one confirmed planet Pyxis Durchmusterung objects 073267 042202
HD 73267
[ "Astronomy" ]
336
[ "Pyxis", "Constellations" ]
23,173,874
https://en.wikipedia.org/wiki/Vienna%20rectifier
The Vienna Rectifier is a pulse-width modulation rectifier, invented in 1993 by Johann W. Kolar at TU Wien, a public research university in Vienna, Austria. Features The Vienna Rectifier provides the following features: Three-phase three-level three-switch PWM rectifier with controlled output voltage Three-wire input, no connection to neutral Ohmic mains behaviour Boost system (continuous input current) Unidirectional power flow High power density Low conducted common-mode electro-magnetic interference (EMI) emissions Simple control to stabilize the neutral point potential Low complexity, low realization effort Reliable behaviour (guaranteeing ohmic mains behaviour) under heavily unbalanced mains voltages and in case of mains failure Topology The Vienna Rectifier is a unidirectional three-phase three-switch three-level pulse-width modulation (PWM) rectifier. It can be seen as a three-phase diode bridge with an integrated boost converter. Applications The Vienna Rectifier is useful wherever six-switch converters are used for achieving sinusoidal mains current and controlled output voltage, when no energy feedback from the load into the mains is required. In practice, use of the Vienna Rectifier is advantageous when space is at a sufficient premium to justify the additional hardware cost. These include: Telecommunications power supplies. Uninterruptible power supplies. Input stages of AC-drive converter systems. Figure 2 shows the top and bottom views of an air-cooled 10 kW Vienna Rectifier (400 kHz PWM), with sinusoidal input currents and controlled output voltage. Dimensions are 250 mm x 120 mm x 40 mm, resulting in a power density of 8.5 kW/dm3. The total weight of the converter is 2.1 kg. Current and voltage waveforms Figure 3 shows the system behaviour, calculated using a power-electronics circuit simulator. Between the output voltage midpoint (0) and the mains midpoint (M) the common-mode voltage u0M appears, as is characteristic of three-phase converter systems. Current control and balance of the neutral point at the DC-side It is possible to separately control the input current shape in each branch of the diode bridge by inserting a bidirectional switch into the node, as shown in Figure 3. The switch Ta controls the current by controlling the magnetization of the inductor. When the bidirectional switch is turned on, the input voltage is applied across the inductor and the current in the inductor rises linearly. Turning off the switch causes the voltage across the inductor to reverse and the current to flow through the freewheeling diodes Da+ and Da-, decreasing linearly. By controlling the switch on-time, the topology is able to control the current in phase with the mains voltage, presenting a resistive load behaviour (power-factor correction capability). To generate a sinusoidal input current in phase with the mains voltage, the average rectifier input voltage space vector over a pulse period must equal the mains voltage space vector minus the voltage drop across the input inductors; for high switching frequencies or low inductance values this is approximately the mains voltage space vector itself. The voltage space vectors available for forming the input voltage are defined by the switching states and the direction of the phase currents. For example, for a given combination of phase-current signs, corresponding to a 60° sector of the mains period, only a particular set of input voltage space vectors can be formed. Fig. 4 shows the conduction states of the system, and from this we get the input space vectors shown in Fig. 5. See also Warsaw rectifier References Electronic circuits Electric power conversion Power electronics Rectifiers 20th-century inventions
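A minimal numerical sketch, in Python with round illustrative values that are not taken from the article, of the inductor-current behaviour just described over one switching period: the current rises while the bidirectional switch conducts and falls while the freewheeling diodes clamp the phase toward half of the DC link.

L = 500e-6          # boost inductance in henries (assumed value)
v_in = 300.0        # instantaneous phase voltage in volts (assumed value)
v_dc_half = 400.0   # half of the DC-link voltage in volts (assumed value)
T = 1.0 / 400e3     # switching period for the 400 kHz PWM mentioned above
d = 0.4             # duty cycle commanded by the current controller (assumed)

i = 10.0                                      # current at the start of the period, A
i += v_in / L * (d * T)                       # switch on: current rises linearly
i += (v_in - v_dc_half) / L * ((1 - d) * T)   # switch off: current falls linearly
print(i)                                      # net change over the period, set by d

Averaged over many such periods, the controller adjusts d so that the phase current tracks a sinusoidal reference in phase with the mains voltage.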
Vienna rectifier
[ "Engineering" ]
770
[ "Electronic engineering", "Electronic circuits", "Power electronics" ]
23,174,224
https://en.wikipedia.org/wiki/Kernighan%E2%80%93Lin%20algorithm
The Kernighan–Lin algorithm is a heuristic algorithm for finding partitions of graphs. The algorithm has important practical application in the layout of digital circuits and components in electronic design automation of VLSI. Description The input to the algorithm is an undirected graph G = (V, E) with vertex set V, edge set E, and (optionally) numerical weights on the edges in E. The goal of the algorithm is to partition V into two disjoint subsets A and B of equal (or nearly equal) size, in a way that minimizes the sum T of the weights of the subset of edges that cross from A to B. If the graph is unweighted, then instead the goal is to minimize the number of crossing edges; this is equivalent to assigning weight one to each edge. The algorithm maintains and improves a partition, in each pass using a greedy algorithm to pair up vertices of A with vertices of B, so that moving the paired vertices from one side of the partition to the other will improve the partition. After matching the vertices, it then performs a subset of the pairs chosen to have the best overall effect on the solution quality T. Given a graph with n vertices, each pass of the algorithm runs in time O(n² log n). In more detail, for each a in A, let I_a be the internal cost of a, that is, the sum of the costs of edges between a and other nodes in A, and let E_a be the external cost of a, that is, the sum of the costs of edges between a and nodes in B. Similarly, define I_b and E_b for each b in B. Furthermore, let D_s = E_s − I_s be the difference between the external and internal costs of s. If a and b are interchanged, then the reduction in cost is T_old − T_new = D_a + D_b − 2c(a, b), where c(a, b) is the cost of the possible edge between a and b. The algorithm attempts to find an optimal series of interchange operations between elements of A and B which maximizes the cumulative gain, and then executes the operations, producing a partition of the graph into A and B. Pseudocode Source: function Kernighan-Lin(G(V, E)) is determine a balanced initial partition of the nodes into sets A and B do compute D values for all a in A and b in B let gv, av, and bv be empty lists for n := 1 to |V| / 2 do find a from A and b from B, such that g = D[a] + D[b] − 2×c(a, b) is maximal remove a and b from further consideration in this pass add g to gv, a to av, and b to bv update D values for the elements of A = A \ a and B = B \ b end for find k which maximizes g_max, the sum of gv[1], ..., gv[k] if g_max > 0 then Exchange av[1], av[2], ..., av[k] with bv[1], bv[2], ..., bv[k] until (g_max ≤ 0) return G(V, E) See also Fiduccia–Mattheyses algorithm References Combinatorial optimization Combinatorial algorithms Heuristic algorithms
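A compact Python rendering of the pseudocode above, intended as an illustrative sketch rather than a reference implementation; the representation of the graph as a dictionary of edge weights and the helper names are our own choices.

def edge_cost(u, v, weights):
    # weight of the edge {u, v}, or 0 if the vertices are not adjacent
    return weights.get(frozenset((u, v)), 0)

def kernighan_lin(A, B, weights, max_passes=10):
    A, B = set(A), set(B)
    for _ in range(max_passes):
        # D[v] = external cost minus internal cost of v under the current partition
        D = {v: sum(edge_cost(v, u, weights) for u in B)
               - sum(edge_cost(v, u, weights) for u in A) for v in A}
        D.update({v: sum(edge_cost(v, u, weights) for u in A)
                    - sum(edge_cost(v, u, weights) for u in B) for v in B})
        free_A, free_B, gains, pairs = set(A), set(B), [], []
        for _ in range(min(len(A), len(B))):
            # greedy step: pick the unlocked pair (a, b) with maximal gain g
            a, b = max(((x, y) for x in free_A for y in free_B),
                       key=lambda p: D[p[0]] + D[p[1]] - 2 * edge_cost(p[0], p[1], weights))
            gains.append(D[a] + D[b] - 2 * edge_cost(a, b, weights))
            pairs.append((a, b))
            free_A.remove(a)
            free_B.remove(b)
            # update D values as if a and b had already been exchanged
            for x in free_A:
                D[x] += 2 * edge_cost(x, a, weights) - 2 * edge_cost(x, b, weights)
            for y in free_B:
                D[y] += 2 * edge_cost(y, b, weights) - 2 * edge_cost(y, a, weights)
        # keep the prefix of exchanges whose cumulative gain g_max is largest
        best_k, g_max, running = 0, 0, 0
        for k, g in enumerate(gains, 1):
            running += g
            if running > g_max:
                best_k, g_max = k, running
        if g_max <= 0:
            break
        for a, b in pairs[:best_k]:
            A.remove(a); B.add(a)
            B.remove(b); A.add(b)
    return A, B

# example: a weighted 4-cycle whose heavy edge a-b should end up inside one part
w = {frozenset(p): 1 for p in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]}
w[frozenset(("a", "b"))] = 5
print(kernighan_lin({"a", "c"}, {"b", "d"}, w))   # e.g. ({'c', 'd'}, {'a', 'b'})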
Kernighan–Lin algorithm
[ "Mathematics" ]
645
[ "Combinatorial algorithms", "Computational mathematics", "Combinatorics" ]
23,175,578
https://en.wikipedia.org/wiki/Brevianamide
Brevianamides are indole alkaloids that belong to a class of naturally occurring 2,5-diketopiperazines produced as secondary metabolites of fungi in the genera Penicillium and Aspergillus. Structurally similar to the paraherquamides, they are a small class of compounds that contain a bicyclo[2.2.2]diazaoctane ring system. Among the major secondary metabolites in Penicillium spores, they are responsible for an inflammatory response in lung cells. History Originally isolated from Penicillium compactum in 1969, brevianamide A has shown insecticidal activity. Further studies showed that a minor secondary metabolite, brevianamide B, has an epimeric center at the spiro-indoxyl quaternary center. Both were found to fluoresce under long-wave ultraviolet radiation. Furthermore, under irradiation, brevianamide A has been shown to isomerize to brevianamide B. Biosynthesis While the biosynthesis has not been conclusively elucidated, brevianamides A and B are constructed from tryptophan, proline, and an isoprene unit. Total synthesis The total syntheses of several brevianamides have been reported, including brevianamide B and brevianamide E. Biological activity Tests for antibiotic effectiveness against E. coli, A. fecalis, B. subtilis, S. aureus, and P. aeruginosa were negative. Also, no inhibitory action was shown against A. niger, A. flavus, P. crustosum, F. graminearum, F. moniliforme, Alternaria sp., and Cladosporium sp. However, some insecticidal activity has been shown in one study, suggesting possible use as an insecticide for food crops. In mammalian (mouse lung cell) studies, brevianamide A has been shown to induce cytotoxicity. Furthermore, ELISA assays showed elevated levels of tumor necrosis factor-alpha (TNF-α), macrophage inflammatory protein-2 (MIP-2), and interleukin 6 (IL-6). Therefore, brevianamide A may not be a suitable insecticide in food crops. See also Brevianamide F References Indole alkaloids Spiro compounds Diketopiperazines Total synthesis Ketones
Brevianamide
[ "Chemistry" ]
525
[ "Indole alkaloids", "Ketones", "Functional groups", "Organic compounds", "Alkaloids by chemical classification", "Chemical synthesis", "Total synthesis", "Spiro compounds" ]
29,264,509
https://en.wikipedia.org/wiki/Green%20rust
Green rust is a generic name for various green crystalline chemical compounds containing iron(II) and iron(III) cations, the hydroxide anion (OH−), and another anion such as carbonate (CO32−), chloride (Cl−), or sulfate (SO42−), in a layered double hydroxide (LDH) structure. The most studied varieties are the following: carbonate green rust – GR(CO32−): [Fe(II)4Fe(III)2(OH)12]2+ · [CO3·2H2O]2−; chloride green rust – GR(Cl−): [Fe(II)3Fe(III)(OH)8]+ · [Cl·nH2O]−; sulfate green rust – GR(SO42−): [Fe(II)4Fe(III)2(OH)12]2+ · [SO4·2H2O]2−. Other varieties reported in the literature are bromide (Br−), fluoride (F−), iodide (I−), nitrate (NO3−), and selenate (SeO42−). Green rust was first recognized as a corrosion crust on iron and steel surfaces. It occurs in nature as the mineral fougerite. Structure The crystal structure of green rust can be understood as the result of inserting the foreign anions and water molecules between brucite-like layers of iron(II) hydroxide, Fe(OH)2. The latter has a hexagonal crystal structure, with layer sequence AcBAcB..., where A and B are planes of hydroxide ions, and c those of Fe2+ (iron(II), ferrous) cations. In green rust, some Fe2+ cations get oxidized to Fe3+ (iron(III), ferric). Each triple layer AcB, which is electrically neutral in the hydroxide, becomes positively charged. The anions then intercalate between those triple layers and restore the electroneutrality. There are two basic structures of green rust, "type 1" and "type 2". Type 1 is exemplified by the chloride and carbonate varieties. It has a rhombohedral crystal structure similar to that of pyroaurite. The layers are stacked in the sequence AcBiBaCjCbAkA...; where A, B, and C represent planes of hydroxide ions, a, b, and c are layers of mixed Fe2+ and Fe3+ cations, and i, j, and k are layers of the intercalated anions and water molecules. The c crystallographic parameter is 22.5–22.8 Å for the carbonate, and about 24 Å for the chloride. Type 2 green rust is exemplified by the sulfate variety. It has a hexagonal crystal structure like minerals of the sjögrenite group, with layers probably stacked in the sequence AcBiAbCjA... Chemical properties In an oxidizing environment, green rust generally turns into iron(III) oxyhydroxides, namely α-FeOOH (goethite) and γ-FeOOH (lepidocrocite). Oxidation of the carbonate variety can be retarded by wetting the material with hydroxyl-containing organic compounds such as glycerol or glucose, even though they do not penetrate the structure. Some varieties of green rust are also stabilized by an atmosphere with a high partial pressure. Sulfate green rust has been shown to reduce nitrate and nitrite in solution to ammonium (NH4+), with concurrent oxidation of Fe2+ to Fe3+. Depending on the cations in the solution, the nitrate anions replaced the sulfate in the intercalation layer before the reduction. It was conjectured that green rust may be formed in the reducing alkaline conditions below the surface of marine sediments and may be connected to the disappearance of oxidized species like nitrate in that environment. Suspensions of carbonate green rust and orange γ-FeOOH in water react over a few days, producing a black precipitate of magnetite (Fe3O4). Occurrence Iron and steel corrosion Green rust compounds were identified in green corrosion crusts that form on iron and steel surfaces, in alternating aerobic and anaerobic conditions, by water containing anions such as chloride, sulfate, carbonate, or bicarbonate. They are considered to be intermediates in the oxidative corrosion of iron to form iron(III) oxyhydroxides (ordinary brown rust). Green rust may be formed either directly from metallic iron or from iron(II) hydroxide, Fe(OH)2.
Reducing conditions in soils On the basis of Mössbauer spectroscopy, green rust is suspected to occur as a mineral in certain bluish-green soils that are formed in alternating redox conditions and turn ochre once exposed to air. Green rust has been conjectured to be present in the form of the mineral fougerite. Biologically mediated formation Hexagonal crystals of green rust (carbonate and/or sulfate) have also been obtained as byproducts of bioreduction of ferric oxyhydroxides by dissimilatory iron-reducing bacteria, such as Shewanella putrefaciens, that couple the reduction of Fe3+ with the oxidation of organic matter. This process has been conjectured to occur in soil solutions and aquifers. In one experiment, a 160 mM suspension of orange lepidocrocite (γ-FeOOH) in a solution containing formate (HCOO−), incubated for 3 days with a culture of Shewanella putrefaciens, turned dark green due to the conversion of the oxyhydroxide to carbonate green rust, in the form of hexagonal platelets with diameter ~7 μm. In this process, the formate was oxidized to bicarbonate (HCO3−), which provided the carbonate anions for the formation of green rust. The active bacteria were necessary for the formation of green rust. Laboratory preparation Air oxidation methods Green rust compounds can be synthesized at ambient temperature and pressure from solutions containing iron(II) cations, hydroxide anions, and the appropriate intercalatory anions, such as chloride, sulfate, or carbonate. The result is a suspension of ferrous hydroxide, Fe(OH)2, in a solution of the third anion. This suspension is oxidized by stirring under air, or by bubbling air through it. Since the product is very prone to oxidation, it is necessary to monitor the process and exclude oxygen once the desired ratio of Fe2+ and Fe3+ is achieved. One method first combines an iron(II) salt with sodium hydroxide (NaOH) to form the ferrous hydroxide suspension. Then the sodium salt of the third anion is added, and the suspension is oxidized by stirring under air. For example, carbonate green rust can be prepared by mixing solutions of iron(II) sulfate and sodium hydroxide, then adding a sufficient amount of sodium carbonate solution, followed by the air oxidation step. Sulfate green rust can be obtained by mixing solutions of an iron(II) salt and NaOH to precipitate the hydroxide, then immediately adding sodium sulfate and proceeding to the air oxidation step. A more direct method combines a solution of iron(II) sulfate with NaOH and proceeds to the oxidizing step. The suspension must have a slight excess of one of the reagents (in the ratio of 0.5833) for green rust to form; however, too much of it will produce instead an insoluble basic iron sulfate, iron(II) sulfate hydroxide. The production of green rust is lower as temperature increases. Stoichiometric Fe(II)/Fe(III) methods An alternate preparation of carbonate green rust first produces a suspension of iron(III) hydroxide in an iron(II) chloride solution, and bubbles carbon dioxide through it. In a more recent variant, solutions of both iron(II) and iron(III) salts are first mixed, then a solution of NaOH is added, all in the stoichiometric proportions of the desired green rust. No oxidation step is then necessary. Electrochemistry Carbonate green rust films have also been obtained from the electrochemical oxidation of iron plates. References Corrosion Hydroxide minerals Iron compounds Redox
Green rust
[ "Chemistry", "Materials_science" ]
1,584
[ "Redox", "Metallurgy", "Corrosion", "Electrochemistry", "nan", "Materials degradation" ]