id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
23,291,159 | https://en.wikipedia.org/wiki/Lipid-based%20nanoparticle | Lipid-based nanoparticles are very small spherical particles composed of lipids. They are a novel pharmaceutical drug delivery system (part of nanoparticle drug delivery), and a novel pharmaceutical formulation. There are many subclasses of lipid-based nanoparticles such as: lipid nanoparticles (LNPs), solid lipid nanoparticles (SLNs), and nanostructured lipid carriers (NLCs).
Sometimes the term "LNP" describes all lipid-based nanoparticles. In specific applications, LNPs describe a specific type of lipid-based nanoparticle, such as the LNPs used for the mRNA vaccine.
Using LNPs for drug delivery was first approved in 2018 for the siRNA drug Onpattro. LNPs became more widely known late in 2020, as some COVID-19 vaccines that use RNA vaccine technology coat the fragile mRNA strands with PEGylated lipid nanoparticles as their delivery vehicle (including both the Moderna and the Pfizer–BioNTech COVID-19 vaccines).
Characteristics
A lipid nanoparticle is typically spherical with an average diameter between 10 and 1000 nanometers. LNPs are made up of phospholipids, cholesterol, ionizable lipids, and polyethylene glycol-derived lipids (PEGylated lipids). Each of these components plays a key role in LNPs used for mRNA vaccines that target SARS-CoV-2 (the virus that causes COVID-19): the ionizable cationic lipids bind to mRNA, the PEGylated lipids stabilize the LNPs, and the phospholipids and cholesterol give LNPs their structure. Because positively charged lipids are rapidly cleared by the immune system, neutral ionizable amino lipids were developed. A novel squaramide lipid (a partially aromatic four-membered ring that can participate in pi–pi interactions) has been used as part of the delivery system employed, for example, by Moderna.
Solid lipid nanoparticles (SLNs) possess a solid lipid core matrix that solubilizes lipophilic molecules. Surfactants (emulsifiers) stabilize the lipid core. The choice of emulsifier depends on the administration route and is more limited for parenteral administration. Here "lipid" refers to a broader class of molecules, and includes triglycerides (e.g. tristearin), diglycerides (e.g. glycerol behenate), monoglycerides (e.g. glycerol monostearate), fatty acids (e.g. stearic acid), steroids (e.g. cholesterol), and waxes (e.g. cetyl palmitate). All classes of emulsifiers (with respect to charge and molecular weight) have been used to stabilize the lipid dispersion, and combinations of emulsifiers have been found to prevent particle agglomeration more efficiently.
An SLN is generally spherical and consists of a solid lipid core stabilized by a surfactant. The core lipids can be fatty acids, acylglycerols, waxes, or mixtures of these. Biological membrane lipids, such as phospholipids, sphingomyelins, bile salts (e.g. sodium taurocholate), and sterols (e.g. cholesterol), are used as stabilizers. Biological lipids show minimal carrier cytotoxicity, and the solid state of the lipid permits better-controlled drug release owing to increased mass-transfer resistance.
Nanostructured lipid carriers (NLCs) are lipid-based nanoparticles that contain a mixture of solid and liquid lipids in the central core of the lipid carrier. NLCs are derived from SLNs by injecting liquid lipids into the solid core, resulting in a non-uniform internal core. This modification allows for higher drug capacity and more controlled drug delivery.
Synthesis
Formulation procedures include high-shear homogenization and ultrasound, solvent emulsification/evaporation, and microemulsion methods. Size distributions in the range of 30–180 nm can be obtained by ultrasonication, at the cost of long sonication times. Solvent emulsification is suitable for preparing small, homogeneously sized lipid nanoparticle dispersions, with the advantage of avoiding heat.
The obtained LNP formulation can be filled into sterile containers and subjected to final quality control. However, various measures to monitor and evaluate product quality are integrated in every step of LNP manufacturing and include testing of polydispersity, particle size, drug loading efficiency and endotoxin levels.
Applications
The development of solid lipid nanoparticles is one of the emerging fields of lipid nanotechnology, with several potential applications in drug delivery, clinical medicine and research, as well as in other disciplines. Because of their unique size-dependent properties, lipid nanoparticles offer the possibility of developing new therapeutics. The ability to incorporate drugs into nanocarriers offers a new approach to drug delivery that could hold great promise for enhancing bioavailability along with controlled and site-specific drug delivery. SLNs are also generally considered to be well tolerated, owing to their composition from physiologically similar lipids.
Conventional approaches such as the use of permeation enhancers, surface modification, prodrug synthesis, complex formation, and colloidal lipid carrier-based strategies have been developed for the delivery of drugs to the intestinal lymphatics. In addition, polymeric nanoparticles, self-emulsifying delivery systems, liposomes, microemulsions, micellar solutions and, more recently, solid lipid nanoparticles (SLNs) have been explored as possible carriers for oral intestinal lymphatic delivery.
Drug delivery
Solid lipid nanoparticles can function as the basis for oral and parenteral drug delivery systems. SLNs combine the advantages of lipid emulsion and polymeric nanoparticle systems while overcoming the temporal and in vivo stability issues that trouble conventional as well as polymeric nanoparticle drug delivery approaches. SLNs have been proposed to offer many advantages over other colloidal carriers: feasible incorporation of both lipophilic and hydrophilic drugs, no biotoxicity of the carrier, avoidance of organic solvents, the possibility of controlled drug release and drug targeting, increased drug stability, and no problems with respect to large-scale production. Various functions, such as molecules for targeting, PEG chains for stealth properties, or thiol groups for adhesion via disulfide bond formation, can be immobilized on their surface. A recent study demonstrated the use of solid lipid nanoparticles as a platform for oral delivery of the nutrient mineral iron, by incorporating the hydrophilic molecule ferrous sulfate (FeSO4) in a lipid matrix composed of stearic acid. Carvedilol-loaded solid lipid nanoparticles were prepared by the hot-homogenization technique for oral delivery, with compritol and poloxamer 188 as the lipid and surfactant, respectively. Another example of drug delivery using SLNs is oral solid SLNs suspended in distilled water, synthesized to trap drugs within the SLN structure; upon ingestion, the SLNs are exposed to gastric and intestinal acids that dissolve them and release the drugs into the system.
Many nano-structured systems have been employed for ocular drug delivery. SLNs have been looked at as a potential drug carrier system since the 1990s. SLNs do not show biotoxicity as they are prepared from physiological lipids. SLNs are useful in ocular drug delivery as they can enhance the corneal absorption of drugs and improve the ocular bioavailability of both hydrophilic and lipophilic drugs. SLNs have another advantage of allowing autoclave sterilization, a necessary step towards formulation of ocular preparations.
Advantages of SLNs include the use of physiological lipids (which decreases the risk of acute and chronic toxicity), the avoidance of organic solvents, a potentially wide application spectrum (dermal, per os, intravenous), and high-pressure homogenization as an established production method. Additionally, improved bioavailability, protection of sensitive drug molecules from the outer environment (e.g. water, light), and even controlled-release characteristics have been claimed for poorly water-soluble drugs incorporated into the solid lipid matrix. Moreover, SLNs can carry both lipophilic and hydrophilic drugs, and are more affordable than polymeric/surfactant-based carriers.
Nucleic acids
A significant obstacle to using LNPs as a delivery vehicle for nucleic acids is that in nature, lipids and nucleic acids both carry a negative electric charge—meaning they do not easily mix with each other. While working at Syntex in the mid-1980s, Philip Felgner pioneered the use of artificially-created cationic lipids (positively-charged lipids) to bind lipids to nucleic acids in order to transfect the latter into cells. However, by the late 1990s, it was known from in vitro experiments that this use of cationic lipids had undesired side effects on cell membranes.
During the late 1990s and 2000s, Pieter Cullis, while at the University of British Columbia, developed ionizable cationic lipids which are "positively charged at an acidic pH but neutral in the blood." Cullis also led the development of a technique involving careful adjustments to pH during the process of mixing ingredients in order to create LNPs which could safely pass through the cell membranes of living organisms. As of 2021, the current understanding of LNPs formulated with such ionizable cationic lipids is that they enter cells through receptor-mediated endocytosis and end up inside endosomes. The acidity inside the endosomes causes LNPs' ionizable cationic lipids to acquire a positive charge, and this is thought to allow LNPs to escape from endosomes and release their RNA payloads.
From 2005 into the early 2010s, LNPs were investigated as a drug delivery system for small interfering RNA (siRNA) drugs. In 2009, Cullis co-founded a company called Acuitas Therapeutics to commercialize his LNP research; Acuitas worked on developing LNPs for Alnylam Pharmaceuticals's siRNA drugs. In 2018, the FDA approved Alnylam's siRNA drug Onpattro (patisiran), the first drug to use LNPs as the drug delivery system.
By that point in time, siRNA drug developers like Alnylam were already looking at other options for future drugs like chemical conjugate systems, but during the 2010s, the earlier research into using LNPs for siRNA became a foundation for new research into using LNPs for mRNA. Lipids intended for short siRNA strands did not work well for much longer mRNA strands, which led to extensive research during the mid-2010s into the creation of novel ionizable cationic lipids appropriate for mRNA. As of late 2020, several mRNA vaccines for SARS-CoV-2 use LNPs as their drug delivery system, including both the Moderna COVID-19 vaccine and the Pfizer–BioNTech COVID-19 vaccines. Moderna uses its own proprietary ionizable cationic lipid called SM-102, while Pfizer and BioNTech licensed an ionizable cationic lipid called ALC-0315 from Acuitas.
Lymphatic absorption mechanism
The intestinal lymphatic absorption mechanism of solid lipid nanoparticles has been elucidated using the Caco-2 cell line as an in vitro model. Several researchers have shown enhanced oral bioavailability of poorly water-soluble drugs when encapsulated in solid lipid nanoparticles; this enhanced bioavailability is achieved via lymphatic delivery. To elucidate the absorption mechanism from solid lipid nanoparticles, the Caco-2 cell monolayer can serve as an alternative to excised human tissue for developing an in vitro model used as a screening tool before animal studies are undertaken. Results obtained with this model suggested that the main absorption mechanism of carvedilol-loaded solid lipid nanoparticles could be endocytosis and, more specifically, clathrin-mediated endocytosis.
See also
Nanomedicine, the general field
Micelle, lipid cored
Liposome, lipid bilayer shell, an earlier form with some limitations
Lipoplex, a complex of plasmid or linear DNA and lipids
Targeted drug delivery
mRNA-1273, from Moderna, uses LNPs
BNT162b2, from BioNTech/Pfizer, uses LNPs
References
Further reading
External links
Nanoparticles by composition
Lipids | Lipid-based nanoparticle | [
"Chemistry"
] | 2,762 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Lipids"
] |
23,292,566 | https://en.wikipedia.org/wiki/Dess%E2%80%93Martin%20oxidation | The Dess–Martin oxidation is an organic reaction for the oxidation of primary alcohols to aldehydes and secondary alcohols to ketones using Dess–Martin periodinane.
It is named after the American chemists Daniel Benjamin Dess and James Cullen Martin who developed the periodinane reagent in 1983.
The reaction uses a hypervalent iodine reagent similar to 2-iodoxybenzoic acid to selectively and mildly oxidize alcohols to aldehydes or ketones. The reaction is commonly conducted in chlorinated solvents such as dichloromethane or chloroform. It can be done at room temperature and reaches completion quickly. Many other functional groups are not affected by this reaction.
The Dess–Martin oxidation may be preferable to other oxidation reactions because it is very mild, avoids the use of toxic chromium reagents, does not require a large excess of reagent or co-oxidants, and is easy to work up.
The reaction produces two equivalents of acetic acid. It can be buffered with pyridine or sodium bicarbonate in order to protect acid-labile compounds.
The rate of oxidation can be increased by the addition of water to the reaction mixture.
References
Organic oxidation reactions
Name reactions | Dess–Martin oxidation | [
"Chemistry"
] | 271 | [
"Name reactions",
"Organic oxidation reactions",
"Organic redox reactions",
"Organic reactions"
] |
23,293,621 | https://en.wikipedia.org/wiki/Glymidine%20sodium | Glymidine sodium (INN, also known as glycodiazine; trade name Gondafon) is a sulfonamide antidiabetic drug, structurally related to the sulfonylureas. It was first reported in 1964, and introduced to clinical use in Europe in the mid to late 1960s.
References
Sulfonamides
Pyrimidines
Ethers
Phenol ethers
Abandoned drugs | Glymidine sodium | [
"Chemistry"
] | 89 | [
"Functional groups",
"Drug safety",
"Organic compounds",
"Ethers",
"Abandoned drugs"
] |
23,294,123 | https://en.wikipedia.org/wiki/Creatinolfosfate | Creatinolfosfate (creatinol-O-phosphate, creatinol phosphate, COP) is a cardiac preparation, not to be confused with phosphocreatine.
Guanidines
Organophosphates | Creatinolfosfate | [
"Chemistry"
] | 53 | [
"Guanidines",
"Functional groups"
] |
44,621,719 | https://en.wikipedia.org/wiki/Bismuth%E2%80%93indium | The elements bismuth and indium have relatively low melting points when compared to other metals, and their alloy bismuth–indium (Bi–In) is classified as a fusible alloy. It has a melting point lower than the eutectic point of the tin–lead alloy. The most common application of the Bi-In alloy is as a low temperature solder, which can also contain, besides bismuth and indium, lead, cadmium, and tin.
Metals
Bismuth
Bismuth has many unique characteristics. When solidifying, its volume expands by roughly 2.32%. Its electrical resistance is twice as high in its solid state as in its liquid form. Bismuth has one of the lowest thermal conductivities of the pure elemental metals. It is brittle and highly diamagnetic, with a magnetic susceptibility of −1.68×10⁻⁵ (mks). Bismuth is used as a catalyst in the production of plastics and cosmetics, as an additive in steel alloys, and in electronics. It has a rhombohedral (Biα) structure, an atomic radius of 1.54 Å, an electronegativity of 1.83, and valences of +3 and +5.
Indium
Indium is a metal softer than lead (hardness of 0.9 HB), soft enough to be scratched with a fingernail. It is also malleable and ductile, and has a thermal conductivity of about 0.78 W/(cm·°C) at 85 °C. It can wet glass, quartz and other ceramic materials. It retains its plasticity and ductility in cryogenic environments and has a large gap between its melting point and boiling point (156.6 °C and 2080 °C, respectively). Under compression it shows high plasticity that allows almost unlimited deformation (compressive strength of 2.14 MPa), while under tension it shows low elongation (tensile strength of 4 MPa). Indium is used in dental alloys, semiconductor components, nuclear reactor panels, sodium lamps, as a strengthening addition in lead-based solders, and in low-melting-temperature solders. The metal has a body-centered tetragonal structure, an atomic radius of 1.63 Å, an electronegativity of 1.81, and a valence of +1 or +3, the trivalent state being the more common.
Common compositions of alloys
The most common application of this alloy is as a solder with a composition of 95 wt% In and 5 wt% Bi. For this composition the liquidus lies at 423 K (150 °C; 302 °F) and the solidus at 398 K (125 °C; 257 °F); In is the first solid phase to form during cooling, with Bi present as a substitutional solid solution.
An alloy composed of 33 wt% In and 67 wt% Bi has a smaller range of applications because it is more difficult to synthesize. This alloy has a eutectic temperature of 382 K (109 °C; 228.2 °F). Its resistance to thermal fatigue is higher than that of the tin–lead alloy, but it produces a larger quantity of slag.
A solder composed of 49 wt% Bi, 21 wt% In, 18 wt% Pb, and 12 wt% Sn is sold commercially as solder 136. This alloy has a density of 8.58 g/cm3, a tensile strength of 43 MPa, a hardness of 14 HB, a eutectic temperature of 331 K (58 °C; 136.4 °F), and a thermal expansion coefficient of 12.8×10⁻⁶/K. It is used for parts where precision is necessary, as in inspections, and for fusible cores for wax-pattern compounds.
Another commercial alloy is solder 117, composed of 44.7 wt% Bi, 22.60 wt% Pb, 19 wt% In, 8.30 wt% Sn, and 5.30 wt% Cd. Its density is 8.86 g/cm3, its tensile strength 37 MPa, its hardness 12 HB, and its eutectic temperature 320 K (47 °C; 116.6 °F). It is used for parts of inspection equipment, spindles for machining (polishing), and molds for the development of prostheses and dental molds.
Other commercially produced compositions include
Solder 174: 26 wt% of In, 17 wt% of Sn, and 57 wt% of Bi, presenting a eutectic temperature of 352 K (79 °C; 174.2 °F).
32.5 wt% of Bi, 16.5 wt% of Sn, and 51 wt% of In, presenting a eutectic temperature of 333 K (60 °C; 140 °F).
48 wt% of Bi, 25.63 wt% of Pb, 12.77 wt% of Sn, 9.6 wt% of Cd and 4 wt% of In, presenting a liquidus temperature of 338 K (65 °C; 149 °F) and a solidus temperature of 334 K (61 °C; 141.8 °F).
The influence of each element
Antimony increases strength without affecting wettability.
Bismuth significantly improves the wettability of the solder. When the composition is more than 47% Bi, the alloy will expand upon cooling.
Cadmium quickly oxidizes, resulting in tarnish and slow soldering. It improves the mechanical properties of the alloys.
Indium lowers the melting point at a rate of about 1.45 °C per 1 wt% of added In (a worked example follows this list). It oxidizes easily, enables soldering for cryogenic applications, and allows soldering of nonmetals. It also makes fabrication easier than Bi does.
Lead, in the presence of In, forms a compound that has a phase change at 387 K (114 °C; 237.2 °F).
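As a rough worked illustration of the indium rate quoted above (an illustrative calculation only, not a figure from the sources cited here), adding 10 wt% In would be expected to lower the melting point by

\[ \Delta T \approx 1.45\ {}^{\circ}\mathrm{C\,/\,wt\%} \times 10\ \mathrm{wt\%} \approx 14.5\ {}^{\circ}\mathrm{C}, \]

all else being equal.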
Phase diagram and solubility
Three intermetallic phases exist at room temperature in the Bi–In system: BiIn, Bi3In5 and BiIn2. Above room temperature there is another phase, named ε.
The solubility of the base elements is 0–0.005 wt% of In in the Bi sublattice and ~0–14 wt% of Bi in the In sites. These values can be explained by the Hume-Rothery rules, which require that the crystal structures be the same, that the atomic radii differ by 15% or less, that the valencies be the same, and that the electronegativities of the two components be similar.
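A minimal sketch of such a Hume-Rothery screening for the Bi–In pair, using only the values quoted in this article (the code and thresholds are illustrative, not a quantitative solubility prediction):

```python
# Rough Hume-Rothery screening for the Bi-In pair, using values quoted in this article.
# Illustrative only: it checks the qualitative criteria, it does not predict solubility limits.

bi = {"radius_A": 1.54, "electronegativity": 1.83,
      "structure": "rhombohedral", "valences": {3, 5}}
indium = {"radius_A": 1.63, "electronegativity": 1.81,
          "structure": "body-centered tetragonal", "valences": {1, 3}}

size_mismatch = abs(indium["radius_A"] - bi["radius_A"]) / bi["radius_A"]   # Bi taken as the solvent
print(f"atomic size mismatch: {size_mismatch:.1%} (favourable if below ~15%)")

dEN = abs(bi["electronegativity"] - indium["electronegativity"])
print(f"electronegativity difference: {dEN:.2f} (favourable if small)")

print("same crystal structure:", bi["structure"] == indium["structure"])    # False -> limits solubility
print("shared valence:", bool(bi["valences"] & indium["valences"]))
```

With these numbers the size criterion (mismatch of roughly 6%) and the electronegativity criterion are met, but the crystal structures differ, which is consistent with the limited mutual solubility described above.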
Main points on the equilibrium diagram.
When the two elements are mixed together, the Bi–In system exhibits three eutectic points.
When cooled from the melt, Bi–In alloys form lamellar structures. There is one eutectoid point on the diagram, at 83 wt% of In; the eutectoid temperature is 322 K (49 °C; 120.2 °F). On cooling through this point, the ε phase decomposes into BiIn2 and In. At the peritectic point, at a composition of 86 wt% of In, the liquid and the already-formed solid In react to form the ε phase. Three intermetallic phases form at equilibrium (a stoichiometric cross-check follows the list):
BiIn (from 00005 to 35.4 wt% of In), with a tetragonal structure and 2 atoms per unit cell.
Bi3In5 (from 47.5 to 97.97 wt% of In), with a tetragonal structure and 4 atoms per unit cell.
BiIn2 (from 52.5 to 53.5 wt% of In), having a hexagonal structure with 2 atoms per unit cell.
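As a consistency check on the composition ranges above (a simple calculation using standard atomic masses, Bi ≈ 208.98 g/mol and In ≈ 114.82 g/mol; not taken from the sources cited here):

\[ w_{\mathrm{In}}(\mathrm{BiIn}) = \frac{114.82}{208.98 + 114.82} \approx 35.5\ \mathrm{wt\%}, \qquad w_{\mathrm{In}}(\mathrm{BiIn_2}) = \frac{2 \times 114.82}{208.98 + 2 \times 114.82} \approx 52.4\ \mathrm{wt\%}, \]

both close to the stoichiometric ends of the homogeneity ranges listed for BiIn and BiIn2.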
Some regions of the diagram were determined thermodynamically, because the phase-formation process there is very slow or the phases are difficult to observe.
The lowest melting value is observed at 345.7 K (72.7 °C; 162.86 °F) and 66.7 wt% of In; on cooling, the phases formed are BiIn2 and ε. There is also a metastable phase, BiIn3, occurring at 62 wt% of In.
General considerations
Fusible alloys undergo precipitation hardening (aging), so their mechanical properties depend on the melting conditions, the solidification rate, the time elapsed since melting, and the conditions under which the alloy is used. The advantages of the Bi–In alloy, compared with traditional Sn- or Pb-based alloys, are a higher thermal fatigue resistance and a lower melting point; its disadvantages are relatively low ductility and a higher percentage of produced slag.
References
Materials science
Alloys | Bismuth–indium | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,877 | [
"Applied and interdisciplinary physics",
"Materials science",
"Chemical mixtures",
"Alloys",
"nan"
] |
44,622,064 | https://en.wikipedia.org/wiki/Interfacial%20polymerization | Interfacial polymerization is a type of step-growth polymerization in which polymerization occurs at the interface between two immiscible phases (generally two liquids), resulting in a polymer that is constrained to the interface. There are several variations of interfacial polymerization, which result in several types of polymer topologies, such as ultra-thin films, nanocapsules, and nanofibers, to name just a few.
History
Interfacial polymerization (then termed "interfacial polycondensation") was first discovered by Emerson L. Wittbecker and Paul W. Morgan in 1959 as an alternative to the typically high-temperature and low-pressure melt polymerization technique. As opposed to melt polymerization, interfacial polymerization reactions can be accomplished using standard laboratory equipment and under atmospheric conditions.
This first interfacial polymerization was accomplished using the Schotten–Baumann reaction, a method to synthesize amides from amines and acid chlorides. In this case a polyamide, usually synthesized via melt polymerization, was synthesized from diamine and diacid chloride monomers. The diacid chloride monomers were placed in an organic solvent (benzene) and the diamine monomers in a water phase, such that when the monomers reached the interface they would polymerize.
Since 1959, interfacial polymerization has been extensively researched and used to prepare not only polyamides but also polyanilines, polyimides, polyurethanes, polyureas, polypyrroles, polyesters, polysulfonamides, polyphenyl esters and polycarbonates. In recent years, polymers synthesized by interfacial polymerization have been used in applications where a particular topological or physical property is desired, such as conducting polymers for electronics, water purification membranes, and cargo-loading microcapsules.
Mechanism
The most commonly used interfacial polymerization methods fall into 3 broad types of interfaces: liquid-solid interfaces, liquid-liquid interfaces, and liquid-in-liquid emulsion interfaces. In the liquid-liquid and liquid-in-liquid emulsion interfaces, either one or both liquid phases may contain monomers. There are also other interface categories, rarely used, including liquid-gas, solid-gas, and solid-solid.
In a liquid-solid interface, polymerization begins at the interface, and results in a polymer attached to the surface of the solid phase. In a liquid-liquid interface with monomer dissolved in one phase, polymerization occurs on only one side of the interface, whereas in liquid-liquid interfaces with monomer dissolved in both phases, polymerization occurs on both sides. An interfacial polymerization reaction may proceed either stirred or unstirred. In a stirred reaction, the two phases are combined using vigorous agitation, resulting in a higher interfacial surface area and a higher polymer yield. In the case of capsule synthesis, the size of the capsule is directly determined by the stirring rate of the emulsion.
Although interfacial polymerization appears to be a relatively straightforward process, there are several experimental variables that can be modified in order to design specific polymers or modify polymer characteristics. Some of the more notable variables include the identity of the organic solvent, monomer concentration, reactivity, solubility, the stability of the interface, and the number of functional groups present on the monomers. The identity of the organic solvent is of utmost importance, as it affects several other factors such as monomer diffusion, reaction rate, and polymer solubility and permeability. The number of functional groups present on the monomer is also important, as it affects the polymer topology: a di-substituted monomer will form linear chains whereas a tri- or tetra-substituted monomer forms branched polymers.
Most interfacial polymerizations are carried out on a porous support in order to provide additional mechanical strength, allowing delicate nanofilms to be used in industrial applications; a good support has pores ranging from 1 to 100 nm. Free-standing films, by contrast, do not use a support and are often used to synthesize unique topologies such as micro- or nanocapsules. In the case of polyurethanes and polyamides especially, the film can be pulled continuously from the interface in an unstirred reaction, forming "ropes" of polymeric film; as the polymer precipitates, it can be withdrawn continuously.
The molecular weight distribution of polymers synthesized by interfacial polymerization is broader than the Flory–Schulz distribution, owing to the high concentration of monomer near the interface. Because the two solutions used in this reaction are immiscible and the reaction rate is high, the mechanism tends to produce a small number of long polymer chains of high molecular weight.
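For reference, the Flory–Schulz (most probable) distribution expected from ideal, well-mixed step-growth polymerization at extent of reaction p gives a number fraction of n-mers of

\[ x_n = (1-p)\,p^{\,n-1}, \]

with a dispersity of 1 + p that approaches 2 at high conversion; this standard result is the baseline relative to which interfacial polymerization yields a broader distribution.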
Mathematical Models
Interfacial polymerization has proven difficult to model accurately because it is a nonequilibrium process; existing models provide either analytical or numerical solutions. The wide range of variables involved in interfacial polymerization has led to several different approaches and models. One of the more general models, summarized by Berezkin and co-workers, treats interfacial polymerization as heterogeneous mass transfer combined with a second-order chemical reaction. To take different variables into account, this model is divided into three scales, yielding three sub-models: the kinetic model, the local model, and the macrokinetic model.
The kinetic model is based on the principles of kinetics, assumes uniform chemical distribution, and describes the system at a molecular level. This model takes into account thermodynamic qualities such as mechanisms, activation energies, rate constants, and equilibrium constants. The kinetic model is typically incorporated into either the local or the macrokinetic model in order to provide greater accuracy.
The local model is used to determine the characteristics of polymerization in a region around the interface, termed the diffusion boundary layer. It can describe a system in which the monomer distribution and concentration are inhomogeneous, and is restricted to a small volume. Parameters determined using the local model include the mass transfer weight, the degree of polymerization, the topology near the interface, and the molecular weight distribution of the polymer. Using local modeling, the dependence of monomer mass-transfer characteristics and polymer characteristics on kinetic, diffusion, and concentration factors can be analyzed. One approach to calculating the local model can be represented by a one-dimensional balance of reaction–diffusion form,

\[ \frac{\partial c_i}{\partial t} = D_i \frac{\partial^2 c_i}{\partial y^2} + J_i, \]

in which c_i is the molar concentration of functional groups in the i-th component of a monomer or polymer, t is the elapsed time, y is a coordinate normal to the surface/interface, D_i is the molecular diffusion coefficient of the functional groups of interest, and J_i is the thermodynamic rate of reaction. Although precise, this differential equation has no analytical solution, and solutions must therefore be found using approximate or numerical techniques.
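A minimal numerical sketch of such a local model (assuming the reaction–diffusion form above with an illustrative second-order rate law J = −k·c_A·c_B for two monomers meeting at the interface; the parameter values, grid, and boundary treatment are placeholders, not values from the cited literature):

```python
# Explicit finite-difference sketch of a local interfacial-polymerization model:
# dc_i/dt = D_i * d2c_i/dy2 + J_i, with J = -k * c_A * c_B (illustrative assumptions).
import numpy as np

ny, half_width = 201, 1.0e-4            # grid points; half-width of the modeled region (m), illustrative
y = np.linspace(-half_width, half_width, ny)
dy = y[1] - y[0]

D_A, D_B = 1e-9, 1e-9                   # diffusion coefficients (m^2/s), illustrative
k = 1e-2                                # second-order rate constant (m^3 mol^-1 s^-1), illustrative

c_A = np.where(y < 0, 100.0, 0.0)       # monomer A starts on one side of the interface (mol/m^3)
c_B = np.where(y > 0, 100.0, 0.0)       # monomer B starts on the other side (mol/m^3)
polymer = np.zeros(ny)                  # cumulative polymerized units

dt = 0.2 * dy**2 / max(D_A, D_B)        # time step within the explicit-scheme stability limit
for _ in range(20000):
    lap_A = np.zeros(ny)
    lap_B = np.zeros(ny)
    lap_A[1:-1] = (c_A[2:] - 2.0 * c_A[1:-1] + c_A[:-2]) / dy**2
    lap_B[1:-1] = (c_B[2:] - 2.0 * c_B[1:-1] + c_B[:-2]) / dy**2
    J = -k * c_A * c_B                  # both monomers are consumed where they overlap
    c_A += dt * (D_A * lap_A + J)
    c_B += dt * (D_B * lap_B + J)
    polymer += dt * (-J)                # polymer accumulates where the reaction occurs

print("peak polymer formation at y =", y[np.argmax(polymer)])   # expected close to the interface (y = 0)
```

In this toy setup the reaction zone stays pinned near y = 0, which is the qualitative picture behind interfacial polymerization: the polymer forms where the two monomer fluxes meet.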
In the macrokinetic model, the progression of an entire system is predicted. One important assumption of the macrokinetic model is that each mass transfer process is independent, and can therefore be described by a local model. The macrokinetic model may be the most important, as it can provide feedback on the efficiency of the reaction process, important in both laboratory and industrial applications.
More specific approaches to modeling interfacial polymerization are described by Ji and co-workers, and include modeling of thin-film composite (TFC) membranes, tubular fibers, hollow membranes, and capsules. These models take into account both reaction- and diffusion-controlled interfacial polymerization under non-steady-state conditions. One model, for thin-film composite (TFC) membranes, describes the thickness of the composite film as a function of time.
In this model, A0, B0, C0, D0, and E0 are constants determined by the system, X is the film thickness, and Xmax is the maximum value of the film thickness, which can be determined experimentally.
Another model, for interfacial polymerization of capsules (encapsulation), is also described.
In this model, A0, B0, C0, D0, E0, I1, I2, I3, and I4 are constants determined by the system and Rmin is the minimum value of the inside diameter of the polymeric capsule wall.
There are several assumptions made by these and similar models, including but not limited to uniformity of monomer concentration, temperature, and film density, and second-order reaction kinetics.
Applications
Interfacial polymerization has found much use in industrial applications, especially as a route to synthesize conducting polymers for electronics. Conductive polymers synthesized by interfacial polymerization such as polyaniline (PANI), Polypyrrole (PPy), poly(3,4-ethylenedioxythiophene), and polythiophene (PTh) have found applications as chemical sensors, fuel cells, supercapacitors, and nanoswitches.
Sensors
PANI nanofibers are the most commonly used for sensing applications. These nanofibers have been shown to detect various gaseous chemicals, such as hydrogen chloride (HCl), ammonia (NH3), hydrazine (N2H4), chloroform (CHCl3), and methanol (CH3OH). PANI nanofibers can be further fine-tuned by doping and by modifying the polymer chain conformation, among other methods, to increase selectivity toward certain gases. A typical PANI chemical sensor consists of a substrate, an electrode, and a selective polymer layer. PANI nanofibers, like other chemiresistors, detect analytes through a change in electrical resistance/conductivity in response to the chemical environment.
Fuel Cells
PPy-coated ordered mesoporous carbon (OMC) composites can be used in direct methanol fuel cell applications. Polymerizing PPy onto the OMC reduces interfacial electrical resistance without altering the open mesopore structure, making PPy-coated OMC composites a more suitable material for fuel cells than plain OMCs.
Separation/Purification Membranes
Composite polymer films synthesized via a liquid-solid interface are the most commonly used to synthesize membranes for reverse osmosis and other applications. One added benefit of using polymers prepared by interfacial polymerization is that several properties, such as pore size and interconnectivity, can be fine-tuned to create a more suitable product for specific applications. For example, synthesizing a polymer with a pore size between the molecular sizes of hydrogen gas and carbon dioxide results in a membrane that is selectively permeable to hydrogen but not to carbon dioxide, effectively separating the compounds.
Cargo-loading Micro- and Nanocapsules
Compared to previous methods of capsule synthesis, interfacial polymerization is an easily modified synthesis that results in capsules with a wide range of properties and functionalities. Once synthesized, the capsules can enclose drugs, quantum dots, and other nanoparticles, to list a few examples. Further fine-tuning of the chemical and topological properties of these polymer capsules could prove an effective route to create drug-delivery systems.
See also
Polymerization
Interfacial polycondensation
References
Polymerization reactions
Polymers | Interfacial polymerization | [
"Chemistry",
"Materials_science"
] | 2,366 | [
"Polymers",
"Polymerization reactions",
"Polymer chemistry"
] |
44,622,167 | https://en.wikipedia.org/wiki/Transition%20metal%20alkyne%20complex | In organometallic chemistry, a transition metal alkyne complex is a coordination compound containing one or more alkyne ligands. Such compounds are intermediates in many catalytic reactions that convert alkynes to other organic products, e.g. hydrogenation and trimerization.
Synthesis
Transition metal alkyne complexes are often formed by the displacement of labile ligands by the alkyne. For example, a variety of cobalt-alkyne complexes arise by the reaction of alkynes with dicobalt octacarbonyl.
Many alkyne complexes are produced by reduction of metal halides:
Cp2TiCl2 + Mg + Me3SiC≡CSiMe3 → Cp2Ti[(CSiMe3)2] + MgCl2
Structure and bonding
The coordination of alkynes to transition metals is similar to that of alkenes. The bonding is described by the Dewar–Chatt–Duncanson model. Upon complexation the C-C bond elongates and the substituents on the alkynyl carbons bend away from 180°. For example, in the phenylpropyne complex Pt(PPh3)2(MeC2Ph), the C-C distance is 1.277(25) Å vs 1.20 Å for a typical alkyne, and the C-C-C angle is distorted about 40° from linearity upon complexation. Because of the bending induced by complexation, strained alkynes such as cycloheptyne and cyclooctyne are stabilized by complexation.
The C≡C vibration of alkynes occurs near 2300 cm−1 in the IR spectrum. This mode shifts upon complexation to around 1800 cm−1, indicating a weakening of the C-C bond.
η2-coordination to a single metal center
When bonded side-on to a single metal atom, an alkyne serves as a dihapto, usually two-electron, donor. For early-metal complexes, e.g. Cp2Ti(C2R2), strong π-backbonding into one of the π* antibonding orbitals of the alkyne is indicated; such a complex is described as a metallacyclopropene derivative of Ti(IV). For late-transition-metal complexes, e.g. Pt(PPh3)2(MeC2Ph), the π-backbonding is less prominent, and the complex is assigned oxidation state 0.
In some complexes, the alkyne is classified as a four-electron donor. In these cases, both pairs of pi-electrons donate to the metal. This kind of bonding was first implicated in complexes of the type W(CO)(R2C2)3.
η2, η2-coordination bridging two metal centers
Because alkynes have two π bonds, they can form stable complexes in which they bridge two metal centers. The alkyne donates a total of four electrons, two to each metal. An example of a complex with this bonding scheme is η2-diphenylacetylene-(hexacarbonyl)dicobalt(0).
Benzyne complexes
Transition metal benzyne complexes represent a special case of alkyne complexes since the free benzynes are not stable in the absence of the metal.
Applications
Metal alkyne complexes are intermediates in the semihydrogenation of alkynes to alkenes:
C2R2 + H2 → cis-C2R2H2
This transformation is conducted on a large scale in refineries, which unintentionally produce acetylene during the production of ethylene. It is also useful in the preparation of fine chemicals. Semihydrogenation affords cis alkenes.
Metal-alkyne complexes are also intermediates in metal-catalyzed trimerizations and tetramerizations. Cyclooctatetraene is produced from acetylene via the intermediacy of metal alkyne complexes. Variants of this reaction are exploited for some syntheses of substituted pyridines.
The Pauson–Khand reaction provides a route to cyclopentenones via the intermediacy of cobalt-alkyne complexes.
Acrylic acid was once prepared by the hydrocarboxylation of acetylene:
C2H2 + H2O + CO → H2C=CHCO2H
With the shift away from coal-based (acetylene) to petroleum-based feedstocks (olefins), catalytic reactions with alkynes are not widely practiced industrially.
Polyacetylene has been produced using metal catalysis involving alkyne complexes.
Cuprous chloride also catalyzes the dimerization of acetylene to vinylacetylene, once used as a precursor to polymers such as neoprene. Mechanistic studies suggest that this reaction proceeds by insertion of acetylene into a copper(I) acetylide complex.
References
Organometallic chemistry
Transition metals
Coordination chemistry | Transition metal alkyne complex | [
"Chemistry"
] | 1,044 | [
"Organometallic chemistry",
"Coordination chemistry"
] |
33,412,595 | https://en.wikipedia.org/wiki/Streak%20seeding | Streak seeding is a method first described during ICCBM-3 by Enrico Stura to induce crystallization in a straight line into a sitting or hanging drop for protein crystallization by introducing microseeds. The purpose is to control nucleation and understand the parameters that make crystals grow. It is also used to test any particular set of conditions to check if crystals could grow under such conditions.
The technique is relatively simple. A cat whisker is used to dislodge seeds from a crystal. The whisker is passed through the drop starting from one side of the drop and ending on the opposite side of the drop in one smooth motion. To allow for vapour diffusion equilibration, the well in which the drop has been placed is resealed. The same procedure is repeated for all the drops whose conditions need testing.
References
Crystallography | Streak seeding | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 177 | [
"Materials science stubs",
"Materials science",
"Crystallography stubs",
"Crystallography",
"Condensed matter physics"
] |
33,413,026 | https://en.wikipedia.org/wiki/Artificial%20muscle | Artificial muscles, also known as muscle-like actuators, are materials or devices that mimic natural muscle and can change their stiffness, reversibly contract, expand, or rotate within one component due to an external stimulus (such as voltage, current, pressure or temperature). The three basic actuation responses—contraction, expansion, and rotation—can be combined within a single component to produce other types of motions (e.g. bending, by contracting one side of the material while expanding the other side). Conventional motors and pneumatic linear or rotary actuators do not qualify as artificial muscles, because there is more than one component involved in the actuation.
Owing to their high flexibility, versatility and power-to-weight ratio compared with traditional rigid actuators, artificial muscles have the potential to be a highly disruptive emerging technology. Though currently in limited use, the technology may have wide future applications in industry, medicine, robotics and many other fields.
Comparison with natural muscles
While there is no general theory that allows for actuators to be compared, there are "power criteria" for artificial muscle technologies that allow for specification of new actuator technologies in comparison with natural muscular properties. In summary, the criteria include stress, strain, strain rate, cycle life, and elastic modulus. Some authors have considered other criteria (Huber et al., 1997), such as actuator density and strain resolution. As of 2014, the most powerful artificial muscle fibers in existence can offer a hundredfold increase in power over equivalent lengths of natural muscle fibers.
Researchers measure the speed, energy density, power, and efficiency of artificial muscles; no one type of artificial muscle is the best in all areas.
Types
Artificial muscles can be divided into three major groups based on their actuation mechanism.
Electric field actuation
Electroactive polymers (EAPs) are polymers that can be actuated through the application of electric fields. Currently, the most prominent EAPs include piezoelectric polymers, dielectric actuators (DEAs), electrostrictive graft elastomers, liquid crystal elastomers (LCE) and ferroelectric polymers. While these EAPs can be made to bend, their low capacities for torque motion currently limit their usefulness as artificial muscles. Moreover, without an accepted standard material for creating EAP devices, commercialization has remained impractical. However, significant progress has been made in EAP technology since the 1990s.
Ion-based actuation
Ionic EAPs are polymers that can be actuated through the diffusion of ions in an electrolyte solution (in addition to the application of electric fields). Current examples of ionic electroactive polymers include polyelectrode gels, ionomeric polymer metallic composites (IPMC), conductive polymers, pyromellitamide gels, and electrorheological fluids (ERF). In 2011, it was demonstrated that twisted carbon nanotubes could also be actuated by applying an electric field.
Pneumatic actuation
Pneumatic artificial muscles (PAMs) operate by filling a pneumatic bladder with pressurized air. Upon applying gas pressure to the bladder, isotropic volume expansion occurs, but is confined by braided wires that encircle the bladder, translating the volume expansion to a linear contraction along the axis of the actuator. PAMs can be classified by their operation and design; namely, PAMs feature pneumatic or hydraulic operation, overpressure or underpressure operation, braided/netted or embedded membranes and stretching membranes or rearranging membranes. Among the most commonly used PAMs today is a cylindrically braided muscle known as the McKibben Muscle, which was first developed by J. L. McKibben in the 1950s.
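A commonly cited idealized static model of such a braided muscle (assuming a cylindrical shape, inextensible braid threads, and negligible friction and end effects; this formula is not given in the passage above) relates the axial contractile force F to the gauge pressure P, a reference diameter D0 set by the braid geometry, and the braid angle θ measured from the muscle's long axis:

\[ F = \frac{\pi D_0^{2} P}{4}\left(3\cos^{2}\theta - 1\right). \]

The force falls to zero near θ ≈ 54.7°, which in this idealization sets the maximum contraction of the actuator.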
Thermal actuation
Fishing line
Artificial muscles constructed from ordinary fishing line and sewing thread can lift 100 times more weight and generate 100 times more power than a human muscle of the same length and weight.
Individual macromolecules are aligned with the fiber in commercially available polymer fibers. By winding them into coils, researchers make artificial muscles that contract at speeds similar to human muscles.
An untwisted polymer fiber, such as polyethylene fishing line or nylon sewing thread, unlike most materials, shortens when heated—by up to about 4% for a 250 K increase in temperature. By twisting the fiber and then winding the twisted fiber into a coil, heating causes the coil to tighten and shorten by up to 49%. Researchers found another way of winding the coil such that heating causes the coil to lengthen by 69%.
One application of thermally-activated artificial muscles is to automatically open and close windows, responding to temperature without using any power.
Carbon nanotubes
Tiny artificial muscles composed of twisted carbon nanotubes filled with paraffin are 200 times stronger than human muscle.
Shape-memory alloys
Shape-memory alloys (SMAs), liquid crystalline elastomers, and metallic alloys that can be deformed and then returned to their original shape when exposed to heat, can function as artificial muscles. Thermal actuator-based artificial muscles offer heat resistance, impact resistance, low density, high fatigue strength, and large force generation during shape changes. In 2012, a new class of electric field-activated, electrolyte-free artificial muscles called "twisted yarn actuators" were demonstrated, based on the thermal expansion of a secondary material within the muscle's conductive twisted structure. It has also been demonstrated that a coiled vanadium dioxide ribbon can twist and untwist at a peak torsional speed of 200,000 rpm.
Control systems
The three types of artificial muscles have different constraints that affect the type of control system they require for actuation. It is important to note, however, that control systems are often designed to meet the specifications of a given experiment, with some experiments calling for the combined use of a variety of different actuators or a hybrid control schema. As such, the following examples should not be treated as an exhaustive list of the variety of control systems that may be employed to actuate a given artificial muscle.
EAP Control
Electro-Active Polymers (EAPs) offer lower weight, faster response, higher power density and quieter operation when compared to traditional actuators. Both electric and ionic EAPs are primarily actuated using feedback control loops, better known as closed-loop control systems.
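As a minimal illustration of the closed-loop idea (a generic textbook PID position loop around a toy first-order actuator model; the gains, plant parameters, and setpoint below are illustrative placeholders, not a published EAP controller):

```python
# Minimal discrete PID position loop around a toy first-order actuator model.
# All gains and plant parameters are illustrative placeholders.

kp, ki, kd = 4.0, 2.0, 0.1       # PID gains (illustrative)
dt, tau, gain = 0.01, 0.5, 1.0   # time step (s), plant time constant (s), plant gain

setpoint = 1.0                   # desired normalized displacement
position = 0.0                   # current actuator displacement
integral = 0.0
prev_error = setpoint - position

for _ in range(1000):            # simulate 10 s of closed-loop operation
    error = setpoint - position
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative   # control signal (e.g. drive voltage)
    prev_error = error
    # Toy first-order plant: d(position)/dt = (gain * u - position) / tau
    position += dt * (gain * u - position) / tau

print(f"final position: {position:.3f} (target {setpoint})")
```

Practical EAP and PAM controllers add the model-based, adaptive, or soft-computing layers discussed in the following subsections to cope with nonlinearity and drift.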
Pneumatic control
Currently there are two types of Pneumatic Artificial Muscles (PAMs). The first type has a single bladder surrounded by a braided sleeve and the second type has a double bladder.
Single bladder surrounded by a braided sleeve
Pneumatic artificial muscles, while lightweight and inexpensive, pose a particularly difficult control problem as they are both highly nonlinear and have properties, such as temperature, that fluctuate significantly over time. PAMs generally consist of rubber and plastic components. As these parts come into contact with each other during actuation, the PAM's temperature increases, ultimately leading to permanent changes in the structure of the artificial muscle over time. This problem has led to a variety of experimental approaches. In summary (provided by Ahn et al.), viable experimental control systems include PID control, adaptive control (Lilly, 2003), nonlinear optimal predictive control (Reynolds et al., 2003), variable structure control (Repperger et al., 1998; Medrano-Cerda et al.,1995), gain scheduling (Repperger et al., 1999), and various soft computing approaches including neural network Kohonen training algorithm control (Hesselroth et al.,1994), neural network/nonlinear PID control (Ahn and Thanh, 2005), and neuro-fuzzy/genetic control (Chan et al., 2003; Lilly et al., 2003).
Control problems regarding highly nonlinear systems have generally been addressed through a trial-and-error approach through which "fuzzy models" (Chan et al., 2003) of the system's behavioral capacities could be teased out (from the experimental results of the specific system being tested) by a knowledgeable human expert. However, some research has employed "real data" (Nelles O., 2000) to train up the accuracy of a given fuzzy model while simultaneously avoiding the mathematical complexities of previous models. Ahn et al.'s experiment is simply one example of recent experiments that use modified genetic algorithms (MGAs) to train up fuzzy models using experimental input-output data from a PAM robot arm.
Double bladder
This actuator consists of an external membrane with an internal flexible membrane dividing the interior of the muscle into two portions. A tendon is secured to the membrane, and exits the muscle through a sleeve so that the tendon can contract into the muscle. A tube allows air into the internal bladder, which then rolls out into the external bladder. A key advantage of this type of pneumatic muscle is that there is no potentially frictive movement of the bladder against an outer sleeve.
Thermal control
SMA artificial muscles, while lightweight and useful in applications that require large force and displacement, also present specific control challenges; namely, SMA artificial muscles are limited by their hysteretic input-output relationships and bandwidth limitations. As Wen et al. discuss, the SMA phase transformation phenomenon is "hysteretic" in that the resulting output SMA strand is dependent on the history of its heat input. As for bandwidth limitations, the dynamic response of an SMA actuator during hysteretic phase transformations is very slow due to the amount of time required for the heat to transfer to the SMA artificial muscle. Very little research has been conducted regarding SMA control due to assumptions that regard SMA applications as static devices; nevertheless, a variety of control approaches have been tested to address the control problem of hysteretic nonlinearity.
Generally, this problem has required the application of either open-loop compensation or closed-loop feedback control. Regarding open-loop control, the Preisach model has often been used for its simple structure and ability for easy simulation and control (Hughes and Wen, 1995). As for closed-loop control, a passivity-based approach analyzing SMA closed loop stability has been used (Madill and Wen, 1994). Wen et al.'s study provides another example of closed-loop feedback control, demonstrating the stability of closed-loop control in SMA applications through applying a combination of force feedback control and position control on a flexible aluminum beam actuated by an SMA made from Nitinol.
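For a concrete sense of the Preisach approach mentioned above, the following is a minimal discretized sketch: the output is a weighted sum of two-state relay operators, each switching up at a threshold α and down at a threshold β. The uniform relay weights, grid, and input history are purely illustrative and are not identified from any SMA data.

```python
# Minimal discretized Preisach hysteresis model (illustrative, not fitted to SMA data).
import numpy as np

n = 20
alphas, betas = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
valid = alphas >= betas                  # Preisach plane: only relays with alpha >= beta
weights = np.where(valid, 1.0, 0.0)
weights /= weights.sum()                 # uniform, normalized relay density
state = -np.ones_like(weights)           # every relay starts in the "down" (-1) state

def preisach_step(u, state):
    """Update all relays for input u and return (new state, scalar output)."""
    state = np.where(u >= alphas, 1.0, state)   # switch up where the input exceeds alpha
    state = np.where(u <= betas, -1.0, state)   # switch down where the input falls below beta
    return state, float(np.sum(weights * state))

# Drive the input up and back down: the two branches differ, i.e. the response is hysteretic.
for u in list(np.linspace(0.0, 1.0, 11)) + list(np.linspace(1.0, 0.0, 11)):
    state, output = preisach_step(u, state)
    print(f"u = {u:.1f}  ->  output = {output:+.3f}")
```

Running the loop shows different output values at the same input on the rising and falling branches, which is the memory effect that open-loop compensation schemes try to invert.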
Chemical control
Chemomechanical polymers containing groups that are either pH-sensitive or serve as selective recognition sites for specific chemical compounds can serve as actuators or sensors. The corresponding gels swell or shrink reversibly in response to such chemical signals. A large variety of supramolecular recognition elements can be introduced into gel-forming polymers, which can bind, and use as initiators, metal ions, different anions, amino acids, carbohydrates, and so on. Some of these polymers exhibit a mechanical response only if two different chemicals or initiators are present, thus performing as logic gates. Such chemomechanical polymers also hold promise for targeted drug delivery. Polymers containing light-absorbing elements can serve as photochemically controlled artificial muscles.
Applications
Artificial muscle technologies have wide potential applications in biomimetic machines, including robots, industrial actuators and powered exoskeletons. EAP-based artificial muscles offer a combination of light weight, low power requirements, resilience and agility for locomotion and manipulation. Future EAP devices will have applications in aerospace, automotive industry, medicine, robotics, articulation mechanisms, entertainment, animation, toys, clothing, haptic and tactile interfaces, noise control, transducers, power generators, and smart structures.
Pneumatic artificial muscles also offer greater flexibility, controllability and lightness compared to conventional pneumatic cylinders. Most PAM applications involve the utilization of McKibben-like muscles. Thermal actuators such as SMAs have various military, medical, safety, and robotic applications, and could furthermore be used to generate energy through mechanical shape changes.
See also
Artificial cell
Electronic nose
Electronic skin
References
Robotics hardware
Smart materials | Artificial muscle | [
"Materials_science",
"Engineering"
] | 2,527 | [
"Smart materials",
"Materials science",
"Robotics engineering",
"Robotics hardware"
] |
38,918,026 | https://en.wikipedia.org/wiki/Drop%20test | A drop test is a method of testing the in-flight characteristics of prototype or experimental aircraft and spacecraft by raising the test vehicle to a specific altitude and then releasing it. Test flights involving powered aircraft, particularly rocket-powered aircraft, may be referred to as drop launches due to the launch of the aircraft's rockets after release from its carrier aircraft.
In the case of unpowered aircraft, the test vehicle falls or glides after its release in an unpowered descent to a landing site. Drop tests may be used to verify the aerodynamic performance and flight dynamics of the test vehicle, to test its landing systems, or to evaluate survivability of a planned or crash landing. This allows the vehicle's designers to validate computer flight models, wind tunnel testing, or other theoretical design characteristics of an aircraft or spacecraft's design.
High-altitude drop tests may be conducted by carrying the test vehicle aboard a mothership to a target altitude for release. Low-altitude drop tests may be conducted by releasing the test vehicle from a crane or gantry.
Aircraft and lifting-body testing
Carrier landing simulation tests
The landing gear on aircraft used on aircraft carriers must be stronger than that on land-based aircraft, because of the higher approach speeds and sink rates during carrier landings. As early as the 1940s, drop tests were conducted by lifting a carrier-based plane such as the Grumman F6F Hellcat to a height of ten feet and then dropping it, simulating the impact of a landing at . The F6F was ultimately dropped from a height of , demonstrating it could absorb twice the force of a carrier landing. Drop tests are still used in the development and testing of carrier-based aircraft; in 2010, the Lockheed Martin F-35C Lightning II underwent drop tests to simulate its maximum descent rate of during carrier landings.
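The equivalence between drop height and landing sink rate follows from simple free-fall kinematics (an illustrative calculation; the speed figures used in the original tests are not restated here):

\[ v = \sqrt{2gh}, \qquad h = 10\ \mathrm{ft} \approx 3.05\ \mathrm{m} \;\Rightarrow\; v \approx \sqrt{2 \times 9.81\ \mathrm{m/s^2} \times 3.05\ \mathrm{m}} \approx 7.7\ \mathrm{m/s} \approx 25\ \mathrm{ft/s}, \]

so a ten-foot drop reproduces the vertical impact of a touchdown with a sink rate of roughly 25 ft/s.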
Experimental aircraft
Numerous experimental and prototype aircraft have been drop tested or drop launched. Many powered X-planes, including the Bell X-1, Bell X-2, North American X-15, Martin Marietta X-24A and X-24B, Orbital Sciences X-34, Boeing X-40, and NASA X-43A, were specifically designed to be drop launched. Test articles of the unpowered NASA X-38 were also drop tested, from altitudes of up to , in order to study its aerodynamic and handling qualities, autonomous flight capabilities, and deployment of its steerable parafoil.
Some experimental aircraft designed for airborne launches, such as the Northrop HL-10, have made both unpowered drop tests and powered drop launches. Prior to powered flights using its rocket engine, the HL-10 made 11 unpowered drop flights in order to study the handling qualities and stability of the lifting body in flight.
Balls 8 mothership
Early experimental aircraft, such as the X-1 and X-2, were carried aboard modified B-29 and B-50 bombers. In the 1950s, the United States Air Force provided NASA with a B-52 bomber to be used as a mothership for the experimental X-15. Built in 1955, the B-52 was only the 10th to come off the assembly line, and was used by the Air Force for flight testing before turning it over to NASA. Flying with NASA tail number 008, the plane was nicknamed Balls 8 by Air Force pilots, following a tradition of referring to aircraft numbered with multiple zeroes as "Balls" plus the final number.
Balls 8 received significant modifications in order to carry the X-15. A special pylon, designed to carry and release the X-15, was installed under the right wing between the fuselage and inboard engine. A notch was also cut out of one of the right wing's flaps so that the plane could accommodate the X-15's vertical tail. Balls 8 was one of two such bombers modified to carry the X-15; while the other plane was retired in 1969 after the end of the X-15 program, NASA continued using Balls 8 for drop tests until it was retired in 2004. During its 50-year career, Balls 8 carried numerous experimental vehicles including the HL-10, X-24A, X-24B, X-38, and X-43A.
X-24B role in Space Shuttle development
During the design of the Space Shuttle orbiter in the 1970s, engineers debated whether to design the orbiter to glide to an unpowered landing or equip the orbiter with pop-out jet engines in order to make a powered landing. While powered landing design required carrying the engines and jet fuel, adding weight and complexity to the orbiter, engineers began favoring the powered landing option. In response, NASA conducted unpowered drop tests of the X-24B to demonstrate the feasibility of landing a lifting-body aircraft in unpowered flight. In 1975, the X-24B aircraft was dropped from a Balls 8 at an altitude of above the Mojave Desert, and then ignited rocket engines to increase speed and propel it to . Once the rocket engine cut off, the high-speed and high-altitude conditions permitted the X-24B to simulate the path of a Space Shuttle orbiter under post-atmospheric reentry conditions. The X-24B successfully made two unpowered precision landings at Edwards Air Force Base, demonstrating the feasibility of an unpowered lifting body design for the Space Shuttle. These successes convinced those in charge of the Space Shuttle program to commit to an unpowered landing design, which would save weight and increase the orbiter's payload capacity.
Space Shuttle Enterprise
In 1977, a series of drop tests of the Space Shuttle Enterprise were conducted to test the Space Shuttle's flight characteristics. Because the Space Shuttle is designed to glide unpowered during its descent and landing, a series of drop tests using a test orbiter were used to demonstrate that the orbiter could be successfully controlled in unpowered flight. These drop tests, known as the Approach and Landing Test program, used a modified Boeing 747, known as the Shuttle Carrier Aircraft or SCA, to carry Enterprise to an altitude of . After a series of captive-flight tests in which the orbiter was not released, five free-flight tests were performed in August through October 1977.
While free-flight tests of Enterprise involved the release of an unpowered aircraft from a powered aircraft, these tests were not typical of drop testing because the orbiter was actually carried and released from a position above the SCA. This arrangement was potentially dangerous because it placed Enterprise in free flight directly in front of the SCA's tail fin immediately after release. As a result, the "drop" was conducted by using a series of carefully planned maneuvers to minimize the risk of aircraft collision. Immediately after release, the Enterprise would climb to the right while the SCA performed a shallow dive to the left, allowing for quick vertical and horizontal separation between the two aircraft.
Dream Chaser
In mid-2013, Sierra Nevada Corporation planned to conduct drop tests of its Dream Chaser prototype commercial spaceplane. In the uncrewed first flight test, a Columbia 234-UT helicopter would release the Dream Chaser prototype from an altitude of , after which the vehicle was planned to fly autonomously to an unpowered landing at Dryden Flight Research Center. The Dream Chaser completed this free flight and passed the drop test on November 11 over the Mojave Desert; the uncrewed vehicle made a landing at Edwards Air Force Base.
Crewed capsule testing
Drop tests of prototype crewed space capsules may be done to test the survivability of landing, primarily by testing the capsule's descent characteristics and its post-reentry landing systems. These tests are typically carried out uncrewed prior to any human spaceflight testing.
Apollo command module
In 1963, North American Aviation built BP-19A, an uncrewed boilerplate Apollo command module for use in drop testing. NASA conducted a series of tests in 1964 which involved dropping BP-19A from a C-133 Cargomaster in order to test the capsule's parachute systems prior to the start of crewed testing of the Apollo spacecraft.
Orion capsule
In 2011 and 2012, NASA conducted a series of short drop tests on the survivability of water landings in its Orion crewed capsule by repeatedly dropping an Orion test vehicle into a large water basin. The tests simulated water landings at speeds varying from by changing the height of the drop gantry above the basin. The range of landing velocities allowed NASA to simulate a range of possible entry and landing conditions during water landings.
In 2011 and 2012, NASA also conducted drop tests of the Orion test vehicle's parachute systems and land-based landing capabilities. In each test, the Orion spacecraft was dropped from a C-17 or C-130 cargo plane. For testing, the capsule is mounted on a pallet system and placed inside the cargo aircraft. Parachutes on the pallet are used to pull the pallet and capsule out of the rear of the aircraft; the capsule then separates from the pallet and begins its free fall descent.
On March 4, 2012, a C-17 dropped an Orion test article from an altitude of . The capsule's parachutes successfully deployed between , slowing the spacecraft for a ground landing in the Arizona desert. The capsule landed at a speed of , well below the designed maximum touchdown speed.
Boeing CST-100
In September 2011, Boeing conducted a series of drop tests, carried out in the Mojave Desert of southeast California, to validate the design of the CST-100 capsule's parachute and airbag cushioning landing systems. The airbags are located underneath the heat shield of the CST-100, which is designed to be separated from the capsule while under parachute descent at about altitude. The tests were carried out at ground speeds between in order to simulate cross wind conditions at the time of landing. Bigelow Aerospace built the mobile test rig and conducted the tests.
In April 2012, Boeing conducted another drop test of its CST-100 prototype space capsule in order to test the capsule's landing systems. The test vehicle was raised by helicopter to an altitude of and then released; the capsule's three main parachutes then deployed successfully and slowed the capsule's descent. Immediately prior to landing, the capsule's six airbags inflated underneath the capsule in order to absorb some of the impact energy from landing. Similar drop tests are planned in order to conduct additional airbag testing, as well as drogue chute and heat shield jettison tests.
Helicopter testing
In 2009 and 2010, NASA conducted a pair of drop tests to study the survivability of helicopter crashes. Using an MD 500 helicopter donated by the U.S. Army, NASA dropped the helicopter at an angle from an altitude of to simulate a hard helicopter landing. Sophisticated crash test dummies with simulated internal organs were located inside the helicopter and used to assess internal injuries from such a crash. Due to extensive damage to the test helicopter after the second test, no third test was planned.
References
Aerospace engineering
Product testing | Drop test | [
"Engineering"
] | 2,244 | [
"Aerospace engineering"
] |
38,919,556 | https://en.wikipedia.org/wiki/C20H29NO3 | The molecular formula C20H29NO3 (molar mass: 331.45 g/mol, exact mass: 331.2147 u) may refer to:
ADDA (amino acid)
Ditran (JB-329)
EA-3167
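The quoted molar mass can be reproduced directly from the formula. A minimal Python sketch (the atomic weights below are conventional reference values, not taken from the text):

```python
# Conventional standard atomic weights in g/mol
ATOMIC_WEIGHTS = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}

def molar_mass(composition: dict) -> float:
    """Sum the atomic weights weighted by the element counts."""
    return sum(ATOMIC_WEIGHTS[element] * count for element, count in composition.items())

# C20H29NO3
print(f"{molar_mass({'C': 20, 'H': 29, 'N': 1, 'O': 3}):.2f} g/mol")  # ~331.45
```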
Molecular formulas | C20H29NO3 | [
"Physics",
"Chemistry"
] | 72 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
38,919,712 | https://en.wikipedia.org/wiki/Neutron%E2%80%93proton%20ratio | The neutron–proton ratio (N/Z ratio or nuclear ratio) of an atomic nucleus is the ratio of its number of neutrons to its number of protons. Among stable nuclei and naturally occurring nuclei, this ratio generally increases with increasing atomic number. This is because electrical repulsive forces between protons scale with distance differently than strong nuclear force attractions. In particular, most pairs of protons in large nuclei are not far enough apart, such that electrical repulsion dominates over the strong nuclear force, and thus proton density in stable larger nuclei must be lower than in stable smaller nuclei where more pairs of protons have appreciable short-range nuclear force attractions.
For many elements with atomic number Z small enough to occupy only the first three nuclear shells, that is up to that of calcium (Z = 20), there exists a stable isotope with N/Z ratio of one. The exceptions are beryllium (N/Z = 1.25) and every element with odd atomic number between 9 and 19 inclusive (though in those cases N = Z + 1 always allows for stability). Hydrogen-1 (N/Z ratio = 0) and helium-3 (N/Z ratio = 0.5) are the only stable isotopes with neutron–proton ratio under one. Uranium-238 has the highest N/Z ratio of any primordial nuclide at 1.587, while mercury-204 has the highest N/Z ratio of any known stable isotope at 1.55. Radioactive decay generally proceeds so as to change the N/Z ratio to increase stability. If the N/Z ratio is greater than 1, alpha decay increases the N/Z ratio, and hence provides a common pathway towards stability for decays involving large nuclei with too few neutrons. Positron emission and electron capture also increase the ratio, while beta decay decreases the ratio.
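The ratios quoted above follow directly from the proton and neutron counts of each nuclide. A minimal Python check (the Z and N values are standard nuclide data, not stated in the text):

```python
# (protons Z, neutrons N) for the nuclides mentioned above
nuclides = {
    "hydrogen-1": (1, 0),
    "helium-3": (2, 1),
    "mercury-204": (80, 124),
    "uranium-238": (92, 146),
}

for name, (protons, neutrons) in nuclides.items():
    print(f"{name}: N/Z = {neutrons / protons:.3f}")
# hydrogen-1: 0.000, helium-3: 0.500, mercury-204: 1.550, uranium-238: 1.587
```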
Nuclear waste exists mainly because nuclear fuel has a higher stable N/Z ratio than its fission products.
Semi-empirical description
For stable nuclei, the neutron-proton ratio is such that the binding energy is at a local minimum or close to a minimum.
From the liquid drop model, this binding energy is approximated by the empirical Bethe–Weizsäcker formula
Given a value of and ignoring the contributions of nucleon spin pairing (i.e. ignoring the term), the binding energy is a quadratic expression in that is minimized when the neutron-proton ratio is .
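A sketch of the omitted formula and of the resulting optimum ratio, using the usual positive binding-energy convention and the conventional coefficient names a_V, a_S, a_C, a_A (in this convention stability corresponds to an extremum of E_B at fixed A, i.e. a maximum of binding energy and a minimum of nuclear mass-energy):

```latex
E_B(A,Z) \approx a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}}
          - a_A \frac{(A-2Z)^2}{A} + \delta(A,Z)

% Ignoring the pairing term \delta(A,Z), approximating Z(Z-1) \approx Z^2,
% and setting \partial E_B / \partial Z = 0 at fixed A gives
\frac{N}{Z} \approx 1 + \frac{a_C}{2 a_A} A^{2/3}
```

With commonly quoted coefficient values of roughly a_C ≈ 0.7 MeV and a_A ≈ 23 MeV, this gives N/Z ≈ 1 for light nuclei and about 1.5 to 1.6 near A ≈ 238, consistent with the ratios quoted earlier.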
See also
Nuclear fission
Nuclear drip line
References
Nuclear physics
Ratios | Neutron–proton ratio | [
"Physics",
"Mathematics"
] | 511 | [
"Arithmetic",
"Nuclear and atomic physics stubs",
"Ratios",
"Nuclear physics"
] |
38,922,521 | https://en.wikipedia.org/wiki/Normalized%20chromosome%20value | Normalized chromosome value (NCV) is a mathematical calculation for comparing each chromosome under tested in cell free DNA (cfDNA) for detecting genetic disorder of the fetus. NCV calculation removes variation within and between sequencing runs to optimize test precision.
References
Genetics | Normalized chromosome value | [
"Biology"
] | 56 | [
"Genetics"
] |
21,804,458 | https://en.wikipedia.org/wiki/Supercritical%20steam%20generator | A supercritical steam generator is a type of boiler that operates at supercritical pressure and temperature, frequently used in the production of electric power.
In contrast to a subcritical boiler, in which steam bubbles form, a supercritical steam generator operates above the critical pressure and temperature . Under these conditions, the liquid water density decreases smoothly with no phase change, becoming indistinguishable from steam. The water temperature drops below the critical point as the fluid does work in a high-pressure turbine and enters the generator's condenser, resulting in slightly less fuel use. The efficiency of power plants with supercritical steam generators is higher than that of subcritical plants because thermodynamic efficiency increases with the temperature difference across which the working fluid operates. At supercritical pressure the higher-temperature steam is converted more efficiently to mechanical energy in the turbine (as given by Carnot's theorem).
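The efficiency argument can be illustrated with the ideal Carnot limit, eta = 1 - T_cold/T_hot. A minimal Python sketch (the steam temperatures and the 30 °C condenser temperature below are illustrative assumptions, not values from the text, and real plant efficiencies sit well below the Carnot limit):

```python
def carnot_efficiency(t_hot_c: float, t_cold_c: float = 30.0) -> float:
    """Ideal Carnot efficiency between a hot and a cold reservoir given in degrees C."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Illustrative main-steam temperatures for comparison only (assumed values)
for label, t_steam in [("subcritical", 540.0),
                       ("supercritical", 600.0),
                       ("ultra-supercritical", 700.0)]:
    print(f"{label:>20s}: Carnot limit ~ {carnot_efficiency(t_steam):.1%}")
```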
Technically, the term "boiler" should not be used for a supercritical pressure steam generator as boiling does not occur.
History of supercritical steam generation
Contemporary supercritical steam generators are sometimes referred to as Benson boilers. In 1922, Mark Benson was granted a patent for a boiler designed to convert water into steam at high pressure.
Safety was the main concern behind Benson's concept. Earlier steam generators were designed for relatively low pressures of up to about , corresponding to the state of the art in steam turbine development at the time. One of their distinguishing technical characteristics was the riveted water/steam separator drum. These drums were where the water filled tubes were terminated after having passed through the boiler furnace.
These header drums were intended to be partially filled with water and above the water there was a baffle filled space where the boiler's steam and water vapour collected. The entrained water droplets were collected by the baffles and returned to the water pan. The mostly-dry steam was piped out of the drum as the separated steam output of the boiler. These drums were often the source of boiler explosions, usually with catastrophic consequences.
However, this drum could be completely eliminated if the evaporation separation process was avoided altogether. This would happen if water entered the boiler at a pressure above the critical pressure (), was heated to a temperature above the critical temperature (), and was then expanded (through a simple nozzle) to dry steam at some lower subcritical pressure. This could be obtained at a throttle valve located downstream of the evaporator section of the boiler.
As development of Benson technology continued, boiler design soon moved away from the original concept introduced by Mark Benson. In 1929, a test boiler that had been built in 1927 began operating in the thermal power plant at Gartenfeld in Berlin for the first time in subcritical mode with a fully open throttle valve. The second Benson boiler began operation in 1930 without a pressurizing valve at pressures between at the Berlin cable factory. This application represented the birth of the modern variable-pressure Benson boiler. After that development, the original patent was no longer used. The "Benson boiler" name, however, was retained.
1957: Unit 6 at the Philo Power Plant in Philo, Ohio was the first commercial supercritical steam-electric generating unit in the world, and it could operate short-term at ultra-supercritical levels. It took until 2012 for the first US coal-fired plant designed to operate at ultra-supercritical temperatures to be opened, John W. Turk Jr. Coal Plant in Arkansas.
Two innovations have been projected to improve once-through steam generators:
A new type of heat-recovery steam generator based on the Benson boiler has operated successfully at the Cottam combined-cycle power plant in central England.
The vertical tubing in the combustion chamber walls of coal-fired steam generators combines the operating advantages of the Benson system with the design advantages of the drum-type boiler. Construction of a first reference plant, the Yaomeng power plant in China, commenced in 2001.
On 3 June 2014, the Australian government's research organization CSIRO announced that they had generated 'supercritical steam' at a pressure of and in what it claims is a world record for solar thermal energy.
Definitions
These definitions regarding steam generation were found in a report on coal production in China investigated by the Center for American Progress.
Subcritical – up to and (the critical point of water)
Supercritical – up to the ; requires advanced materials
Ultra-supercritical – up to and pressure levels of (additional innovations, not specified, would allow even more efficiency)
Nuclear power plant steam typically enters turbines at subcritical values – for U-Tube Steam Generators and , with comparable temperature and pressure for Once Through Steam Generators type.
The term "advanced ultra-supercritical" (AUSC) or "700°C technology" is sometimes used to describe generators where the water is above .
The term High-Efficiency, Low-Emissions ("HELE") has been used by the coal industry to describe supercritical and ultra-supercritical coal generation.
Industry-leading (as of 2019) Mitsubishi Hitachi Power Systems charts its gas turbine combined cycle power generation efficiency (lower heating value) at well under 55% for a gas turbine inlet temperature of , roughly 56% for , about 58% for , and 64% for , all of which far exceed (because of the Carnot limit) the efficiency thresholds of AUSC or ultra-supercritical steam technology, which remains limited by the steam temperature.
See also
Supercritical water reactor
Boiler
Notes
External links
Thermopedia, "Benson boiler"
Boilers
Chemical equipment
Steam boilers
Steam engines
Steam generators
Power station technology | Supercritical steam generator | [
"Chemistry",
"Engineering"
] | 1,153 | [
"Chemical equipment",
"Boilers",
"Pressure vessels",
"nan"
] |
21,805,416 | https://en.wikipedia.org/wiki/Transmission-line%20matrix%20method | The transmission-line matrix (TLM) method is a space and time discretising method for computation of electromagnetic fields. It is based on the analogy between the electromagnetic field and a mesh of transmission lines. The TLM method allows the computation of complex three-dimensional electromagnetic structures and has proven to be one of the most powerful time-domain methods along with the finite difference time domain (FDTD) method. The TLM was first explored by British electrical engineer Raymond Beurle while working at English Electric Valve Company in Chelmsford. After he had been appointed professor of electrical engineering at the University of Nottingham in 1963 he jointly authored an article, "Numerical solution of 2-dimensional scattering problems using a transmission-line matrix", with Peter B. Johns in 1971.
Basic principle
The TLM method is based on Huygens' model of wave propagation and scattering and the analogy between field propagation and transmission lines. Therefore, it considers the computational domain as a mesh of transmission lines, interconnected at nodes. The figure on the right shows a simple example: a 2D TLM mesh with a voltage pulse of amplitude 1 V incident on the central node. This pulse will be partially reflected and transmitted according to transmission-line theory. If we assume that each line has a characteristic impedance , then the incident pulse effectively sees three transmission lines in parallel with a total impedance of . The reflection coefficient and the transmission coefficient are given by
The energy injected into the node by the incident pulse and the total energy of the scattered pulses are correspondingly
Therefore, the energy conservation law is fulfilled by the model.
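Where the expressions above have dropped out of the text, they can be reconstructed from standard transmission-line theory. A sketch consistent with the description (the symbol Z_0 used for the line impedance is an assumed name):

```latex
% The incident pulse sees three lines of impedance Z_0 in parallel, i.e. a load Z_0/3
R = \frac{Z_0/3 - Z_0}{Z_0/3 + Z_0} = -\frac{1}{2},
\qquad
T = 1 + R = \frac{1}{2}

% Energy (proportional to V^2) injected by the 1 V incident pulse and carried
% away by the four scattered pulses:
E_i \propto 1^2 = 1,
\qquad
E_s \propto \left(-\tfrac{1}{2}\right)^{2} + 3\left(\tfrac{1}{2}\right)^{2} = 1
```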
The next scattering event excites the neighbouring nodes according to the principle described above. It can be seen that every node turns into a secondary source of spherical wave. These waves combine to form the overall waveform. This is in accordance with Huygens principle of light propagation.
In order to show the TLM schema we will use time and space discretisation. The time-step will be denoted with and the space discretisation intervals with , and . The absolute time and space will therefore be , , , , where is the time instant and are the cell coordinates. When the discretisation intervals are equal, the single value is used, which is the lattice constant. In this case the following holds:
where is the free space speed of light.
The 2D TLM node
The scattering matrix of a 2D TLM node
If we consider an electromagnetic field distribution in which the only non-zero components are , and (i.e. a TE-mode distribution), then Maxwell's equations in Cartesian coordinates reduce to
We can combine these equations to obtain
The figure on the right presents a structure referred to as a series node. It describes a block of space dimensions , and that consists of four ports. and are the distributed inductance and capacitance of the transmission lines. It is possible to show that a series node is equivalent to a TE-wave, more precisely the mesh current I, the x-direction voltages (ports 1 and 3) and the y-direction voltages (ports 2 and 4) may be related to the field components , and . If the voltages on the ports are considered, , and the polarity from the above figure holds, then the following is valid
where .
and dividing both sides by
Since and substituting gives
This reduces to Maxwell's equations when .
Similarly, using the conditions across the capacitors on ports 1 and 4, it can be shown that the corresponding two other Maxwell equations are the following:
Having these results, it is possible to compute the scattering matrix of a shunt node. The incident voltage pulse on port 1 at time-step k is denoted as . Replacing the four line segments from the above figure with their Thevenin equivalent it is possible to show that the following equation for the reflected voltage pulse holds:
If all incident waves as well as all reflected waves are collected in one vector, then this equation may be written down for all ports in matrix form:
where and are the incident and the reflected pulse amplitude vectors.
For a series node the scattering matrix S has the following form
Connection between TLM nodes
In order to describe the connection between adjacent nodes by a mesh of series nodes, look at the figure on the right. As the incident pulse in timestep k+1 on a node is the scattered pulse from an adjacent node in timestep k, the following connection equations are derived:
By modifying the scattering matrix inhomogeneous and lossy materials can be modelled. By adjusting the connection equations it is possible to simulate different boundaries.
The shunt TLM node
Apart from the series node, described above there is also the shunt TLM node, which represents a TM-mode field distribution. The only non-zero components of such wave are , , and . With similar considerations as for the series node the scattering matrix of the shunt node can be derived.
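The scatter–connect cycle can be sketched in a few lines of code for such a 2D mesh. A minimal Python example follows, using the scattering behaviour derived in the Basic principle section (reflection of −1/2 on the incident port, transmission of +1/2 to the other three ports of a uniform lossless mesh); the port numbering (0 = west, 1 = north, 2 = east, 3 = south), the excitation, and the open mesh edges (pulses leaving the mesh are simply discarded) are assumptions for illustration:

```python
import numpy as np

NX, NY, STEPS = 50, 50, 100

# Scattering matrix of a uniform 2D node: -1/2 back onto the incident port,
# +1/2 onto each of the other three ports
S = 0.5 * np.ones((4, 4)) - np.eye(4)

# Incident pulse amplitudes, one per port of every node: V[x, y, port]
V = np.zeros((NX, NY, 4))
V[NX // 2, NY // 2, :] = 1.0  # excite all four ports of the central node

for _ in range(STEPS):
    # Scatter: each node maps its incident pulses to reflected pulses
    V = V @ S.T

    # Connect: a pulse leaving a node becomes the incident pulse on the
    # facing port of the neighbouring node (ports: 0=W, 1=N, 2=E, 3=S)
    new = np.zeros_like(V)
    new[1:, :, 0] = V[:-1, :, 2]   # east-going pulses arrive on west ports
    new[:-1, :, 2] = V[1:, :, 0]   # west-going pulses arrive on east ports
    new[:, 1:, 3] = V[:, :-1, 1]   # north-going pulses arrive on south ports
    new[:, :-1, 1] = V[:, 1:, 3]   # south-going pulses arrive on north ports
    V = new

# A simple node-level observable: the sum of incident port voltages
field = V.sum(axis=2)
print(field.shape, float(np.abs(field).max()))
```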
3D TLM models
Most problems in electromagnetics require a three-dimensional grid. As we now have structures that describe TE and TM-field distributions, intuitively it seems possible to define a combination of shunt and series nodes providing a full description of the electromagnetic field. Such attempts have been made, but because of the complexity of the resulting structures they proved to be not very useful. Using the analogy that was presented above leads to calculation of the different field components at physically separated points. This causes difficulties in providing simple and efficient boundary definitions. A solution to these problems was provided by Johns in 1987, when he proposed the structure known as the symmetrical condensed node (SCN), presented in the figure on the right. It consists of 12 ports because two field polarisations are to be assigned to each of the 6 sides of a mesh cell.
The topology of the SCN cannot be analysed using Thevenin equivalent circuits. More general energy and charge conservation principles are to be used.
The electric and the magnetic fields on the sides of the SCN node number (l,m,n) at time instant k may be summarised in 12-dimensional vectors
They can be linked with the incident and scattered amplitude vectors via
where is the field impedance, is the vector of the amplitudes of the incident waves to the node, and is the vector of the scattered amplitudes. The relation between the incident and scattered waves is given by the matrix equation
The scattering matrix S can be calculated. For the symmetrical condensed node with ports defined as in the figure the following result is obtained
where the following matrix was used
The connection between different SCNs is done in the same manner as for the 2D nodes.
Open-sourced code implementation of 3D-TLM
The George Green Institute for Electromagnetics Research (GGIEMR) has open-sourced an efficient implementation of 3D-TLM, capable of parallel computation by means of MPI named GGITLM and available online.
References
C. Christopoulos, The Transmission Line Modeling Method: TLM, Piscataway, NY, IEEE Press, 1995.
Russer, P., Electromagnetics, Microwave Circuit and Antenna Design for Communications Engineering, Second edition, Artec House, Boston, 2006,
P. B. Johns and M. O'Brien, "Use of the transmission line modelling (t.l.m.) method to solve nonlinear lumped networks", The Radio and Electronic Engineer, 1980.
J. L. Herring, Developments in the Transmission-Line Modelling Method for Electromagnetic Compatibility Studies, PhD thesis, University of Nottingham, 1993.
Mansour Ahmadian, Transmission Line Matrix (TLM) modelling of medical ultrasound PhD thesis, University of Edinburgh 2001
Computational electromagnetics
Electromagnetism
Electrodynamics | Transmission-line matrix method | [
"Physics",
"Mathematics"
] | 1,561 | [
"Electromagnetism",
"Physical phenomena",
"Computational electromagnetics",
"Computational physics",
"Fundamental interactions",
"Electrodynamics",
"Dynamical systems"
] |
21,807,867 | https://en.wikipedia.org/wiki/Acoustic%20resonance%20spectroscopy | Acoustic resonance spectroscopy (ARS) is a method of spectroscopy in the acoustic region, primarily the sonic and ultrasonic regions. ARS is typically much more rapid than HPLC and NIR. It is non destructive and requires no sample preparation as the sampling waveguide can simply be pushed into a sample powder/liquid or in contact with a solid sample.
To date, the AR spectrometer has successfully differentiated and quantified sample analytes in various forms; (tablets, powders, and liquids). It has been used to measure and monitor the progression of chemical reactions, such as the setting and hardening of concrete from cement paste to solid. Acoustic spectrometry has also been used to measure the volume fraction of colloids in a dispersion medium, as well as for the investigation of physical properties of colloidal dispersions, such as aggregation and particle size distribution. Typically, these experiments are carried out with sinusoidal excitation signals and the experimental observation of signal attenuation. From a comparison of theoretical attenuation to experimental observation, the particle size distribution and aggregation phenomena are inferred.
History
Dipen Sinha of the Los Alamos National Laboratory developed ARS in 1989. Most published work in acoustics has been in the ultrasonic region, and the instrumentation has dealt with propagation through a medium rather than a resonance effect. One of the first, if not the first, publications related to acoustic resonance appeared in 1988 in the journal Applied Spectroscopy. The researchers designed a V-shaped quartz rod instrument that utilized ultrasonic waves to obtain signatures of microliters of different liquids. The researchers did not have any type of classification statistics or identification protocols; they simply observed ultrasonic resonance signatures of these different materials. Specifically, Sinha was working on developing an ARS instrument that could detect nuclear, chemical, and biological weapons. By 1996, he had successfully developed a portable ARS unit that could be used on a battlefield. The unit can detect and identify deadly chemicals stored in containers in a matter of minutes. In addition, the instrument was further developed by a different research group (Dr. Robert Lodder, University of Kentucky) and their work was also published in Applied Spectroscopy. The researchers created a V-shaped instrument that could bridge the sonic and ultrasonic regions, creating more versatility. The term acoustic resonance spectrometer was coined for the V-shaped spectrometer as well. Since the study in 1994, the ARS has evolved and been used to differentiate wood species, differentiate pharmaceutical tablets, determine burn rates and determine dissolution rates of tablets. In 2007 Analytical Chemistry featured the past and current work of the lab of Dr. Lodder discussing the potential of acoustics in the analytical chemistry and engineering fields.
Theory
Vibrations
There are two main types of vibrations: free and forced. Free vibrations are the natural or normal modes of vibration for a substance. Forced vibrations are caused by some sort of excitation to make the analyte resonate beyond its normal modes. ARS employs forced vibrations upon the analyte unlike most commonly used techniques which use free vibrations to measure the analyte. ARS excites multiple normal modes by sweeping the excitation frequency of an analyte with no internal vibrations to obtain a resonance spectrum. These resonance frequencies greatly depend on the type of analyte being measured and also depend greatly on the physical properties of the analyte itself (mass, shape, size, etc.). The physical properties will greatly influence the range of frequencies produced by the resonating analyte. In general small analytes have megahertz frequencies while larger analytes can be only a few hundred hertz. The more complex the analyte the more complex the resonance spectrum.
Quartz Rod
The ARS is essentially set up to create a fingerprint for different samples by constructive and destructive interferences. Figure 1 is a schematic of the quartz rod ARS which illustrates the path of the sound through the quartz rod. A function generator is the source though any device that is capable of outputting sound in voltage form could be used (i.e. CD player, MP3 player or sound card). White noise is generated and the voltage is converted into a sound wave by a piezoelectric disc coupled to the quartz rod. The sound resonates down the quartz rod which is shown as a blue sinusoidal wave and two key interactions occur. A portion of the energy (red) is introduced into the sample and interacts in a specific manner dependent of the sample and another portion of the energy (blue) continues unaltered through the quartz rod. The two energies will still have the same frequency though they will have changes in their phase and possibly amplitude. The two waves recombine after the sample and constructive or destructive interference occurs depending on the phase shift and amplitude change due to the sample. The altered combined energy is converted to an electrical voltage by another piezoelectric disc at the end of the quartz rod. The voltage is then recorded onto a computer by a sound card. The sample is coupled to the quartz rod at constant pressure which is monitored by a pressure transducer which also acts as the sample holder. Rubber grommets are used to secure the quartz rod to a stable stand minimizing coupling of the rod to the surroundings. Broadband white noise is used to obtain a full spectrum; however, most sound cards only pick up between 20 and 22,050 Hz. The waveform that is sent to the computer is a time-based signal of the interactions of white noise with the sample. Fast Fourier transform (FFT) is performed on the waveform to transform the time-based signal into the more useful frequency spectrum.
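The final step described above, turning the recorded time-domain waveform into a frequency spectrum, is a routine FFT operation. A minimal Python sketch (the 44.1 kHz sample rate and the synthetic test signal are assumptions standing in for a real sound-card recording):

```python
import numpy as np

FS = 44_100          # sample rate in Hz (typical sound card, assumed)
DURATION = 1.0       # seconds of recorded waveform

# Stand-in for the recorded signal: two resonances plus noise
t = np.arange(int(FS * DURATION)) / FS
waveform = (np.sin(2 * np.pi * 1_200 * t)
            + 0.5 * np.sin(2 * np.pi * 7_800 * t)
            + 0.1 * np.random.randn(t.size))

# One-sided amplitude spectrum of the time-domain signal
spectrum = np.abs(np.fft.rfft(waveform)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / FS)

# Report the strongest spectral peak (ignoring the DC bin)
peak = np.argmax(spectrum[1:]) + 1
print(f"strongest component near {freqs[peak]:.0f} Hz")
```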
Detection limits
A multidimensional population translation experiment was utilized to determine the detection limits of an ARS device. Populations with small multidimensional separation, in this case aspirin and ibuprofen, were used to determine that tablets with a 0.08 mm difference in thickness, a 0.0046 g difference in mass, and a 0.01658 g/mL difference in density were not separable by ARS. Using vitamin C and acetaminophen for the largest multidimensional separation, tablets with a 0.27 mm difference in thickness, a 0.0756 g difference in mass, and a 0.01157 g/mL difference in density were inseparable. Experimentally, the dynamic range of ARS is a factor of ten.
Applications
One potential application of ARS involves the rapid and nondestructive identification of drug tablet verification. Currently, there are no unfailing methods to eliminate contaminated or mislabeled products, a process which sometimes results in millions of pills having to be recalled. More studies need to be completed to determine if ARS could be used as a process analytical technique in industry to prevent problems with pills before they are shipped. ARS may also be useful for quantifying the active ingredient in pharmaceutical ointments and gels.
References
Spectroscopy
Acoustics
1989 in science
20th-century inventions | Acoustic resonance spectroscopy | [
"Physics",
"Chemistry"
] | 1,419 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Classical mechanics",
"Acoustics",
"Spectroscopy"
] |
21,809,607 | https://en.wikipedia.org/wiki/NC-SI | NC-SI, abbreviated from network controller sideband interface, is an electrical interface and protocol defined by the Distributed Management Task Force (DMTF). The NC-SI enables the connection of a baseboard management controller (BMC) to one or more network interface controllers (NICs) in a server computer system for the purpose of enabling out-of-band system management. This allows the BMC to use the network connections of the NIC ports for the management traffic, in addition to the regular host traffic.
The NC-SI defines a control communication protocol between the BMC and NICs. The NC-SI is supported over several transports and physical interfaces.
Hardware interface
The RMII-based transport (RBT) interface defined by NC-SI is based on the RMII specification with some modifications that allow connection of multiple network controllers to a single BMC. The NC-SI can also operate over a variety of other electrical interfaces, including SMBus and PCI Express when used over the Management Component Transport Protocol (MCTP).
The table below sums up the signals comprising the RBT interface.
Traffic types
The NC-SI defines two fundamental types of traffic, pass-through and control traffic. Pass-through traffic consists of data exchanged between the BMC and the network via the NC-SI interface. Control traffic is used to inventory and configure aspects of NIC operation and control the NC-SI interface.
Control traffic is broken down into three sub-types:
Commands, sent from the BMC to one of the NICs
Responses, sent by the NICs as results of the commands
Asynchronous event notifications (AENs), sent asynchronously by the NICs (analogous to interrupts) upon the occurrence of specified events
When the NC-SI is used over RBT, standard Ethernet framing is used for all traffic types. Control traffic is identified by using an EtherType of 0x88F8. When the NC-SI is used in conjunction with MCTP, MCTP provides the packetization methodology and traffic type identification.
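As an illustration of the framing on RBT, a raw Ethernet header carrying the NC-SI control EtherType can be assembled as below. This is a minimal Python sketch: the destination and source MAC addresses and the placeholder payload are illustrative assumptions, not values defined by the specification.

```python
import struct

NCSI_ETHERTYPE = 0x88F8  # EtherType used for NC-SI control traffic

def ethernet_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    """Prepend a standard 14-byte Ethernet II header to the payload."""
    return struct.pack("!6s6sH", dst_mac, src_mac, NCSI_ETHERTYPE) + payload

# Illustrative addresses only; real NC-SI addressing is defined by the specification
dst = bytes.fromhex("ffffffffffff")   # assumed broadcast destination
src = bytes.fromhex("020000000001")   # assumed locally administered BMC address
frame = ethernet_frame(dst, src, b"\x00" * 30)  # placeholder control payload

print(frame[:14].hex())  # 14-byte header ending in the EtherType 88f8
```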
See also
Management Component Transport Protocol (MCTP)
Platform Management Components Intercommunication (PMCI)
References
External links
DMTF Homepage
NC-SI Specification rev 1.1.0
NC-SI over MCTP Binding Specification rev 1.2.2
DMTF standards
Out-of-band management | NC-SI | [
"Technology"
] | 489 | [
"Computer standards",
"DMTF standards"
] |
31,835,842 | https://en.wikipedia.org/wiki/4%2C4%27-Azobis%284-cyanopentanoic%20acid%29 | 4,4′-Azobis(4-cyanopentanoic acid) (ACPA) is a free radical initiator used in polymer synthesis. ACPA is a water-soluble initiator used in both heterogeneous and homogeneous free-radical polymerizations. It is used as an initiator in reversible addition−fragmentation chain transfer polymerization (RAFT). When heated to decomposition, c. 70 °C, it releases N2 and produces 2 equivalents of reactive radicals capable of initiating polymerization.
References
Azo compounds
Nitriles
Dicarboxylic acids
Radical initiators | 4,4'-Azobis(4-cyanopentanoic acid) | [
"Chemistry",
"Materials_science"
] | 131 | [
"Radical initiators",
"Functional groups",
"Organic compounds",
"Polymer chemistry",
"Reagents for organic chemistry",
"Nitriles",
"Organic compound stubs",
"Organic chemistry stubs"
] |
31,836,156 | https://en.wikipedia.org/wiki/Textile%20Research%20Journal | The Textile Research Journal is a peer-reviewed scientific journal that covers the field of materials science, especially as applying to textiles. The journal's editor is Dong Zhang. It was established in 1931 and is published by SAGE Publications.
Abstracting and indexing
The journal is abstracted and indexed in Scopus, and the Science Citation Index Expanded. According to the Journal Citation Reports, its 2020 impact factor is 1.820, ranking it 9th out of 25 journals in the category "Materials Science, Textiles".
References
External links
SAGE Publishing academic journals
English-language journals
Materials science journals
Academic journals established in 1931
Textile journals | Textile Research Journal | [
"Materials_science",
"Engineering"
] | 125 | [
"Materials science stubs",
"Materials science",
"Materials science journals",
"Materials science journal stubs",
"Textile journals"
] |
31,843,107 | https://en.wikipedia.org/wiki/Ferroelectric%20ceramics | Ferroelectric ceramics is a special group of minerals that have ferroelectric properties: the strong dependence of the dielectric constant of temperature, electrical field, the presence of hysteresis and others.
The first widely used ferroelectric ceramic material, and one that remains important today, was barium titanate (BaO·TiO2, i.e. BaTiO3), which shows ferroelectric properties not only as a single crystal but also in the polycrystalline (ceramic) state. Small additions of other substances do not significantly change its properties. The strongly nonlinear capacitance of capacitors made from ferroelectric ceramic materials is exploited in so-called varikonds, of types VC-1, VC-2, VC-3 and others.
Ferroelectric ceramics are used in capacitors, sensors, actuators, non-volatile memory, and medical devices due to their ability to maintain and reverse electric polarization. Due to their piezoelectric and pyroelectric properties, they are also used in energy harvesting, infrared detectors, and ultrasonic transducers. Additionally, they are employed in RF filters, electro-optic devices, and various energy-efficient applications.
References
Гірничий енциклопедичний словник (Mining Encyclopedic Dictionary): in 3 volumes, edited by V. S. Biletskyi. Donetsk: Skhidnyi vydavnychyi dim, 2001–2004.
Materials science
Ceramic materials
Ferroelectric materials | Ferroelectric ceramics | [
"Physics",
"Materials_science",
"Engineering"
] | 353 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Ferroelectric materials",
"Materials science",
"Materials",
"Electrical phenomena",
"Ceramic materials",
"nan",
"Ceramic engineering",
"Hysteresis",
"Matter"
] |
31,843,527 | https://en.wikipedia.org/wiki/Commodification%20of%20water | The commodification of water refers to the process of turning water, especially freshwater, from a public good into a tradable commodity also known as an economic good. This transformation introduces water as a product into a market which previously did not have water as a tradable item. Usually, this is done in the hope of seeing the resource be managed more efficiently. The commodification of water has increased significantly during the 20th century, along with the concerns for water scarcity and environmental degradation.
The emergence of the commodification of water was centered around two main views: that people might soon struggle to access water, and that government regulation of environmentally damaging behavior was ineffective. Commodification is theoretically rooted in the neoclassical discourse which says that by assigning an economic value to a good or service, one can prevent misuse. The commodification of water, although not new, is considered part of a more recent market-based approach to water governance and provokes both approval and disapproval from stakeholders.
Through the establishment of Western private property rights and market mechanisms, some argue that water will be allocated more efficiently. Karen Bakker describes this market-based approach proposed by neoliberals as "market environmentalism": a method of resource regulation that promises economic and environmental objectives can be met in tandem. To this extent the commodification of water can be viewed as an extension of capitalist and market tendencies into new spaces and social relations. Karl Marx termed this phenomenon, "primitive accumulation". For this reason there remains serious doubt as to whether commodification of water can help improve access to freshwater supplies and conserve water as a resource.
Origins of commodification of water
Water is a basic need of life and presently, an estimated one billion persons do not have access to safe drinking water, and even more have inadequate sanitation. Global institutions, including the United Nations, warn of the impact of a growing global population and the effects of climate change on the ability of people to access freshwater. This is especially concerning considering that the bottled water market has consistently earned more than four billion U.S. Dollars a year since the turn of the century. This makes the debate over improving current and future water provision an urgent one and therefore thrusts discussion over approaches to water governance into the foreground to avert a looming crisis. This theory was phrased in Fortune Magazine as:
"Water promises to be to the 21st century what oil was to the 20th century: the precious commodity that determines the wealth of nations"
Issues surrounding the provision of water are nothing new; however, the approach to the problematic has changed dramatically during the last century. For the majority of the 20th century water was publicly provisioned in an era of the Keynesian welfare state. The state incurred high capital costs in building long-lasting infrastructure that could readily supply the population with universal access to water in the pursuit of economic growth and industrialisation. The emphasis was on social equity, with water resources state owned and centrally regulated through command and control regulation. The emphasis was on providing universal access and supply led solutions. This approach was heavily criticized during the late 20th century and under the prevailing ethos of neoliberal economic globalization, commodification of water was increasingly presented as the answer. The ability of the state to continue provision of water efficiently was questioned in the latter half of the 20th century in parallel with the environmentalist movement which raised awareness of the resulting environmental degradation and ecological disturbances. The fiscal crisis of the 1970s decreased public spending in most developed nations, leading to further deterioration of state-run infrastructure and further exacerbating problems of provision. Together with critics' insistence on state inability to operate efficiently these factors created an impetus for change in water governance. The precipitated change in attitude as to how water should be governed was market-based governance, proposed by neoliberals, and becoming the dominant approach to environmental problems. This shift in attitude led to the intensification of the commodification of water.
Commodification
In neoclassical terms, a commodity is a good or service that can be traded or exchanged in the marketplace for another commodity or money. Commodification is rooted in Marxist political theory and entails the creation of an economic good that previously was not prescribed an economic value. This takes place through the application of market mechanisms with the intended result being a standardized class of goods or services. Once commodified an economic good can be bought or sold at a price determined by market exchange, and as such market values replace social values previously attached to the good. It is this transformation from a public good to an economic good that neoliberals claim leads to better management and allocation of a resource, such as water. In accordance with welfare economics, this view infers the more efficiently managed a resource is the higher a society's welfare. This neoliberal sentiment of water as an economic good not unlike any other is visible in a quote from The Economist: "Only by accepting water as a tradable commodity will sensible decisions be possible" (The Economist, 1992).
Theoretical explanation for commodification
The theoretical reasoning for proposing commodification as an answer to environmental problems can be related back to Garrett Hardin's work "The Tragedy of the Commons". In this he proposed that environmental problems do not have a technical solution because they are common resource problems. Water has historically been classified a "common good" or part of the global commons which has led to overexploitation and poor management. According to Hardins' theory multiple individuals acting both independently and rationally will continue to deplete common resources in the pursuit of self-interest. Concerns surrounding the overexploitation of water created it as a scarce resource prompting commodification as an effort to protect it. For a commodification to be achieved the commons are enclosed into private property which provides the motivating force for conservation and efficient management in the absence of strong collective action. Commodification places an economic value on an environmental resource which seeks to include and internalize the costs of using it within economic calculations. The logic proceeds, if a resource can be valued correctly it can be protected. To ascertain an economic value and produce a tradable commodity, commodification requires the natural object to be removed from its biophysical context thus transforming its identity and value. Through commodification water becomes responsive to market forces which are assumed to be better equipped at allocating resources and regulating environmentally damaging behavior than command and control regulation thus providing justification for the shift in attitude.
Market-based approach
The creation of water as a private good and a scarce resource enabled a market-based approach to be put forward as the best available solution to protect it. This shift towards market-based solutions was not limited to water and was typical of a macroeconomic neoliberal approach to the environment. The market approach assumes that private actors will act rationally to maximize self-interest given the best information available. Markets are proposed to effectively pool knowledge allowing interaction between many stakeholders, and as a result are more effective at producing collective action and promoting public interest when compared to regulatory control. Through commodification water is paid for on the basis of market determined supply and demand instead of ability to pay. The supposed ability of market mechanisms to realize a resource's true 'value' is assumed to lead to its protection and conservation. "Market environmentalism" best describes this sentiment and emerged from the same line of thinking as ecological modernization, proposing the market as the solution and not the cause of the problem whereby the previously antagonistic relationship between economic growth and environmental protection is reconciled allowing both objectives to be achieved. This is appealing to policymakers and private interests alike in that it envisages solutions within the capitalist system.
Government to governance
In light of this, the commodification of water can be viewed as a market-based governance approach which seeks to confront conflicts between public and private interests, and as such is part of a broader shift in focus from 'government' to 'governance'. Governance represents a new method by which society is governed which seeks to involve more stakeholders in decision making. The release of the water sector from state ownership and subsequent efforts to commodify water allow more individual actors to participate in decision making, thereby increasing the probability of consensual decisions being produced, which would not have been possible when decisions were previously made by one actor, the government. The state's role in environmental problems was realigned and scaled down to position it as just one of many stakeholders aligned along horizontal networks. Through public/private partnerships it is hoped that resource management will take place more effectively through the pooling of knowledge from a wider range of stakeholders.
Criticisms of commodification
Although the extent to which water has been commodified is of debate, attempts to do so have led to improvements in biological and chemical water quality as the environment has been prioritised to a greater degree in decision making. The benefits of commodification are well documented by its neoliberal proponents however criticisms concerning commodification and market environmentalism as a solution to environmental problems are less considered. Commodification inherently requires the enclosure of public assets to allow trade within the market place as economic goods. Criticism of this process identifies commodification as a systemic flaw within the capitalist system. Marx's theory of primitive accumulation describes how the capitalist system needs to continually expand into non-capitalist sectors which would have originally taken place through imperialism. Marx's criticism of commodification refers to this reckless addiction to growth and extends to the manner in which it changes a good's materiality so that natural objects lose their use value simply in exchange for a price. He believed that commodification transformed not only goods but relationships previously untouched by commerce, harming society in the process. David Harvey built upon Marx's theory and coined the phrase "accumulation by dispossession" which refers to this notion of expansion but considers it inherent within the capitalist system, which will find ways other than imperialism to achieve its goal. This form of capital accumulation tends to direct wealth away from the poor towards the elite and direct capital from the public to the private sector. This has exacerbated social inequality and directed natural resources away from their geographical context causing damage to ecosystems across the globe.
The commodification of water has created a situation whereby the provision of the resource is in the hands of a select few multinationals, with the top two multinationals controlling approximately 75% of the industry. This 'looting of the commons' has led to amplification of already existing problems within water governance. Commodification necessitates full cost-recovery pricing and the removal of cross-subsidies to ensure free market trade. In South Africa this has led to thousands of disconnections from the water supply for those who cannot pay; commentators fear that this has harmed the health of the nation's people and further decreased social equality.
The formation of private public partnerships (PPP) is the standard model for transferring public goods to private goods with the aim to reconcile conflict between the public and private sector. They are promoted by global institutions such as the World Bank and the International Monetary Fund as the best available way to manage water resources efficiently and are rapidly increasing in number providing evidence for the global trend of commodification. The aforementioned institutions promote such behaviour by imposing lending agreements on developing nations requiring them to adopt their neoliberal principles, which leaves national governments in the developing world little choice but to adopt such practices. PPPs are intended to increase the involvement of a wider range of stakeholders through horizontal networks including NGOs, civil society, and the public and private sector, however the increasing influence of multinational companies may serve to undermine this. Multinational water companies due to their enormous size are able to exert strong pressure on national governments to cooperate with their demands. PPPs have recently been implicated in projects that have overexploited natural resources in the name of profit. The relative power of multinationals in comparison to other stakeholders engineers a dominant bargaining power in decision making. With the support of various institutions together with the intrinsic urge of capitalism to expand into new areas this trend looks set to continue.
Likelihood of full commodification
Conferences formed to address the issues in water governance, such as the Third World Water Forum, are becoming more apparent in the 21st century; however, these can often fall foul of the same endemic problems outlined above. NGOs and members of civil society criticised the Third World Water Forum for failing to declare water a human right and continuing to prefer commodification as the solution to the current water crisis. They argue that the world's poor stand to become worse off as a result of commodification as objectives of social equality and universal access are traded in for economic efficiency and profit. The social inequality and environmental degradation that have arisen are proof that economic valuation failed to take into account key social and environmental costs of using water. Nevertheless, there is opposition to continued commodification, of the kind Karl Polanyi termed a 'counter movement'. In this case, the movement is concerned with returning water to the global commons. NGOs and members of civil society have formed voluntary networks with the aim of banning future decisions to further commodify water. These movements have arisen in opposition to capitalist accumulation through globalisation and are serving to decrease the trend in commodification. Full commodification faces difficulties theoretically, as it relies on an economic good or service being standardised and readily exchangeable in the market place irrespective of its spatial and temporal dimensions. Bakker argues that this is nearly impossible for water due to its biophysical characteristics, which contravene all efforts to fully commodify it. Capitalism depends on a changing balance between (re-)commodification and decommodification, which as Bob Jessop points out means that the processes of commodification, decommodification and recommodification will continue to appear in 'waves' due to capitalism's continual pursuit of accumulation by dispossession.
Water Colonization
Only one half of one percent of water on earth is fresh, liquid, and accessible. Some communities in the Middle East, South-Central Africa, Northern China, the Western United States, and Mexico live in areas completely devoid of fresh water. More than half of the completely dry countries are in Sub-Saharan Africa, affecting a total of nearly one billion people. Despite this, water bottling corporations, energy development companies, and mining operations continue to siphon water from politically or economically poorer nearby communities. Using water in this way and at this rate is primarily detrimental to poorer, marginalized, and indigenous communities. Examples of this are common worldwide, but are especially dangerous when such companies set their sights on drier regions of the world. Water colonization expresses itself in ways similar to colonization on a grander scale. The capture and acquisition of land by violence or coercion, adaptation to the culture or practices of the nearby people, and misguided post-hoc justification are all aspects of colonization, even on this smaller scale. The consequences are also very similar. Under threat of state violence, the residents of the community are rendered politically powerless, often end up bearing the costs of the colonization, and are left with the political, economic, and environmental consequences. The major difference is that this colonization is usually conducted by local or state governments or by corporations. Although these operations do not often involve violence, there have been examples of paramilitary groups being hired to defend corporations. These paramilitary groups have been known to kill protesters, community members, and accidental trespassers. They are mostly hired by large corporations, especially for mining operations.
Resistance Against Corporate Control and Limited Choices
The commodification of water has resulted in limited consumer choice due to corporate concentration and retail channel restrictions. In the water industry, corporate consolidation has led to exclusive distribution contracts, restricting the availability of diverse water brands in retail spaces, schools, and restaurants in the USA and Canada.
Consumers often lack the opportunity to exercise choice or express resistance through their purchasing decisions. Unlike commodities such as coffee or organic vegetables where consumers can opt for ethically sourced or fair-traded options, the commodification of water often restricts consumer agency. Limited shelf space, exclusive contracts, and narrow retail channels limit the availability of alternative, ethically conscious water choices. Most consumers face constraints when attempting to resist bottled water commodification. The act of opting for tap water or using refillable bottles can be inconspicuous, while the prevalence of bottled water usage is overtly visible in daily litter and consumer behavior.
Despite these limitations, there are emerging voices of resistance against the commodification of water. A survey conducted among young adults revealed that about 34 percent chose not to purchase bottled water due to concerns about pricing, environmental impact, and objections to supporting large corporations.
Online platforms, including websites and blogs, serve as arenas for voicing resentment and suspicion towards the commodification of water. Criticism is aimed at the concept of paying for water, the environmental impact of plastic usage, and the profit motives of corporations. This emotional response highlights broader societal concerns about corporate manipulation and exploitation within consumer culture. Notably, instances such as the recall of a popular water brand, which was revealed to be filtered tap water, exemplify public sentiments against perceived corporate deceit and manipulation, contributing to a broader sense of mistrust within consumer culture.
See also
Water resource policy
Water privatisation
Commodification of nature
Accumulation by dispossession
Tragedy of the commons
Neoliberalism
Capitalism
Human needs
Blue Gold: World Water Wars
External links
References
Environmental economics
Environmental issues with water
Commodification | Commodification of water | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,600 | [
"Hydrology",
"Water supply",
"Environmental economics",
"Environmental engineering",
"Environmental social science"
] |
41,651,205 | https://en.wikipedia.org/wiki/Chain%20fountain | The chain fountain phenomenon, also known as the self-siphoning beads, Mould effect, or Newton beads, is a physical phenomenon observed with a chain placed inside a jar. One end of the chain is pulled from the jar and is allowed to fall under the influence of gravity. This process establishes a self-sustaining flow of the chain which rises over the edge and goes down to the floor or ground beneath it, as if being sucked out of the jar by an invisible siphon. For chains with small adjacent beads, the arc can ascend into the air over and above the edge of the jar with a noticeable gap; this gap is greater when the chain falls farther.
The self-siphoning effect is also observed in non-Newtonian fluids.
History
The self-siphoning phenomenon has been known for some time, and had become a topic of public discussion many times in the past. Science entertainer Steve Spangler presented this phenomenon on TV in 2009, both with beads and viscoelastic liquids. This phenomenon is classically known as Newton's beads.
The effect is most pronounced when using a long ball chain. The higher the jar containing the chain is placed above the ground, the higher the chain will rise above the jar during the "siphoning" phase. As demonstrated in an experiment, when the jar is placed above the ground and the chain is sufficiently long, the arc of the chain fountain can reach a height of about above the jar.
The phenomenon with the rising chain was already described in 2011 as an open problem for the 2012 International Young Physicists' Tournament (IYPT) and subsequently brought to widespread public attention in a video made by science presenter Steve Mould (namesake of the effect) in 2013. Mould's YouTube video in which he demonstrated the phenomenon of self-siphoning rising beads, and his subsequent proposed explanation on a BBC show, brought the problem to the attention of academics John Biggins and Mark Warner of Cambridge University, who published their findings in Proceedings of the Royal Society about what they called "chain fountain" or "Mould effect".
Explanation
A variety of explanations have been proposed as to how the phenomenon can best be explained in terms of kinematic physics concepts such as energy and momentum. Biggins and Warner suggest that the origin of the upward force is related to the stiffness of the chain links, and the bending restrictions of each chain joint.
Furthermore, because the beads of the chain can drag laterally within the jar across other stationary links, the moving beads of the chain can bounce or jump vertically when they strike the immobile links. This effect contributes to the chain's movement, but is not the primary cause.
In non-Newtonian fluids
The self-siphoning phenomenon can also be observed in viscoelastic fluids that consist mainly of long polymers, such as polyethylene glycol.
See also
Catenary
References
Notes
External links
Science demonstrations
Articles containing video clips
fountain
Effects of gravity
Falling
Rheology | Chain fountain | [
"Chemistry"
] | 607 | [
"Rheology",
"Fluid dynamics"
] |
41,660,225 | https://en.wikipedia.org/wiki/Bismuth%28III%29%20nitrate | Bismuth(III) nitrate is a salt composed of bismuth in its cationic +3 oxidation state and nitrate anions. The most common solid form is the pentahydrate. It is used in the synthesis of other bismuth compounds. It is available commercially. It is the only nitrate salt formed by a group 15 element, indicative of bismuth's metallic nature.
Preparation and reactions
Bismuth nitrate can be prepared by the reaction of bismuth metal and concentrated nitric acid.
Bi + 4HNO3 → Bi(NO3)3 + 2H2O + NO
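The stoichiometry of this preparation lends itself to a simple worked calculation. The Python sketch below is illustrative only: it assumes standard atomic weights, and a practical preparation would use an excess of concentrated acid rather than the exact stoichiometric amount.

```python
# Molar masses in g/mol (standard atomic weights)
M_BI = 208.98
M_HNO3 = 1.008 + 14.007 + 3 * 15.999  # ~63.01 g/mol

def hno3_consumed(mass_bi_g):
    """Mass of HNO3 (g) consumed per the equation Bi + 4 HNO3 -> Bi(NO3)3 + 2 H2O + NO."""
    moles_bi = mass_bi_g / M_BI
    return 4 * moles_bi * M_HNO3

print(f"{hno3_consumed(1.0):.2f} g HNO3 per 1.00 g Bi")  # ~1.21 g
```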
It dissolves in nitric acid but is readily hydrolysed to form a range of oxynitrates when the pH increases above 0.
It is also soluble in acetone, acetic acid and glycerol but practically insoluble in ethanol and ethyl acetate.
Some uses in organic synthesis have been reported, for example the nitration of aromatic compounds and the selective oxidation of sulfides to sulfoxides.
Bismuth nitrate forms insoluble complexes with pyrogallol and cupferron and these have been the basis of gravimetric methods of determining bismuth content.
On heating bismuth nitrate can decompose forming nitrogen dioxide, NO2.
Structure
The crystal form is triclinic, and contains 10 coordinate Bi3+, (three bidentate nitrate ions and four water molecules).
References
Bismuth nitrate
Nitrates | Bismuth(III) nitrate | [
"Chemistry"
] | 311 | [
"Oxidizing agents",
"Nitrates",
"Salts"
] |
37,474,095 | https://en.wikipedia.org/wiki/Adolph%20Winkler%20Goodman | Adolph Winkler Goodman (July 20, 1915 – July 30, 2004) was an American mathematician who contributed to number theory, graph theory and to the theory of univalent functions: The conjecture on the coefficients of multivalent functions named after him is considered the most interesting challenge in the area after the Bieberbach conjecture, proved by Louis de Branges in 1985.
Life and work
In 1948, he made a mathematical conjecture on coefficients of p-valent functions, first published in his Columbia University dissertation and then in a closely following paper. After the proof of the Bieberbach conjecture by Louis de Branges, this conjecture is considered the most interesting challenge in the field, and he himself and coauthors answered the conjecture affirmatively for some classes of p-valent functions. His research in the field continued in the paper Univalent functions and nonanalytic curves, published in 1957: in 1968, he published the survey Open problems on univalent and multivalent functions, which eventually led him to write the two-volume book Univalent Functions.
Apart from his research activity, he was actively involved in teaching: he wrote several college and high school textbooks including Analytic Geometry and the Calculus, and the five-volume set Algebra from A to Z.
He retired in 1993, became a Distinguished Professor Emeritus in 1995, and died in 2004.
Selected works
Notes
Biographical references
References
Additional sources
20th-century American mathematicians
21st-century American mathematicians
Complex analysts
American mathematical analysts
American number theorists
Graph theorists
1915 births
2004 deaths | Adolph Winkler Goodman | [
"Mathematics"
] | 310 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
37,478,118 | https://en.wikipedia.org/wiki/Open-circuit%20test | The open-circuit test, or no-load test, is one of the methods used in electrical engineering to determine the no-load impedance in the excitation branch of a transformer.
The no load is represented by the open circuit, which is represented on the right side of the figure as the "hole" or incomplete part of the circuit.
Method
The secondary of the transformer is left open-circuited. A wattmeter is connected to the primary. An ammeter is connected in series with the primary winding. A voltmeter is optional since the applied voltage is the same as the voltmeter reading. Rated voltage is applied at primary.
If the applied voltage is normal voltage then normal flux will be set up. Since iron loss is a function of applied voltage, normal iron loss will occur. Hence the iron loss is maximum at rated voltage. This maximum iron loss is measured using the wattmeter. Since the impedance of the series winding of the transformer is very small compared to that of the excitation branch, all of the input voltage is dropped across the excitation branch. Thus the wattmeter measures only the iron loss. This test only measures the combined iron losses consisting of the hysteresis loss and the eddy current loss. Although the hysteresis loss is less than the eddy current loss, it is not negligible. The two losses can be separated by driving the transformer from a variable frequency source since the hysteresis loss varies linearly with supply frequency and the eddy current loss varies with the frequency squared.
Hysteresis and eddy current loss: at a given peak flux density Bmax, the hysteresis loss is approximately proportional to f·Bmax^n (with n ≈ 1.6–2) and the eddy current loss to f²·Bmax², so that at constant flux density the total iron loss takes the form Pfe = kh·f + ke·f².
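As an illustration of the loss-separation idea, the following Python sketch assumes the Pfe = kh·f + ke·f² form given above (flux density held constant by adjusting the applied voltage with frequency) and solves for the two coefficients from open-circuit measurements at two supply frequencies; the wattmeter readings are invented for demonstration only.

```python
# Separate hysteresis and eddy-current losses from measurements at two frequencies.
# Assumed model: P_fe(f) = k_h * f + k_e * f**2 at constant flux density.
def separate_losses(f1, p1, f2, p2):
    # Solve the 2x2 linear system [[f1, f1^2], [f2, f2^2]] . [k_h, k_e] = [p1, p2]
    det = f1 * f2**2 - f2 * f1**2
    k_h = (p1 * f2**2 - p2 * f1**2) / det
    k_e = (f1 * p2 - f2 * p1) / det
    return k_h, k_e

# Hypothetical wattmeter readings (W) at 40 Hz and 60 Hz
k_h, k_e = separate_losses(40.0, 80.0, 60.0, 140.0)
for f in (40.0, 50.0, 60.0):
    print(f"{f:.0f} Hz: hysteresis {k_h * f:.1f} W, eddy current {k_e * f**2:.1f} W")
```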
Since the secondary of the transformer is open, the primary draws only no-load current, which will have some copper loss. This no-load current is very small and because the copper loss in the primary is proportional to the square of this current, it is negligible. There is no copper loss in the secondary because there is no secondary current.
The secondary side of the transformer is left open, so there is no load on the secondary side. Therefore, power is no longer transferred from primary to secondary in this approximation, and negligible current goes through the secondary windings. Since no current passes through the secondary windings, no magnetic field is created, which means zero current is induced on the primary side. This is crucial to the approximation because it allows us to ignore the series impedance since it is assumed that no current passes through this impedance.
The parallel shunt component on the equivalent circuit diagram is used to represent the core losses. These core losses come from the change in the direction of the flux and eddy currents. Eddy current losses are caused by currents induced in the iron due to the alternating flux. In contrast to the parallel shunt component, the series component in the circuit diagram represents the winding losses due to the resistance of the coil windings of the transformer.
Current, voltage and power are measured at the primary winding to ascertain the admittance and power-factor angle.
Another method of determining the series impedance of a real transformer is the short-circuit test.
Calculations
The no-load current I0 is very small.
If W is the wattmeter reading (the iron loss), then W = V1 I0 cos φ0.
That equation can be rewritten as cos φ0 = W / (V1 I0).
Thus, the core loss and magnetizing components of the no-load current are Ic = I0 cos φ0 and Im = I0 sin φ0.
Impedance
By using the above equations, the magnetizing reactance Xm and the core loss resistance Rc can be calculated as Xm = V1 / Im and Rc = V1 / Ic.
Thus, the exciting impedance is Z0 = V1 / I0,
or, since Rc and Xm form a parallel branch, 1/Z0 = √((1/Rc)² + (1/Xm)²).
Admittance
The admittance is the inverse of impedance. Therefore, Y0 = 1/Z0 = I0 / V1.
The conductance can be calculated as G0 = W / V1².
Hence the susceptance B0 = √(Y0² − G0²),
or, equivalently, B0 = Im / V1.
Here,
W is the wattmeter reading
V1 is the applied rated voltage
I0 is the no-load current
Im is the magnetizing component of no-load current
Ic is the core loss component of no-load current
Z0 is the exciting impedance
Y0 is the exciting admittance
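The calculations above are easily automated. The following Python sketch is a minimal illustration using the symbols defined above; the instrument readings in the example are invented for demonstration and do not describe any particular transformer.

```python
import math

def open_circuit_parameters(W, V1, I0):
    """Compute exciting-branch parameters from open-circuit (no-load) test readings.
    W  : wattmeter reading (iron loss), watts
    V1 : applied rated primary voltage, volts
    I0 : no-load current, amperes
    """
    cos_phi0 = W / (V1 * I0)          # no-load power factor
    phi0 = math.acos(cos_phi0)
    Ic = I0 * cos_phi0                # core-loss component of no-load current
    Im = I0 * math.sin(phi0)          # magnetizing component of no-load current
    Rc = V1 / Ic                      # core-loss resistance
    Xm = V1 / Im                      # magnetizing reactance
    Y0 = I0 / V1                      # exciting admittance
    G0 = W / V1**2                    # conductance
    B0 = math.sqrt(Y0**2 - G0**2)     # susceptance
    return {"cos_phi0": cos_phi0, "Ic": Ic, "Im": Im,
            "Rc": Rc, "Xm": Xm, "Y0": Y0, "G0": G0, "B0": B0}

# Hypothetical 230 V open-circuit test: 70 W iron loss, 0.9 A no-load current
print(open_circuit_parameters(W=70.0, V1=230.0, I0=0.9))
```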
See also
Short-circuit test
Thévenin's theorem
Blocked rotor test
Circle diagram
References
Electrical tests
Electric transformers | Open-circuit test | [
"Engineering"
] | 778 | [
"Electrical engineering",
"Electrical tests"
] |
37,478,772 | https://en.wikipedia.org/wiki/Prelude%20FLNG | Prelude FLNG is a floating liquefied natural gas (FLNG) platform owned by Shell plc and built by the Technip–Samsung Consortium (TSC) in South Korea for a joint venture between Royal Dutch Shell, KOGAS, and Inpex. The hull was launched in December 2013.
It is long, wide, tall, and made with more than 260,000 tonnes of steel, beating Seawise Giant (the previous record holder) as the world's longest vessel. The vessel displaces around 600,000 tonnes when fully loaded, more than five times the displacement of a . It is the world's largest FLNG platform, as well as the largest FLNG facility constructed to date.
Construction
The main double-hulled structure was built by the Technip Samsung Consortium in the Samsung Heavy Industries Geoje shipyard in South Korea. Construction was officially started when the first metal was cut for the substructure in October 2012. The Turret Mooring System was subcontracted to SBM and built in Drydocks World Dubai, United Arab Emirates. The MEG (monoethylene glycol) reclamation unit by Fjords Processing Norway and built in South Korea is the only topside module subcontracted. Other equipment such as subsea wellheads were constructed at other locations around the world. It was launched on 30 November 2013 with no superstructure (accommodation and process plant).
The vessel is moored by its turret to 16 seabed driven steel piles, each long and in diameter.
Subsea equipment was built by FMC Technologies, and Emerson is the main supplier of automation systems and uninterruptible power supply systems. By July 2015, all 14 gas plant modules were installed.
Cost and funding
Prelude FLNG was approved for funding by Shell in 2011.
Analyst estimates in 2013 for the cost of the vessel were between to $12.6 billion. Shell estimated in 2014 that the project would cost up to per million tons of production capacity. Competitive pressures from an increase in the long-term production capabilities of North American gas fields due to hydraulic fracturing technologies and increasing Russian export capabilities may reduce the actual profitability of the venture from what was anticipated in 2011. In 2021, the WAToday news website reported that it was believed that the ship had cost at least , though Shell has never confirmed the actual cost.
Operations
The Prelude FLNG system was built for use in the Prelude and Concerto gas fields in the Browse LNG Basin, off the coast of Australia; drilling and gas production were planned to begin in 2016. The system has a planned life expectancy of 25 years. The Prelude and Concerto fields are expected to produce 5.3 million tonnes of liquid and condensate per year; this includes 3.6 million tonnes of liquefied natural gas, 1.3 million tonnes of condensate, and 400,000 tonnes of liquefied petroleum gas.
Natural gas will be extracted from wells and liquefied by chilling it to . The ability to produce and offload LNG to large LNG carriers is an important innovation, which reduces costs and removes the need for long pipelines to land-based LNG processing plants. However, fitting all the equipment onto a single floating facility was a significant challenge.
The system is designed to withstand Category 5 cyclones, although workers may be evacuated before that on an EC225 rescue helicopter. According to plans, it will produce 110,000 BOE per day.
On 25 July 2017, after a journey of from its construction site in South Korea, Prelude arrived on site in Western Australian waters. It was expected to become operational in 2018. On 26 December 2018, Royal Dutch Shell announced that initial production had begun at Prelude. Shell said that wells had been opened and that the start-up and ramp-up phases were underway.
Prelude was shut down in February 2020 after a reported electrical problem. The platform had previously suffered two incidents that saw the unintended release of gas, which NOPSEMA described as "dangerous". It restarted production in January 2021.
The ship's electrical supply was disrupted by a small fire on 2 December 2021. This led to the cessation of production and the evacuation of most of the crew.
As a result of repeated environmental and safety mishaps, NOPSEMA ordered the supermajor to not resume production for an indefinite period of time, pending Shell's ability to prove updated practices. According to NOPSEMA, Shell "did not have a sufficient understanding of the risks of the power system on the facility, including failure mechanisms, interdependencies, and recovery", adding that "power loss directly impacted critical safety systems along with the ability to safely evacuate crew by boat or helicopter."
In April 2022, the vessel resumed operations. Operations were again partially stopped and then fully stopped during a strike which lasted 11 weeks until 25 August 2022.
A Lego model of Prelude was built for a Shell trade show in 2014. At nearly long, the model currently resides in the foyer of Shell's head office in Perth, Australia.
References
External links
Shell plc
Liquefied natural gas
Natural gas platforms
Floating production storage and offloading vessels
Ships built by Samsung Heavy Industries | Prelude FLNG | [
"Chemistry",
"Engineering"
] | 1,060 | [
"Structural engineering",
"Floating production storage and offloading vessels",
"Petroleum technology",
"Natural gas platforms"
] |
26,190,852 | https://en.wikipedia.org/wiki/Margules%20activity%20model | The Margules activity model is a simple thermodynamic model for the excess Gibbs free energy of a liquid mixture introduced in 1895 by Max Margules. After Lewis had introduced the concept of the activity coefficient, the model could be used to derive an expression for the activity coefficients of a compound i in a liquid, a measure for the deviation from ideal solubility, also known as Raoult's law.
In 1900, Jan Zawidzki validated the model by determining, from their refractive indices, the compositions of binary mixtures condensed at different temperatures.
In chemical engineering the Margules Gibbs free energy model for liquid mixtures is better known as the Margules activity or activity coefficient model. Although the model is old, it has the characteristic feature of being able to describe extrema in the activity coefficient, which modern models like NRTL and Wilson cannot.
Equations
Excess Gibbs free energy
Margules expressed the intensive excess Gibbs free energy of a binary liquid mixture as a power series of the mole fractions xi: Gex/(RT) = x1 x2 (A21 x1 + A12 x2 + B21 x1² + B12 x2² + ⋯)
Here the A and B parameters are constants, which are derived by regressing experimental phase equilibrium data.
Frequently the B and higher order parameters are set to zero. The leading term assures that the excess Gibbs energy becomes zero at x1=0 and x1=1.
Activity coefficient
The activity coefficient of component i is found by differentiation of the excess Gibbs energy with respect to xi.
This yields, when applied only to the first term and using the Gibbs–Duhem equation: ln γ1 = [A12 + 2 (A21 − A12) x1] x2² and ln γ2 = [A21 + 2 (A12 − A21) x2] x1².
Here A12 and A21 are constants which are equal to the logarithms of the limiting activity coefficients: ln γ1∞ = A12 and ln γ2∞ = A21, respectively.
When A12 = A21 = A, which implies molecules of the same molecular size but different polarity, the equations reduce to the one-parameter Margules activity model: ln γ1 = A x2² and ln γ2 = A x1².
In that case the activity coefficients cross at x1=0.5 and the limiting activity coefficients are equal. When A=0 the model reduces to the ideal solution, i.e. the activity of a compound is equal to its concentration (mole fraction).
Extrema
Using simple algebraic manipulation, it can be shown that, for positive parameters, ln γ1 varies monotonically over the whole composition range whenever A21 ≤ 2A12 and A12 ≤ 2A21; an interior extremum appears only when one parameter exceeds twice the other.
When A21 > 2A12 > 0, the activity coefficient curve of component 1 shows a maximum and that of component 2 a minimum at: x1 = (A21 − 2A12) / (3 (A21 − A12))
The same expression can be used when A12 > 2A21 > 0, but in this situation the activity coefficient curve of component 1 shows a minimum and that of component 2 a maximum.
It is easily seen that when A12=0 and A21>0 that a maximum in the activity coefficient of compound 1 exists at x1=1/3. Obviously, the activity coefficient of compound 2 goes at this concentration through a minimum as a result of the Gibbs-Duhem rule.
The binary system chloroform(1)–methanol(2) is an example of a system that shows a maximum in the activity coefficient of chloroform. The parameters for a description at 20 °C are A12=0.6298 and A21=1.9522. This gives a maximum in the activity coefficient of chloroform at x1=0.17.
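As a numerical sanity check on the equations above, the following Python sketch evaluates the two-parameter Margules activity coefficients for the chloroform(1)–methanol(2) parameters quoted in the text and locates the extremum by a simple grid search; the grid search itself is only illustrative.

```python
import math

def ln_gamma(x1, A12, A21):
    """Two-parameter Margules activity coefficients for a binary mixture."""
    x2 = 1.0 - x1
    ln_g1 = (A12 + 2.0 * (A21 - A12) * x1) * x2**2
    ln_g2 = (A21 + 2.0 * (A12 - A21) * x2) * x1**2
    return ln_g1, ln_g2

# Chloroform(1)-methanol(2) at 20 C (parameters from the text)
A12, A21 = 0.6298, 1.9522
xs = [i / 1000 for i in range(1, 1000)]
g1 = [ln_gamma(x, A12, A21)[0] for x in xs]
x_extremum = xs[g1.index(max(g1))]
print(f"gamma1 is largest near x1 = {x_extremum:.2f}")                 # ~0.17
print(f"limiting activity coefficients: {math.exp(A12):.2f}, {math.exp(A21):.2f}")
```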
In general, for the case A = A12 = A21, the larger the parameter A, the more the binary system deviates from Raoult's law, i.e. from ideal solubility. When A > 2 the system starts to demix into two liquids at 50/50 composition, i.e. the plait point is at 50 mol%, since d²(Gmix/RT)/dx1² = 1/x1 + 1/x2 − 2A first reaches zero at x1 = x2 = 0.5 when A = 2.
For asymmetric binary systems, A12≠A21, the liquid-liquid separation always occurs for
Or equivalently:
The plait point is not located at 50 mol%. It depends on the ratio of the limiting activity coefficients.
Recommended values
An extensive range of recommended values for the Margules parameters, tabulated for common binary systems, can be found in the literature.
See also
Van Laar equation
Literature
External links
Ternary systems Margules
Physical chemistry
Thermodynamic models | Margules activity model | [
"Physics",
"Chemistry"
] | 820 | [
"Applied and interdisciplinary physics",
"Thermodynamic models",
"Thermodynamics",
"nan",
"Physical chemistry"
] |
26,191,241 | https://en.wikipedia.org/wiki/Ernst%20angle | In nuclear magnetic resonance spectroscopy and magnetic resonance imaging, the Ernst angle is the flip angle (a.k.a. "tip" or "nutation" angle) for excitation of a particular spin that gives the maximal signal intensity in the least amount of time when signal averaging over many transients. In other words, the highest signal-to-noise ratio can be achieved in a given amount of time. This relationship was described by Richard R. Ernst, winner of the 1991 Nobel Prize in Chemistry.
Consider a single pulse sequence consisting of (1) an excitation pulse with flip angle α, (2) the recording of the time domain signal (free induction decay, FID) for a duration known as the acquisition time taq, and (3) a delay until the next excitation pulse (here called the interpulse delay td). This sequence is repeated back-to-back many times and the sum or the average of all recorded FIDs ("transients") is calculated. If the longitudinal relaxation time T1 of the specific spin in question is short compared to the sum of taq and td, the spins (or the spin ensembles) are fully or close to fully relaxed. Then a 90° flip angle will yield the maximum signal intensity (or signal-to-noise ratio) per number of averaged FIDs. For shorter intervals between excitation pulses compared to the longitudinal relaxation, partial longitudinal relaxation until the next excitation pulse leads to signal loss in the subsequent FID. This signal loss can be minimized by reducing the flip angle. The optimal signal-to-noise ratio for a given combination of longitudinal relaxation time and delay between excitation pulses is obtained at the Ernst angle:
cos αE = exp(−(taq + td) / T1).
For example, to obtain the highest signal-to-noise ratio for a signal with taq + td set to match the signal's T1, the optimal flip angle is 68°.
An NMR spectrum or an in vivo MR spectrum most of the time consists of signals of more than one spin species which can exhibit different longitudinal relaxation times. Therefore, the calculated Ernst angle may apply only to the selected one of the many signals in the spectrum and other signals may be less intense than at their own Ernst angle. In contrast in standard MRI, the detected signal of interest is predominantly that of a single spin species, the water 1H spins.
This relationship is especially important in magnetic resonance imaging where the sum of interscan delay and acquisition time is often short relative to the signal's T1 value. In the MRI community, this sum is often known as the repetition time TR, thus
TR = taq + td,
and, consequently, cos αE = exp(−TR / T1), i.e. αE = arccos[exp(−TR / T1)].
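A minimal numerical illustration of this relation (assuming only the cos αE = exp(−TR/T1) formula discussed above) computes the Ernst angle for a few TR/T1 ratios; for TR = T1 it reproduces the 68° value mentioned earlier.

```python
import math

def ernst_angle_deg(TR, T1):
    """Flip angle (degrees) maximizing signal per unit time for repetition time TR and relaxation time T1."""
    return math.degrees(math.acos(math.exp(-TR / T1)))

for ratio in (0.2, 0.5, 1.0, 2.0):
    print(f"TR/T1 = {ratio:.1f}  ->  Ernst angle = {ernst_angle_deg(ratio, 1.0):.1f} deg")
# TR/T1 = 1.0 gives ~68.4 deg, matching the example in the text.
```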
References
Nuclear magnetic resonance spectroscopy
Magnetic resonance imaging | Ernst angle | [
"Physics",
"Chemistry"
] | 528 | [
"Nuclear magnetic resonance",
"Spectrum (physical sciences)",
"Magnetic resonance imaging",
"Nuclear magnetic resonance spectroscopy",
"Spectroscopy"
] |
26,194,315 | https://en.wikipedia.org/wiki/Harmonious%20set | In mathematics, a harmonious set is a subset of a locally compact abelian group on which every weak character may be uniformly approximated by strong characters. Equivalently, a suitably defined dual set is relatively dense in the Pontryagin dual of the group. This notion was introduced by Yves Meyer in 1970 and later turned out to play an important role in the mathematical theory of quasicrystals. Some related concepts are model sets, Meyer sets, and cut-and-project sets.
Definition
Let Λ be a subset of a locally compact abelian group G and Λd be the subgroup of G generated by Λ, with discrete topology. A weak character is a restriction to Λ of an algebraic homomorphism χ : Λd → T from Λd into the circle group T.
A strong character is a restriction to Λ of a continuous homomorphism from G to T, that is an element of the Pontryagin dual of G.
A set Λ is harmonious if every weak character may be approximated by
strong characters uniformly on Λ. Thus for any ε > 0 and any weak character χ, there exists a strong character ξ such that |χ(λ) − ξ(λ)| ≤ ε for all λ in Λ.
If the locally compact abelian group G is separable and metrizable (its topology may be defined by a translation-invariant metric) then harmonious sets admit another, related, description. Given a subset Λ of G and a positive ε, let Mε be the subset of the Pontryagin dual of G consisting of all characters that are almost trivial on Λ: Mε = {ξ in the Pontryagin dual of G : |ξ(λ) − 1| ≤ ε for all λ in Λ}.
Then Λ is harmonious if the sets Mε are relatively dense in the sense of Besicovitch: for every ε > 0 there exists a compact subset Kε of the Pontryagin dual such that Kε + Mε is the whole Pontryagin dual.
Properties
A subset of a harmonious set is harmonious.
If Λ is a harmonious set and F is a finite set then the set Λ + F is also harmonious.
The next two properties show that the notion of a harmonious set is nontrivial only when the ambient group is neither compact nor discrete.
A finite set Λ is always harmonious. If the group G is compact then, conversely, every harmonious set is finite.
If G is a discrete group then every set is harmonious.
Examples
Interesting examples of multiplicatively closed harmonious sets of real numbers arise in the theory of diophantine approximation.
Let G be the additive group of real numbers, θ >1, and the set Λ consist of all finite sums of different powers of θ. Then Λ is harmonious if and only if θ is a Pisot number. In particular, the sequence of powers of a Pisot number is harmonious.
Let K be a real algebraic number field of degree n over Q and the set Λ consist of all Pisot or Salem numbers of degree n in K. Then Λ is contained in the open interval (1,∞), closed under multiplication, and harmonious. Conversely, any set of real numbers with these 3 properties consists of all Pisot or Salem numbers of degree n in some real algebraic number field K of degree n.
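A simple numerical illustration of the Pisot case, using the golden ratio purely as an example of a Pisot number, shows that the distance from θ^n to the nearest integer shrinks geometrically; this near-integer behaviour of the powers is the arithmetic mechanism behind the harmonious (almost-lattice) character of the set of their finite sums.

```python
# Distance of powers of a Pisot number (the golden ratio) to the nearest integer.
theta = (1 + 5 ** 0.5) / 2  # golden ratio, a Pisot number

for n in range(1, 13):
    x = theta ** n
    dist = abs(x - round(x))
    print(f"n={n:2d}  theta^n={x:12.4f}  distance to nearest integer = {dist:.6f}")
# The distances decay roughly like (1/theta)**n, so suitably chosen continuous
# characters are nearly trivial on all finite sums of such powers.
```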
See also
Almost periodic function
References
Yves Meyer, Algebraic numbers and harmonic analysis, North-Holland Mathematical Library, vol.2, North-Holland, 1972
Harmonic analysis
Diophantine approximation
Tessellation | Harmonious set | [
"Physics",
"Mathematics"
] | 661 | [
"Tessellation",
"Euclidean plane geometry",
"Number theory",
"Mathematical relations",
"Planes (geometry)",
"Diophantine approximation",
"Approximations",
"Symmetry"
] |
26,194,441 | https://en.wikipedia.org/wiki/Meyer%20set | In mathematics, a Meyer set or almost lattice is a relatively dense set X of points in the Euclidean plane or a higher-dimensional Euclidean space such that its Minkowski difference with itself is uniformly discrete. Meyer sets have several equivalent characterizations; they are named after Yves Meyer, who introduced and studied them in the context of diophantine approximation. Nowadays Meyer sets are best known as mathematical model for quasicrystals. However, Meyer's work precedes the discovery of quasicrystals by more than a decade and was entirely motivated by number theoretic questions.
Definition and characterizations
A subset X of a metric space is relatively dense if there exists a number r such that every point of the space is within distance r of X, and it is uniformly discrete if there exists a number ε such that no two points of X are within distance ε of each other. A set that is both relatively dense and uniformly discrete is called a Delone set. When X is a subset of a vector space, its Minkowski difference X − X is the set {x − y | x, y in X} of differences of pairs of elements of X.
With these definitions, a Meyer set may be defined as a relatively dense set X for which X − X is uniformly discrete. Equivalently, it is a Delone set for which X − X is Delone, or a Delone set X for which there exists a finite set F with X − X ⊂ X + F
Some additional equivalent characterizations involve the set Xε = {y : |exp(2πi x·y) − 1| ≤ ε for all x in X},
defined for a given X and ε, and approximating (as ε approaches zero) the definition of the reciprocal lattice of a lattice. A relatively dense set X is a Meyer set if and only if
For all ε > 0, Xε is relatively dense, or equivalently
There exists an ε with 0 < ε < 1/2 for which Xε is relatively dense.
A character of an additively closed subset of a vector space is a function that maps the set to the unit circle in the plane of complex numbers, such that the sum of any two elements is mapped to the product of their images. A set X is a harmonious set if, for every character χ on the additive closure of X and every ε > 0, there exists a continuous character on the whole space that ε-approximates χ. Then a relatively dense set X is a Meyer set if and only if it is harmonious.
Examples
Meyer sets include
The points of any lattice
The vertices of any rhombic Penrose tiling
The Minkowski sum of another Meyer set with any nonempty finite set
Any relatively dense subset of another Meyer set
References
Metric geometry
Crystallography
Lattice points | Meyer set | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 542 | [
"Lattice points",
"Materials science",
"Crystallography",
"Condensed matter physics",
"Number theory"
] |
26,196,392 | https://en.wikipedia.org/wiki/Ovarian%20tissue%20cryopreservation | Ovarian tissue cryopreservation is cryopreservation of tissue of the ovary of a female.
Indications
Cryopreservation of ovarian tissue is of interest to women who want fertility preservation beyond the natural limit, or whose reproductive potential is threatened by cancer therapy, for example in hematologic malignancies or breast cancer. It can be performed on prepubertal girls at risk for premature ovarian failure, and this procedure is as feasible and safe as comparable operative procedures in children.
Procedure
The procedure is to take a part of the ovary and carry out slow freezing before storing it in liquid nitrogen whilst therapy is undertaken. Tissue can then be thawed and implanted near the fallopian tubes, either orthotopically (at the natural location) or heterotopically (on the abdominal wall), where it starts to produce new eggs, allowing normal conception to take place. A study of 60 procedures concluded that ovarian tissue harvesting appears to be safe. A study has also concluded that culturing thawed fetal ovarian tissue for a few days before transplanting can be beneficial to the development of follicles.
Strips of cortical ovarian tissue can also be cryopreserved, but it must be re-implanted into the body to allow the encapsulated immature follicles to complete their maturation. In vitro maturation has been performed experimentally, but the technique is not yet clinically available. With this technique, cryopreserved ovarian tissue could possibly be used to make oocytes that can directly undergo in vitro fertilization.
Potential for Pregnancy
Women with malignant diseases who undergo treatment with irradiation or gonadotoxic drugs have an increased probability of losing ovarian function, resulting in infertility. The ovarian cortical tissue harbors the majority of the ovarian pool of follicles. Once a patient is cured of the malignant disease, the tissue can be thawed and then transplanted with the aim of restoring ovarian function. Following auto-transplantation, patients showed resumption of ovarian activity, including first menstruation at 14 to 25 weeks and follicular development at 8 to 21 weeks.
Risk of cancer recurrence
For autotransplantation of cryopreserved ovarian tissue in cancer survivors, metastases have been repeatedly detected in ovarian tissue obtained from patients with leukemia, as well as in one patient with Ewing sarcoma. Ovarian tissue autotransplantation may pose a risk of cancer recurrence in patients with colorectal, gastric and endometrial cancer. However, no metastases have been detected in ovarian tissue from lymphoma and breast cancer patients who have undergone ovarian tissue cryopreservation.
History
The first transplant of cryopreserved ovarian tissue was performed in New York by Kutluk Oktay in 1999, but it did not restore menstrual cycles to the patient. In 2004 Jacques Donnez in Belgium reported the first successful birth from frozen tissue using a protocol developed in Roger Gosden’s laboratory, where Oktay had studied.
In 1997 samples of ovarian cortex were taken from a woman with Hodgkin's lymphoma and cryopreserved by slow freezing (Planer, UK) for banking in liquid nitrogen. The patient had premature ovarian failure after chemotherapy. In 2003, after freeze-thawing, orthotopic autotransplantation of ovarian cortical tissue was done by laparoscopy, and five months after reimplantation regular ovulatory cycles were reinitiated. Eleven months after re-implantation a viable intrauterine pregnancy was confirmed, which resulted in the delivery of a healthy baby. Donnez’s claims have been challenged because there was no absolute proof that the mother was infertile before treatment. However, Sherman Silber in St. Louis, Missouri, and another of Gosden’s collaborators, Dror Meirow at the Sheba Medical Center in Israel, and subsequently others have proven beyond doubt that the technique is effective. Healthy babies of both sexes have been born.
The first birth following transplantation of ovarian tissue stored at a central cryo bank and transported overnight has been achieved by centers of the Fertiprotekt network in Germany 2011. This demonstrated that ovarian tissue can be stored centrally in specialized centers.
References
Cryogenics
Obstetrical procedures
Cryopreservation | Ovarian tissue cryopreservation | [
"Physics",
"Chemistry"
] | 954 | [
"Cryopreservation",
"Applied and interdisciplinary physics",
"Cryogenics",
"Cryobiology"
] |
26,197,096 | https://en.wikipedia.org/wiki/C20H18O6 | The molecular formula C20H18O6 (molar mass: 354.35 g/mol, exact mass: 354.1103 u) may refer to:
Carpanone, a lignan
Luteone (isoflavone)
Sesamin, a lignan
Molecular formulas | C20H18O6 | [
"Physics",
"Chemistry"
] | 76 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
34,961,349 | https://en.wikipedia.org/wiki/Buchsbaum%20ring | In mathematics, Buchsbaum rings are Noetherian local rings such that every system of parameters is a weak sequence.
A sequence x1, …, xr of elements of the maximal ideal m is called a weak sequence if m · ((x1, …, xi−1) : xi) ⊆ (x1, …, xi−1) for all i = 1, …, r.
They were introduced by and are named after David Buchsbaum.
Every Cohen–Macaulay local ring is a Buchsbaum ring. Every Buchsbaum ring is a generalized Cohen–Macaulay ring.
References
Commutative algebra
Ring theory | Buchsbaum ring | [
"Mathematics"
] | 89 | [
"Fields of abstract algebra",
"Commutative algebra",
"Ring theory"
] |
34,964,561 | https://en.wikipedia.org/wiki/In%20silico%20PCR | In silico PCR refers to computational tools used to calculate theoretical polymerase chain reaction (PCR) results using a given set of primers (probes) to amplify DNA sequences from a sequenced genome or transcriptome.
These tools are used to optimize the design of primers for target DNA or cDNA sequences. Primer optimization has two goals: efficiency and selectivity. Efficiency involves taking into account such factors as GC-content, efficiency of binding, complementarity, secondary structure, and annealing and melting point (Tm). Primer selectivity requires that the primer pairs not fortuitously bind to random sites other than the target of interest, nor should the primer pairs bind to conserved regions of a gene family. If the selectivity is poor, a set of primers will amplify multiple products besides the target of interest.
The design of appropriate short or long primer pairs is only one goal of PCR product prediction. Other information provided by in silico PCR tools may include determining primer location, orientation, length of each amplicon, simulation of electrophoretic mobility, identification of open reading frames, and links to other web resources.
Many software packages are available offering differing balances of feature set, ease of use, efficiency, and cost. Primer-BLAST is widely used, and freely accessible from the National Center for Biotechnology Information (NCBI) website. On the other hand, FastPCR, a commercial application, allows simultaneous testing of a single primer or a set of primers designed for multiplex target sequences. It performs a fast, gapless alignment to test the complementarity of the primers to the target sequences. Probable PCR products can be found for linear and circular templates using standard or inverse PCR as well as for multiplex PCR. Dicey is free software that outputs in-silico PCR products from primer sets provided in a FASTA file. It is fast (through use of a genome's FM-index) and can account for primer melting temperature and tolerated edit distances between primers and hit locations on the genome. VPCR runs a dynamic simulation of multiplex PCR, allowing for an estimate of quantitative competition effects between multiple amplicons in one reaction. The UCSC Genome Browser offers isPCR, which provides graphical as well text-file output to view PCR products on more than 100 sequenced genomes.
A primer may bind to many predicted sequences, but only sites with no or few mismatches (one or two, depending on position and nucleotide) at the 3' end of the primer support polymerase extension. The last 10–12 bases at the 3' end of a primer are critical for initiation of polymerase extension and for overall primer stability at the template binding site. The effect of a single mismatch within these last 10 bases depends on its position and the local structure, and such mismatches reduce primer binding, selectivity, and PCR efficiency.
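The 3'-end rule described above is straightforward to encode. The following Python sketch is a simplified illustration, not the algorithm of any particular tool; the mismatch threshold and the 10-base window are assumptions taken from the discussion above.

```python
def counts_as_amplifiable(primer, site, window=10, max_mismatches=2):
    """Crude 3'-end check: reject binding sites with too many mismatches
    in the last `window` bases of the primer (site given 5'->3', same length)."""
    if len(primer) != len(site):
        raise ValueError("primer and site must be aligned to the same length")
    tail_primer, tail_site = primer[-window:], site[-window:]
    mismatches = sum(1 for a, b in zip(tail_primer, tail_site) if a != b)
    # The terminal 3' base is the most critical for polymerase extension.
    if primer[-1] != site[-1]:
        return False
    return mismatches <= max_mismatches

primer   = "AGGTCACTGAACTTGGACCT"
site_ok  = "AGGTCACTGAACTTGGACCT"   # perfect match
site_bad = "AGGTCACTGAACTAGAAGCT"   # several mismatches near the 3' end
print(counts_as_amplifiable(primer, site_ok), counts_as_amplifiable(primer, site_bad))
```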
References
External links
Webtools for PCR, qPCR, in silico PCR and oligonucleotides
In silico simulation of molecular biology experiments
Nucleic acids
Bioinformatics | In silico PCR | [
"Chemistry",
"Engineering",
"Biology"
] | 678 | [
"Bioinformatics",
"Biomolecules by chemical classification",
"Biological engineering",
"Nucleic acids"
] |
44,893,854 | https://en.wikipedia.org/wiki/Infrared%20Nanospectroscopy%20%28AFM-IR%29 | AFM-IR (atomic force microscope-infrared spectroscopy) or infrared nanospectroscopy is one of a family of techniques that are derived from a combination of two parent instrumental techniques. AFM-IR combines the chemical analysis power of infrared spectroscopy and the high-spatial resolution of scanning probe microscopy (SPM). The term was first used to denote a method that combined a tuneable free electron laser with an atomic force microscope (AFM, a type of SPM) equipped with a sharp probe that measured the local absorption of infrared light by a sample with nanoscale spatial resolution.
Originally the technique required the sample to be deposited on an infrared-transparent prism and to be less than 1 μm thick. This early setup improved the spatial resolution and sensitivity of photothermal AFM-based techniques from microns to circa 100 nm. Subsequently, the use of modern pulsed optical parametric oscillators and quantum cascade lasers, in combination with top illumination, has made it possible to investigate samples on any substrate and with increased sensitivity and spatial resolution. In the most recent advances, AFM-IR has proved capable of acquiring chemical maps and nanoscale-resolved spectra at the single-molecule scale from macromolecular self-assemblies and biomolecules of circa 10 nm diameter, as well as of overcoming limitations of IR spectroscopy by measuring in aqueous liquid environments.
By recording the amount of infrared absorption as a function of wavelength or wavenumber, AFM-IR creates an infrared absorption spectrum that can be used to chemically characterize and even identify unknown samples. Recording the infrared absorption as a function of position can be used to create chemical composition maps that show the spatial distribution of different chemical components. Novel extensions of the original AFM-IR technique and earlier techniques have enabled the development of bench-top devices capable of nanometer spatial resolution that do not require a prism and can work with thicker samples, thereby greatly improving ease of use and expanding the range of samples that can be analysed. AFM-IR has achieved lateral spatial resolutions of ca. 10 nm, with a sensitivity down to the scale of a molecular monolayer and of single protein molecules with molecular weights down to 400–600 kDa.
AFM-IR is related to techniques such as tip-enhanced Raman spectroscopy (TERS), scanning near-field optical microscopy (SNOM), nano-FTIR and other methods of vibrational analysis with scanning probe microscopy.
History
Early history
The earliest measurements combining AFM with infrared spectroscopy were performed in 1999 by Hammiche et al. at the University of Lancaster in the United Kingdom, in an EPSRC-funded project led by M Reading and H M Pollock. Separately, Anderson at the Jet Propulsion Laboratory in the United States made a related measurement in 2000. Both groups used a conventional Fourier transform infrared spectrometer (FTIR) equipped with a broadband thermal source, the radiation was focused near the tip of a probe that was in contact with a sample. The Lancaster group obtained spectra by detecting the absorption of infrared radiation using a temperature sensitive thermal probe. Anderson took the different approach of using a conventional AFM probe to detect the thermal expansion. He reported an interferogram but not a spectrum; the first infrared spectrum obtained in this way was reported by Hammiche et al. in 2004: this represented the first proof that spectral information about a sample could be obtained using this approach.
Both of these early experiments used a broadband source in conjunction with an interferometer; these techniques could, therefore, be referred to as AFM-FTIR, although Hammiche et al. coined the more general term photothermal microspectroscopy or PTMS in their first paper. PTMS has various subgroups, including techniques that measure temperature, measure thermal expansion, use broadband sources, use lasers, excite the sample using evanescent waves, illuminate the sample directly from above, etc., and different combinations of these. Fundamentally, they all exploit the photothermal effect. Different combinations of sources, methods of detection and methods of illumination have benefits for different applications. Care should be taken to ensure that it is clear which form of PTMS is being used in each case. Currently there is no universally accepted nomenclature. The original technique dubbed AFM-IR, which induced resonant motion in the probe using a free electron laser, has developed by exploiting the foregoing permutations and has evolved into various forms.
The pioneering experiments of Hammiche et al and Anderson had limited spatial resolution due to thermal diffusion - the spreading of heat away from the region where the infrared light was absorbed. The thermal diffusion length (the distance the heat spreads) is inversely proportional to the root of the modulation frequency. Consequently, the spatial resolution achieved by the early AFM-IR approaches was around one micron or more, due to the low modulation frequencies of the incident radiation created by the movement of the mirror in the interferometer. Also, the first thermal probes were Wollaston wire devices that were developed originally for Microthermal analysis (in fact PTMS was originally considered to be one of a family of microthermal techniques). The comparatively large size of these probes also limited spatial resolution. Bozec et al. and Reading et al. used thermal probes with nanoscale dimensions and demonstrated higher spatial resolution. Ye et al described a MEM-type thermal probe giving sub-100 nm spatial resolution, which they used for nanothermal analysis. The process of exploring laser sources began in 2001 by Hammiche et al when they acquired the first spectrum using a tuneable laser (see Resolution improvement with pulsed laser source).
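The scaling mentioned above can be made concrete. The sketch below is illustrative only: it assumes the standard thermal-wave expression μ = √(α / (π f)) and a generic polymer thermal diffusivity of about 1×10⁻⁷ m²/s, and shows why kilohertz-range modulation confines heat to microns while much faster modulation reaches the sub-100 nm regime.

```python
import math

ALPHA = 1e-7  # assumed thermal diffusivity of a typical polymer, m^2/s

def thermal_diffusion_length_nm(f_hz, alpha_m2s=ALPHA):
    """Thermal diffusion length mu = sqrt(alpha / (pi * f)), returned in nanometres."""
    return math.sqrt(alpha_m2s / (math.pi * f_hz)) * 1e9

for f in (1e2, 1e4, 1e6, 1e8):
    print(f"f = {f:8.0e} Hz  ->  mu = {thermal_diffusion_length_nm(f):8.0f} nm")
```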
A significant development was the creation by Reading et al. in 2001 of a custom interface that allowed measurements to be made while illuminating the sample from above; this interface focused the infrared beam to a spot of circa 500μm diameter, close to the theoretical maximum. The use of top-down or top-side illumination has the important benefit that samples of arbitrary thickness can be studied on arbitrary substrates. In many cases this can be done without any sample preparation. All subsequent experiments by Hammiche, Pollock, Reading and their co-workers were made using this type of interface including the instrument constructed by Hill et al. for nanoscale imaging using a pulsed laser. The work of the University of Lancaster group in collaboration with workers from the University of East Anglia led to the formation of a company, Anasys Instruments, to exploit this and related technologies (see Commercialization).
Spatial resolution improvement with pulsed laser sources
In the first paper on AFM-based infrared by Hammiche et al., the relevant well-established theoretical considerations were outlined that predict that high spatial resolution can be achieved using rapid modulation frequencies because of the consequent reduction in the thermal diffusion length. They estimated that spatial resolutions in the range of 20 nm-30 nm should be achievable. The most readily available sources that can achieve high modulation frequencies are pulsed lasers: even when the rapidity of the pulses is not high, the square wave form of a pulse contains very high modulation frequencies in Fourier space. In 2001, Hammiche et al. used a type of bench-top tuneable, pulsed infrared laser known as an optical parametric oscillator or OPO and obtained the first probe-based infrared spectrum with a pulsed laser, however, they did not report any images.
Nanoscale spatial resolution AFM-IR imaging using a pulsed laser was first demonstrated by Dazzi et al at the University of Paris-Sud, France. Dazzi and his colleagues used a wavelength-tuneable, free electron laser at the CLIO facility in Orsay, France to provide an infrared source with short pulses. Like earlier workers, they used a conventional AFM probe to measure thermal expansion but introduced a novel optical configuration: the sample was mounted on an IR-transparent prism so that it could be excited by an evanescent wave. Absorption of short infrared laser pulses by the sample caused rapid thermal expansion that created a force impulse at the tip of the AFM cantilever. The thermal expansion pulse induced transient resonant oscillations of the AFM cantilever probe. This has led to the technique being dubbed Photo-Thermal Induced Resonance (PTIR), by some workers in the field. Some prefer the terms PTIR or PTMS to AFM-IR as the technique is not necessarily restricted to infrared wavelengths. The amplitude of the cantilever oscillation is directly related to the amount of infrared radiation absorbed by the sample. By measuring the cantilever oscillation amplitude as a function of wavenumber, Dazzi's group was able to obtain absorption spectra from nanoscale regions of the sample. Compared to earlier work, this approach improved spatial resolution because the use of short laser pulses reduced the duration of the thermal expansion pulse to the point that the thermal diffusion lengths can be on the scale of nanometres rather than microns.
A key advantage of the use of a tuneable laser source, with a narrow wavelength range, is the ability to rapidly map the locations of specific chemical components on the sample surface. To achieve this, Dazzi's group tuned their free electron laser source to a wavelength corresponding to the molecular vibration of the chemical of interest, then mapped the cantilever oscillation amplitude as function of position across the sample. They demonstrated the ability to map chemical composition in E. coli bacteria. They could also visualize polyhydroxybutyrate (PHB) vesicles inside Rhodobacter capsulatus cells and monitor the efficiency of PHB production by the cells.
At the University of East Anglia in the UK, as part of an EPSRC-funded project led by M. Reading and S. Meech, Hill and his co-workers followed the earlier work of Reading et al. and Hammiche et al. and measured thermal expansion using an optical configuration that illuminated the sample from above, in contrast to Dazzi et al., who excited the sample with an evanescent wave from below. Hill also made use of an optical parametric oscillator as the infrared source in the manner of Hammiche et al. This novel combination of topside illumination, OPO source and measuring thermal expansion proved capable of nanoscale spatial resolution for infrared imaging and spectroscopy (the figures show a schematic of the UEA apparatus and results obtained with it). The use by Hill and co-workers of illumination from above allowed a substantially wider range of samples to be studied than was possible using Dazzi's technique. By introducing the use of a bench-top IR source and top-down illumination, the work of Hammiche, Hill and their coworkers made possible the first commercially viable SPM-based infrared instrument (see Commercialization).
Broadband pulsed laser sources
Reading et al. have explored the use of a broadband QCL combined with thermal expansion measurements. Above, the inability of thermal broadband sources to achieve high spatial resolution is discussed (see history). In this case the frequency of modulation is limited by the mirror speed of the interferometer which, in turn, limits the lateral spatial resolution that can be achieved. When using a broadband QCL the resolution is limited not by the mirror speed but by the modulation frequency of the laser pulses (or other waveforms). The benefit of using a broadband source is that an image can be acquired that comprises an entire spectrum or part of a spectrum for each pixel. This is much more powerful than acquiring images based on a single wavelength. The preliminary results of Reading et al. show that directing a broadband QCL through an interferometer can give an easily detectable response from a conventional AFM probe measuring thermal expansion.
Commercialization
The AFM-IR technique based on a pulsed infrared laser source was commercialized by Anasys Instruments, a company founded by Reading, Hammiche and Pollock in the United Kingdom in 2004; a sister, United States corporation was founded a year later. Anasys Instruments developed its product with support from the National Institute of Standards and Technology and the National Science Foundation. Since free electron lasers are rare and available only at select institutions, a key to enabling a commercial AFM-IR was to replace them with a more compact type of infrared source. Following the lead given by Hammiche et al in 2001 and Hill et al in 2008, Anasys Instruments introduced an AFM-IR product in early 2010, using a tabletop laser source based on a nanosecond optical parametric oscillator. The OPO source enabled nanoscale infrared spectroscopy over a tuning range of roughly 1000–4000 cm−1 or 2.5-10 μm.
The initial product required samples to be mounted on infrared-transparent prisms, with the infrared light being directed from below in the manner of Dazzi et al. For best operation, this illumination scheme required thin samples, with optimal thickness of less than 1 μm, prepared on the surface of the prism. In 2013, Anasys released an AFM-IR instrument based on the work of Hill et al. that supported top-side illumination. By eliminating the need to prepare samples on infrared-transparent prisms and relaxing the restriction on sample thickness, the range of samples that could be studied was greatly expanded. The CEO of Anasys Instruments recognised this achievement by calling it "an exciting major advance" in a letter written to the university and included in the final report of EPSRC project EP/C007751/1. The UEA technique went on to become Anasys Instruments' flagship product.
Comparison to related photothermal techniques
It is worth noting that the first infrared spectrum obtained by measuring thermal expansion using an AFM was obtained by Hammiche and co-workers without inducing resonant motions in the probe cantilever. In this early example the modulation frequency was too low to achieve high spatial resolution, but there is nothing, in principle, preventing the measurement of thermal expansion at higher frequencies without analysing or inducing resonant behaviour. Possible options for measuring the displacement of the tip, rather than the subsequent propagation of waves along the cantilever, include: interferometry focused at the end of the cantilever where the tip is located; a torsional motion resulting from an offset probe (which would only be influenced by the motions of the cantilever as a second-order effect); and exploiting the fact that the signal from a heated thermal probe is strongly influenced by the position of the tip relative to the surface, thus providing a measurement of thermal expansion that is not strongly influenced by, or dependent upon, resonance. The advantage of a non-resonant method of detection is that any frequency of light modulation could be used, so depth information could be obtained in a controlled way (see below), whereas methods that rely on resonance are limited to harmonics. The thermal-probe based method of Hammiche et al. has found a significant number of applications.
A unique application made possible by the top-down illumination combined with a thermal probe is localized depth profiling; this is not possible using either the Dazzi et al. configuration of AFM-IR or that of Hill et al., despite the fact that the latter uses top-down illumination. Obtaining linescans and images with thermal probes has been shown to be possible; sub-diffraction-limit spatial resolution can be achieved, and the resolution for delineating boundaries can be enhanced using chemometric techniques.
In all of these examples a spectrum is acquired that spans the entire mid-IR range for each pixel, this is considerably more powerful than measuring the absorption of a single wavelength as is the case for AFM-IR when using either the method of Dazzi et al. or Hill et al. Reading and his group demonstrated how, because thermal probes can be heated, localized thermal analysis can be combined with photothermal infrared spectroscopy using a single probe. In this way local chemical information could be complemented with local physical properties such melting and glass transition temperatures. This in turn led to the concept of thermally assisted nanosampling, where the heated tip performs a local thermal analysis experiment then the probe is retracted taking with it down to femtograms of softened material that adhere to the tip. This material can then be manipulated and/or analysed by photothermal infrared spectroscopy or other techniques. This considerably increases the analytical power of this type of SPM-based infrared instrument beyond anything that can be achieved with conventional AFM probes such as those used in AFM-IR when using either the Dazzi et al. or the Hill et al. version.
Thermal probe techniques have still not achieved the nanoscale spatial resolution that thermal expansion methods have attained though this is theoretically possible. For this, a robust thermal probe and a high intensity source is needed. Recently, the first images using a QCL and a thermal probe have been obtained by Reading et al. A good signal to noise ratio enabled rapid imaging but sub-micron spatial resolution was not clearly demonstrated. Theory predicts improvements in spatial resolution could be achieved by confining data analysis to the early part of the thermal response to a step change increase in the intensity of the incident radiation. In this way pollution of the measurement from adjacent regions would be avoided, i.e. the measurement window could be confined to a suitable fraction of the time of flight of the thermal wave (using a Fourier analysis of the response could provide a similar outcome by using the high frequency components). This could be achieved by tapping the probe in synchrony with the laser. Similarly, lasers that provide very rapid modulations could further reduce thermal diffusion lengths.
Although most effort to date has been focused on thermal expansion measurements, this might change. Truly robust thermal probes have recently become available, as have affordable compact QCL's that are tuneable over a broad frequency range. Consequently, it may soon be the case that thermal probe techniques will become as widely used as those based on thermal expansion. Ultimately, instruments that can easily switch between modes and even combine them using a single probe will certainly become available, for example, a single probe will eventually be able to measure both temperature and thermal expansion.
Recent improvements and single-molecule sensitivity
The original commercial AFM-IR instruments required most samples to be thicker than 50 nm to achieve sufficient sensitivity. Sensitivity improvements were achieved using specialized cantilever probes with an internal resonator and by wavelet based signal processing techniques. Sensitivity was further improved by Lu et al. by using quantum cascade laser (QCL) sources. The high repetition rate of the QCL allows absorbed infrared light to continuously excite the AFM tip at a "contact resonance" of the AFM cantilever. This resonance-enhanced AFM-IR, in combination with electric field enhancement from metallic tips and substrates led to the demonstration of AFM-IR spectroscopy and compositional imaging of films as thin as single self-assembled monolayers. AFM-IR has also been integrated with other sources including a picosecond OPO offering a tuning range 1.55 μm to 16 μm (from 6450 cm−1 to 625 cm−1).
In its initial development, with samples deposited on transparent prisms and using OPO laser sources, the sensitivity of AFM-IR was limited to a minimum sample thickness of roughly 50–100 nm, as mentioned above. The advent of quantum cascade lasers (QCLs) and the use of the electromagnetic field enhancement between metallic probes and substrates have improved the sensitivity and spatial resolution of AFM-IR down to the measurement of large (>0.3 μm) and flat (~2–10 nm) self-assembled monolayers, in which hundreds of molecules are still present. Ruggeri et al. have recently developed off-resonance, low-power and short-pulse AFM-IR (ORS-nanoIR) to enable the acquisition of infrared absorption spectra and chemical maps at the single-molecule level for macromolecular assemblies and large protein molecules, with a spatial resolution of ca. 10 nm.
Nanoscale chemical imaging and mapping
Nanoscale resolved chemical maps and spectra
AFM-IR enables nanoscale infrared spectroscopy, i.e. the ability to obtain infrared absorption spectra from nanoscale regions of a sample.
Chemical compositional mapping: AFM-IR can also be used to perform chemical imaging or compositional mapping with a spatial resolution down to ~10–20 nm, limited only by the radius of the AFM tip. In this case, the tuneable infrared source emits a single wavelength, corresponding to a specific molecular resonance, i.e. a specific infrared absorption band. By mapping the AFM cantilever oscillation amplitude as a function of position, it is possible to map out the distribution of specific chemical components. Compositional maps can be made at different absorption bands to reveal the distribution of different chemical species.
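A compositional map is, in essence, an image of cantilever oscillation amplitude recorded at one fixed wavenumber; a ratio of two such images taken at different absorption bands highlights where one component dominates. The sketch below illustrates the idea with synthetic arrays standing in for measured amplitude maps; it is not tied to any particular instrument's data format.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (128, 128)

# Stand-ins for two AFM-IR amplitude images, each recorded pixel-by-pixel with
# the laser fixed at a band characteristic of component A or component B.
map_band_A = 1.0 + 0.5 * rng.random(shape)
map_band_B = 1.0 + 0.5 * rng.random(shape)

# Ratiometric composition map: values > 1 mark pixels where absorption at
# band A exceeds that at band B. A small epsilon guards against division by zero.
eps = 1e-9
ratio_map = map_band_A / (map_band_B + eps)
print("composition map", ratio_map.shape, f"range {ratio_map.min():.2f}-{ratio_map.max():.2f}")
```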
Complementary morphological and mechanical mapping
The AFM-IR technique can simultaneously provide complementary measurements of the mechanical stiffness and dissipation of a sample surface. When infrared light is absorbed by the sample the resulting rapid thermal expansion excites a "contact resonance" of the AFM cantilever, i.e. a coupled resonance resulting from the properties of both the cantilever and the stiffness and damping of the sample surface. Specifically, the resonance frequency shifts to higher frequencies for stiffer materials and to lower frequencies for softer materials. Additionally, the resonance becomes broader for materials with larger dissipation. These contact resonances have been studied extensively by the AFM community (see, for example, atomic force acoustic microscopy). Traditional contact resonance AFM requires an external actuator to excite the cantilever contact resonances. In AFM-IR these contact resonances are automatically excited every time an infrared pulse is absorbed by the sample. So the AFM-IR technique can measure both the infrared absorption, via the amplitude of the cantilever oscillation response, and the mechanical properties of the sample, via the contact resonance frequency and quality factor.
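As a minimal sketch of how the contact resonance frequency and quality factor might be extracted in practice, the cantilever frequency response can be fitted with a damped, driven harmonic oscillator model. The example below fits synthetic data; the resonance frequency, quality factor and noise level are assumed values chosen only for illustration, not taken from any instrument.

```python
import numpy as np
from scipy.optimize import curve_fit

def sho_amplitude(f, a0, f0, q):
    """Amplitude response of a damped, driven simple harmonic oscillator."""
    return a0 * f0**2 / np.sqrt((f0**2 - f**2)**2 + (f0 * f / q)**2)

# Synthetic contact-resonance spectrum: resonance near 180 kHz, Q ~ 60 (assumed).
f = np.linspace(150e3, 210e3, 400)
clean = sho_amplitude(f, 1.0, 180e3, 60.0)
noisy = clean + 0.01 * np.random.default_rng(1).normal(size=f.size)

popt, _ = curve_fit(sho_amplitude, f, noisy, p0=(1.0, 175e3, 40.0))
a0_fit, f0_fit, q_fit = popt
print(f"contact resonance ~ {f0_fit / 1e3:.1f} kHz, quality factor ~ {q_fit:.0f}")
```

A stiffer contact would shift the fitted resonance to higher frequency, while stronger dissipation would lower the fitted quality factor, which is how the mechanical information described above is read out alongside the absorption signal.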
Applications
Applications of AFM-IR have included the characterisation of proteins, polymer composites, bacteria, cells, biominerals, pharmaceutical sciences, photonics/nanoantennas, fuel cells, fibers, skin, hair, metal-organic frameworks, microdroplets, self-assembled monolayers, nanocrystals, and semiconductors.
Polymers
Polymer blends, composites, multilayer films and fibers: AFM-IR has been used to identify and map polymer components in blends, characterize interfaces in composites, and even reverse engineer multilayer films. Additionally, AFM-IR has been used to study the chemical composition of poly(3,4-ethylenedioxythiophene) (PEDOT) conducting polymers and vapor infiltration into polyethylene terephthalate (PET) fibers.
Protein science
The chemical and structural properties of proteins determine their interactions, and thus their functions, in a wide variety of biochemical processes. Since the pioneering work of Ruggeri et al. on the aggregation pathways of the Josephin domain of ataxin-3, responsible for type-3 spinocerebellar ataxia, an inheritable protein-misfolding disease, AFM-IR has been used to characterize molecular conformations in a wide spectrum of applications in protein and life sciences. This approach has delivered new mechanistic insights into the behaviour of disease-related proteins and peptides, such as Aβ42, huntingtin and FUS, which are involved in the onset of Alzheimer's disease, Huntington's disease and amyotrophic lateral sclerosis (ALS). Similarly, AFM-IR has been applied to the study of protein-based functional biomaterials.
Life sciences
AFM-IR has been used to characterise chromosomes, bacteria and cells spectroscopically, in detail and with nanoscale resolution. Examples include the infection of bacteria by viruses (bacteriophages), the production of polyhydroxybutyrate (PHB) vesicles inside Rhodobacter capsulatus cells, and triglycerides in Streptomyces bacteria (for biofuel applications). AFM-IR has also been used to evaluate and map mineral content, crystallinity, collagen maturity and acid phosphate content via ratiometric analysis of various absorption bands in bone, and to perform spectroscopy and chemical mapping of structural lipids in human skin, cells and hair.
Fuel cells
AFM-IR has been used to study hydrated Nafion membranes used as separators in fuel cells. The measurements revealed the distribution of free and ionically bound water on the Nafion surface.
Photonic nanoantennas
AFM-IR has been used to study the surface plasmon resonance in heavily silicon-doped indium arsenide microparticles. Gold split-ring resonators have been studied for use with surface-enhanced infrared absorption spectroscopy. In this case AFM-IR was used to measure the local field enhancement of the plasmonic structures (~30×) at 100 nm spatial resolution.
Pharmaceutical sciences
AFM-IR has been used to study miscibility and phase separation in drug–polymer blends, the chemical analysis of nanocrystalline drug particles as small as 90 nm across, the interaction of chromosomes with chemotherapeutic drugs, and the interaction of amyloids with pharmacological agents intended to counteract neurodegeneration.
Notes
References
External links
Infrared Imaging beyond the Diffraction Limit (NIST Andrea Centrone Group)
Sub-wavelength resolution microspectroscopy (University of Texas Mikhail Belkin group)
Nanoscale Microscopy and Spectroscopy Group (Wageningen University, Ruggeri group)
Scanning probe microscopy
Infrared spectroscopy
Infrared imaging
Imaging
Analytical chemistry | Infrared Nanospectroscopy (AFM-IR) | [
"Physics",
"Chemistry",
"Materials_science"
] | 5,285 | [
"Spectrum (physical sciences)",
"Infrared spectroscopy",
"nan",
"Microscopy",
"Scanning probe microscopy",
"Nanotechnology",
"Spectroscopy"
] |
44,894,707 | https://en.wikipedia.org/wiki/OptiRTC | OptiRTC is an American technology company that has developed a software as a service platform for civil infrastructure. The OptiRTC platform is a cloud-native platform that integrates sensors, forecasts, and environmental contexts to actively control stormwater infrastructure. The OptiRTC platform is built on Microsoft Azure and uses internet of things technology to predictively manage distributed water systems.
History
In June 2011, the OptiRTC team partnered with ioBridge to develop smart city tech based on ioBridge's hardware solution and OptiRTC's platform.
In November 2013, the OptiRTC team was assigned a patent for "Combined water storage and detention system and method of precipitation harvesting and management" that was co-invented by Marcus Quigley.
In November 2014, the New York City Economic Development Corporation (NYCEDC) selected Opti technology as the winner of the first Smart City Expo World Congress.
In December 2014, OptiRTC was formally incorporated as an independent company through a spin-out from Geosyntec Consultants.
In January 2015, the Water Environment Research Foundation (WERF) published a research paper on High Performance Green Infrastructure, which focused primarily on distributed real-time control of stormwater infrastructure.
In January 2016, Opti began working with Particle to provide the communications layer for Opti's products.
In October 2018, Opti was commissioned by Albany’s Department of Water and Water Supply to install its underground smart water management system in Washington Park Lake as well as in one of Albany’s constructed wetlands.
References
Environmental technology
Stormwater management
Internet of things
American companies established in 2014 | OptiRTC | [
"Chemistry",
"Environmental_science"
] | 323 | [
"Water treatment",
"Stormwater management",
"Water pollution"
] |
44,904,561 | https://en.wikipedia.org/wiki/Bogdan%20A.%20Dobrescu | Bogdan A. Dobrescu is a Romanian-born theoretical physicist associated with Fermilab, with interests in high-energy physics. Previously he was a postdoctoral researcher at Yale University. He completed his Ph.D. in 1997 at Boston University.
In 2013, Dobrescu was elected a Fellow of the American Physical Society.
Selected works
References
External links
Bogdan A. Dobrescu's papers in the INSPIRE Database
Year of birth missing (living people)
Boston University alumni
Theoretical physicists
Particle physicists
People associated with Fermilab
Living people
Fellows of the American Physical Society | Bogdan A. Dobrescu | [
"Physics"
] | 118 | [
"Theoretical physics",
"Particle physicists",
"Particle physics",
"Theoretical physicists"
] |
46,312,975 | https://en.wikipedia.org/wiki/Omphalos%20of%20Delphi | The Omphalos of Delphi is an ancient marble monument that was found at the archaeological site of Delphi, Greece. According to the Ancient Greek myths regarding the founding of the Delphic Oracle, the god Zeus, in his attempt to locate the center of the Earth, launched two eagles from the two ends of the world; the eagles, starting simultaneously and flying at equal speed, crossed their paths above the area of Delphi, and that was the place where Zeus placed the stone. Since then, Delphi has been considered by Greeks to be the center of the world, the omphalos "navel of the Earth."
The original stone is held in the museum of Delphi; there is a simplified copy at the site where it was found.
Description
The marble-carved stone that constituted the omphalos in the monument with the tripod and the dancers troubled the excavators, because they could not decide if it was the original or a copy from Hellenistic and Roman times. In the 2nd century AD, Pausanias traveled to the area of Delphi and has provided us with rare evidence through his work. The stone of the omphalos seems to have been decorated in high relief and had an oval shape. It is possible that in ancient times it was covered by a mesh of wool cloth, and it was kept in the adyton (inner sanctum), beside the tripod and the daphne (bay leaves) – the other sacred symbols of the god. As described by Pausanias, within the woolen cloth that was wound around the stone, there were precious stones designed in the shape of a mermaid, while two gilded eagles were fixed on top of it.
Recent studies by French archaeologists have demonstrated that the omphalos and the columns are connected and interlocked. In other words, the stone navel was mounted on the bronze tripods supported by the three dancers, at the top of the column. This is the spot where the omphalos is thought to have been placed until today, as a cover of the column, in order to reinforce the meaning and importance of the Athenian votive offering symbolically. The Athenians, wanting to placate and honor the god of light, offered him this copy of the original stone, which combined both Delphic symbols as a gift from the hands of the three priestess figures of Athenian origin.
See also
Omphalos (Omphalos is also the name of the stone given by Zeus's mother Rhea to Cronus to eat in place of her new-born son.)
References
External links
"Omphalos" in the Archaeological Museum of Delphi, Greek Ministry of Culture and Sports
Omphalos
Ancient Greek buildings and structures in Delphi
Ancient Greek art
Geographical centres
Objects in Greek mythology | Omphalos of Delphi | [
"Physics",
"Mathematics"
] | 564 | [
"Point (geometry)",
"Geometric centers",
"Geographical centres",
"Symmetry"
] |
46,313,242 | https://en.wikipedia.org/wiki/Materials%20oscilloscope | A materials oscilloscope is a time-resolved synchrotron high-energy X-ray technique to study rapid phase-composition and microstructure-related changes in a polycrystalline sample. Such a device has been developed for in-situ studies of specimens undergoing physical thermo-mechanical simulation.
Principle
Two-dimensional diffraction images of a fine synchrotron beam interacting with the specimen are recorded in time frames, such that reflections stemming from individual crystallites of the polycrystalline material can be distinguished. The data are treated so that the diffraction rings are straightened and presented line by line, streaked in time. The resulting traces, so-called timelines in azimuthal-angle/time plots, resemble the traces of an oscilloscope and give insight into the processes happening in the material while it undergoes plastic deformation, heating, or both. These timelines make it possible to distinguish grain growth or refinement, subgrain formation, slip deformation systems, crystallographic twinning, dynamic recovery, and dynamic recrystallization, simultaneously in multiple phases.
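A minimal numerical sketch of this data treatment is given below: each detector frame is reduced to an intensity-versus-azimuthal-angle profile over a narrow radial band (one Debye-Scherrer ring), and successive profiles are stacked column by column so that a reflection from a single grain appears as a horizontal trace in the azimuthal-angle/time plot. The array shapes, ring radius and random stand-in frames are assumptions for illustration, not the data format of any particular beamline.

```python
import numpy as np

def azimuthal_profile(image, center, r_min, r_max, n_bins=360):
    """Integrate a 2D diffraction image over a radial band and bin the
    intensity by azimuthal angle, giving one 'line' of the timeline plot."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    dx, dy = x - center[0], y - center[1]
    r = np.hypot(dx, dy)
    phi = np.mod(np.degrees(np.arctan2(dy, dx)), 360.0)
    mask = (r >= r_min) & (r < r_max)
    bins = np.linspace(0.0, 360.0, n_bins + 1)
    idx = np.digitize(phi[mask], bins) - 1
    profile = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    np.add.at(profile, idx, image[mask])
    np.add.at(counts, idx, 1)
    return profile / np.maximum(counts, 1)

# Stack the azimuthal profiles of successive frames into a timeline image:
# rows are azimuthal-angle bins, columns are time frames.
rng = np.random.default_rng(2)
frames = [rng.random((256, 256)) for _ in range(50)]  # stand-in detector frames
timeline = np.column_stack(
    [azimuthal_profile(fr, center=(128, 128), r_min=60, r_max=64) for fr in frames]
)
print("timeline array:", timeline.shape)  # (360 azimuthal bins, 50 time frames)
```

In real use the stand-in frames would be replaced by the recorded diffraction images, and the radial band would be chosen to bracket the reflection of interest.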
History
The development grew out of a project on modern diffraction methods for the investigation of thermo-mechanical processes, and started with cold deformation of a copper specimen at the ESRF in 2007, followed by hot deformation of a zirconium alloy at the APS in 2008. Soon afterwards, a series of other materials was tested and experience with the timeline traces was gained. While the ESRF and APS provided the main experimental facilities, the Japanese high-energy synchrotron SPring-8 followed in 2013 with feasibility studies of this kind. Meanwhile, the new PETRA III synchrotron at DESY built a dedicated beamline for this purpose, opening materials oscilloscope investigations to a wider community. The name materials oscilloscope was introduced in 2013 and has been used since at conferences such as MRS and TMS.
Implementation
Besides setups in multi-purpose facilities, the first dedicated end-station has been built at the PETRA-III storage ring, where this technique is routinely applied.
References
Materials science
Metallurgy
Materials testing
Metal forming
Diffraction
Synchrotron-related techniques
Synchrotron radiation
X-rays
Engineering thermodynamics
Laboratory techniques in condensed matter physics
Phases of matter
Deformation (mechanics) | Materials oscilloscope | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 486 | [
"Metallurgy",
"Engineering thermodynamics",
"Phases of matter",
"Electromagnetic spectrum",
"Laboratory techniques in condensed matter physics",
"Diffraction",
"Thermodynamics",
"Spectroscopy",
"X-rays",
"Materials science",
"Crystallography",
"Spectrum (physical sciences)",
"Deformation (me... |
46,316,026 | https://en.wikipedia.org/wiki/Combining%20rules | In computational chemistry and molecular dynamics, the combination rules or combining rules are equations that provide the interaction energy between two dissimilar non-bonded atoms, usually for the part of the potential representing the van der Waals interaction. In the simulation of mixtures, the choice of combining rules can sometimes affect the outcome of the simulation.
Combining rules for the Lennard-Jones potential
The Lennard-Jones potential is a mathematically simple model for the interaction between a pair of atoms or molecules. One of the most common forms is
$$ V_\mathrm{LJ}(r) = 4\varepsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right], $$
where ε is the depth of the potential well, σ is the finite distance at which the inter-particle potential is zero, and r is the distance between the particles. The potential reaches a minimum, of depth ε, when r = 2^{1/6}σ ≈ 1.122σ.
Lorentz-Berthelot rules
The Lorentz rule was proposed by H. A. Lorentz in 1881:
$$ \sigma_{ij} = \frac{\sigma_{ii} + \sigma_{jj}}{2} $$
The Lorentz rule is only analytically correct for hard-sphere systems. Intuitively, since σ_ii and σ_jj loosely reflect the radii of particles i and j respectively, their average can be taken as the effective separation between the two particles at which repulsive interactions become severe.
The Berthelot rule (Daniel Berthelot, 1898) is given by:
$$ \varepsilon_{ij} = \sqrt{\varepsilon_{ii}\,\varepsilon_{jj}} $$
Physically, this arises from the fact that ε is related to the strength of the induced-dipole (dispersion) interactions between two particles. Given two particles with instantaneous dipoles, their interaction energy scales with the product of those dipoles; an arithmetic average of ε_ii and ε_jj would not reproduce the average of such products, whereas averaging their logarithms (i.e. taking the geometric mean) does.
These rules are the most widely used and are the default in many molecular simulation packages, but are not without failings.
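As an illustration of how the Lorentz and Berthelot rules are applied in practice, the sketch below builds the unlike-pair Lennard-Jones parameters from two sets of like-pair parameters and evaluates the resulting cross interaction. The numerical parameters are only of the right order of magnitude for simple nonpolar species and are not taken from any specific force field.

```python
import numpy as np

def lorentz_berthelot(sigma_i, sigma_j, eps_i, eps_j):
    """Lorentz (arithmetic-mean sigma) and Berthelot (geometric-mean epsilon)
    combining rules for an unlike Lennard-Jones pair."""
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    eps_ij = np.sqrt(eps_i * eps_j)
    return sigma_ij, eps_ij

def lennard_jones(r, sigma, eps):
    """12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

# Illustrative like-pair parameters (assumed, roughly of the order used for
# argon and a united-atom methane site; exact values vary between force fields).
sigma_Ar, eps_Ar = 0.340, 0.996    # nm, kJ/mol
sigma_CH4, eps_CH4 = 0.373, 1.23   # nm, kJ/mol

sigma_ij, eps_ij = lorentz_berthelot(sigma_Ar, sigma_CH4, eps_Ar, eps_CH4)
r = np.linspace(0.9 * sigma_ij, 3.0 * sigma_ij, 200)
u = lennard_jones(r, sigma_ij, eps_ij)
print(f"sigma_ij = {sigma_ij:.3f} nm, eps_ij = {eps_ij:.3f} kJ/mol, "
      f"well depth from curve = {-u.min():.3f} kJ/mol")
```

The minimum of the resulting cross-interaction curve sits at a depth equal to the combined ε and at a separation of about 1.12 times the combined σ, as expected for the 12-6 form.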
Waldman-Hagler rules
The Waldman-Hagler rules are given by:
and
Fender-Halsey
The Fender-Halsey combining rule is given by
Kong rules
The Kong rules for the Lennard-Jones potential are given by:
Others
Many others have been proposed, including those of Tang and Toennies
Pena, Hudson and McCoubrey and Sikora (1970).
Combining rules for other potentials
Good-Hope rule
The Good-Hope rule for Mie–Lennard‐Jones or Buckingham potentials is given by:
Hogervorst rules
The Hogervorst rules for the Exp-6 potential are:
and
Kong-Chakrabarty rules
The Kong-Chakrabarty rules for the Exp-6 potential are:
and
Other rules that have been proposed for the Exp-6 potential are the Mason-Rice rules and the Srivastava and Srivastava rules (1956).
Equations of state
Industrial equations of state have similar mixing and combining rules. These include the van der Waals one-fluid mixing rules
$$ a = \sum_i \sum_j x_i x_j a_{ij}, \qquad b = \sum_i \sum_j x_i x_j b_{ij}, $$
and the van der Waals combining rule, which introduces a binary interaction parameter k_{ij},
$$ a_{ij} = \sqrt{a_{ii}\,a_{jj}}\,(1 - k_{ij}). $$
There is also the Huron-Vidal mixing rule, and the more complex Wong-Sandler mixing rule, which equates excess Helmholtz free energy at infinite pressure between an equation of state and an activity coefficient model (and thus with liquid excess Gibbs free energy).
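A minimal sketch of the van der Waals one-fluid approach described above is given below, assuming the classical quadratic mixing of the a and b parameters with a_ij = √(a_ii a_jj)(1 − k_ij) and b_ij = (b_ii + b_jj)/2. The pure-component parameters and the binary interaction parameter are illustrative placeholders, not fitted values for any real mixture.

```python
import numpy as np

def vdw_one_fluid(x, a, b, k=None):
    """van der Waals one-fluid mixing rules with the classical combining rules
    a_ij = sqrt(a_i*a_j)*(1 - k_ij) and b_ij = (b_i + b_j)/2 (assumed forms)."""
    x = np.asarray(x, dtype=float)
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = x.size
    k = np.zeros((n, n)) if k is None else np.asarray(k, dtype=float)
    a_ij = np.sqrt(np.outer(a, a)) * (1.0 - k)
    b_ij = 0.5 * (b[:, None] + b[None, :])
    a_mix = x @ a_ij @ x   # quadratic mole-fraction average of a_ij
    b_mix = x @ b_ij @ x   # reduces to the linear average of b for this b_ij
    return a_mix, b_mix

# Hypothetical binary mixture, 30/70 mole fraction, with a small binary
# interaction parameter k_12 = 0.05 (illustrative values only).
x = [0.3, 0.7]
a = [0.55, 1.40]        # Pa m^6 mol^-2, assumed
b = [3.0e-5, 9.0e-5]    # m^3 mol^-1, assumed
k = [[0.0, 0.05], [0.05, 0.0]]
a_mix, b_mix = vdw_one_fluid(x, a, b, k)
print(f"a_mix = {a_mix:.3f} Pa m^6 mol^-2, b_mix = {b_mix:.2e} m^3 mol^-1")
```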
References
Intermolecular forces
Computational chemistry
Theoretical chemistry
Potentials
Molecular dynamics
Molecular modelling | Combining rules | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 661 | [
"Molecular physics",
"Materials science",
"Intermolecular forces",
"Computational physics",
"Molecular dynamics",
"Computational chemistry",
"Theoretical chemistry",
"Molecular modelling",
"nan"
] |
36,048,753 | https://en.wikipedia.org/wiki/Strawberry%20Tree%20%28solar%20energy%20device%29 | The Strawberry Tree is the world’s first public solar charger for mobile devices. It was developed by Serbian company Strawberry Energy. It won first place in the European Commission’s "Sustainable energy week 2011" competition in Brussels, in the category Consuming.
Functionalities
Strawberry Tree is a solar and WiFi station which is permanently installed in public places such as streets, parks and squares, providing passersby with the opportunity to charge their mobile devices for free when they are outside. Its main parts are:
Solar panels that transform solar energy to electrical energy
Rechargeable batteries which accumulate energy and make Strawberry Tree function for more than 14 days without sunshine
Sixteen cords for different types of mobile devices such as mobile phones, cameras, mp3 players etc.
Smart electronics which enable a balance between produced and consumed energy
Also, Strawberry Tree provides free wireless internet in the immediate surroundings.
History
The idea of a public solar charger for mobile devices, the Strawberry Tree, was first developed by Miloš Milisavljević, founder of the Strawberry Energy company.
At the time of writing, eleven Strawberry Trees had been installed. The first Strawberry Tree was installed in October 2010 in the main square of Obrenovac municipality, Serbia. During the first 40 days after the presentation of the solar charger, 10,000 chargings were recorded. One year later, in cooperation with the Telekom Serbia Company, a second public solar charger for mobile devices was set up in Zvezdara municipality, Belgrade, Serbia. In the same month, a third Strawberry Tree was set up in Novi Sad, Serbia. By the beginning of 2012, more than 100,000 chargings had been achieved across all three Strawberry Trees.
In cooperation with Telekom Serbia Company, Strawberry energy also installed Strawberry Tree at these locations:
Kikinda, Serbia, in July, 2012.
Vranje, Serbia, in August, 2012.
Bor, Serbia, in October, 2012.
Valjevo, Serbia, in October, 2012.
In cooperation with city of Belgrade and Palilula municipality, Strawberry energy installed Strawberry Tree Black in Belgrade in Tašmajdan Park, in November 2012, with a completely new design by Serbian architect Miloš Milivojević.
In the beginning of 2013, Strawberry energy, in cooperation with the city of Belgrade and Mikser organization, set up Public solar charger Strawberry Tree Flow with the new design by Serbian designers Tamara Švonja and Vojin Stojadinović, in Slavija square, Belgrade, Serbia.
Later in 2013, through the project "Bijeljina and Bogatić – together on the way towards energy sustainability through increasing energy efficiency and promotion of renewable energy sources" within Cross Border Cooperation Programme Serbia – Bosnia and Herzegovina, two solar chargers have been installed in Bijeljina: in front of Cultural center and in the City park.
References
External links
Electric power
Solar energy companies
Serbian inventions
2010 establishments in Serbia
Energy in Serbia
Charging stations
Applications of photovoltaics
Projects established in 2010 | Strawberry Tree (solar energy device) | [
"Physics",
"Engineering"
] | 597 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
36,051,162 | https://en.wikipedia.org/wiki/Macroemulsion | Macroemulsions are dispersed liquid-liquid, thermodynamically unstable systems with particle sizes ranging from 1 to 100 μm (orders of magnitude), which, most often, do not form spontaneously. Macroemulsions scatter light effectively and therefore appear milky, because their droplets are greater than a wavelength of light. They are part of a larger family of emulsions along with miniemulsions (or nanoemulsions). As with all emulsions, one phase serves as the dispersing agent. It is often called the continuous or outer phase. The remaining phase(s) are disperse or inner phase(s), because the liquid droplets are finely distributed amongst the larger continuous phase droplets. This type of emulsion is thermodynamically unstable, but can be stabilized for a period of time with applications of kinetic energy. Surfactants (as the main emulsifiers) are used to reduce the interfacial tension between the two phases, and induce macroemulsion stability for a useful amount of time. Emulsions can be stabilized otherwise with polymers, solid particles (Pickering emulsions) or proteins.
Classification
Macroemulsions can be divided into two main categories based on if they are a single emulsion or a double or multiple emulsion group. Both categories will be described using a typical oil (O) and water (W) immiscible fluid pairing. Single emulsions can be sub divided into two different types. For each single emulsion a single surfactant stabilizing layer exists as a buffer in between the two layers. In (O/W) oil droplets are dispersed in water. On the other hand, (W/O) involves water droplets finely dispersed in oil. Double or multiple emulsion classification is similar to single emulsion classification, except the immiscible phases are separated by at least two surfactant thin films. In a (W/O/W) combination, an immiscible oil phase exists between two separate water phases. In contrast, in an (O/W/O) combination the immiscible water phase separates two different oil phases.
Formation
Macroemulsions are formed in a variety of ways. Since they are not thermodynamically stable, they do not form spontaneously and require energy input, usually in the form of stirring or shaking of some kind to mechanically mix the otherwise immiscible phases. The resulting size of the macroemulsions typically depends on how much energy was used to mix the phases, with higher-energy mixing methods resulting in smaller emulsion particles. The energy required for this can be approximated using the following equation:
where E is the energy input, γ is the interfacial tension between the two phases, V is the total volume of the mixture, and r is the average radius of the newly created emulsion droplets.
This equation gives the energy requirement just to separate the particles. In practice the energy cost is much higher, as most of the mechanical energy is simply converted to heat rather than mixing the phases.
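As a rough worked example of the point above, the minimum thermodynamic work to create the new interface can be estimated as the interfacial tension times the created area, with the area of droplets of radius r containing a total dispersed volume V being approximately 3V/r. The numbers below (volume, droplet size and surfactant-lowered tension) are assumptions chosen only to show the order of magnitude.

```python
def emulsification_energy(volume_m3, radius_m, tension_N_per_m):
    """Minimum work to create the droplet interface: E = gamma * A, with A ~ 3*V/r
    for droplets of radius r containing a total dispersed volume V (an estimate)."""
    area_m2 = 3.0 * volume_m3 / radius_m
    return tension_N_per_m * area_m2

# Assumed example: 100 mL of oil dispersed as 1 um diameter (0.5 um radius)
# droplets in water, with the interfacial tension lowered to 10 mN/m by surfactant.
E = emulsification_energy(volume_m3=1.0e-4, radius_m=0.5e-6, tension_N_per_m=0.010)
print(f"interfacial energy ~ {E:.0f} J")
```

Only a few joules are stored in the new interface in this estimate, which underlines the statement above that most of the mechanical energy supplied by a real mixer is dissipated as heat rather than used to disperse the phases.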
There are other ways to create emulsions between two liquids, such as adding one phase with droplets already being the required size.
An emulsifying agent of some sort is also generally required. This helps form emulsions by reducing the interfacial tension between the two phases, usually by acting as a surfactant and adsorbing to the interface. This works because most emulsifiers have a hydrophilic and hydrophobic side, which means they can bond with both the oil-like phase and the water-like phase, thus reducing the number of water-oil molecular interactions at the surface. Reducing the number of these interactions reduces the interfacial energy, thus causing the emulsions to become more stable. The concentration of surfactant needed is much higher than its critical micelle concentration (CMC). This forms a surfactant monolayer which orients itself to minimize its surface to volume ratio. This ratio yields highly polydisperse spherical droplets in the range of 1 to 100 μm. The probability (P) of finding a certain sized droplet can be estimated for inner layer drops through the following equation:
where R is the radius of the droplet, is the mean radius, and is the standard deviation.
Determining which phase is the continuous phase and which phase is the dispersed phase is done by using the Bancroft Rule when the two phases have similar mole fractions. This rule states that the phase which the emulsifier is the most soluble in will be the continuous phase, even if it has a smaller volume fraction overall. For example, a mixture that is 60% Water and 40% Oil can form an emulsion where the water is the dispersed phase and the oil is the continuous phase if the emulsifier is more soluble in the oil. This is because the continuous phase is the phase that can coalesce the fastest upon mixing, which means it is the phase that can diffuse the emulsifying agent away from its own interfaces and into the bulk the fastest. It seems that this rule is very well followed in the case of surfactant-stabilized emulsions, but not for Pickering emulsions. For mixtures with overwhelmingly large amounts of one phase, the largest phase will often become the continuous phase. However, highly concentrated emulsions (looking like 'liquid-liquid foams') can also be obtained and stabilized.
Stability
Macroemulsions are, by definition, not thermodynamically stable. This means that from the moment they are created, they are always reverting to their original, immiscible and separate state. The reason why Macroemulsions can exist however, is because they are kinetically stable rather than thermodynamically stable. This means that, while they are continuously breaking down, it is done at such a slow pace that it is practically stable from a macroscopic perspective.
The reasons why macroemulsions are stable are similar to the reasons why colloids can be stable. Based on DLVO theory, repulsive forces arising from the charged surfaces of the droplets are strong enough to offset the attractive Hamaker (van der Waals) interactions. This creates a potential energy well at some separation, where the particles sit in a local region of stability without touching directly and therefore without coalescing. However, since this is a local rather than a global minimum of the potential, any pair of particles that happens to acquire enough thermal energy can coalesce into an even more stable state, which is why all macroemulsions gradually coalesce over time.
While it is energetically favorable for individual particles to coalesce due to the subsequent reduction of interfacial area, the adsorbed emulsifier prevents this. This is because it is more favorable for the emulsifying agent to be at an interface so reducing the interfacial area requires expending energy to return the emulsifying agent to the bulk.
Stability of the Macroemulsions are based on numerous environmental factors including temperature, pH, and the ionic strength of the solvent.
Progression of macroemulsions
Flocculation
Flocculation occurs when the dispersed drops group together throughout the continuous phase, but don't lose their individual identities. The driving force for flocculation is a weak van der Waals attraction between drops at large distances, which is known as the secondary energy minimum. An electrostatic repulsion between the surfaces prevents the drops from touching and merging, stabilizing the macroemulsion. The rate of diffusion limited encounters is equal to the upper limit for the decrease in droplet concentration and can be represented by the following equation:
where D can be found using the Stokes-Einstein relation, D = k_BT/(6πηR) (with η the viscosity of the continuous phase), R is the droplet radius, and c is the number of droplets per unit volume. This equation can be reduced to the following:
where k_f is the rate constant of flocculation. If the droplet radii are not all of the same size and aggregation occurs, a modified flocculation rate constant applies.
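A small numerical sketch of the diffusion-limited picture is given below: the droplet diffusion coefficient follows from the Stokes-Einstein relation, and the commonly quoted Smoluchowski encounter rate constant 8k_BT/(3η) then gives a characteristic half-life for the initial droplet concentration. Prefactor conventions differ between texts, and the droplet size and concentration used here are assumptions for illustration only.

```python
import numpy as np

kB = 1.380649e-23  # J/K, Boltzmann constant

def stokes_einstein_D(radius_m, temperature_K=298.15, viscosity_Pa_s=0.89e-3):
    """Diffusion coefficient of a sphere from the Stokes-Einstein relation."""
    return kB * temperature_K / (6.0 * np.pi * viscosity_Pa_s * radius_m)

def encounter_rate_constant(temperature_K=298.15, viscosity_Pa_s=0.89e-3):
    """Commonly quoted diffusion-limited (Smoluchowski) encounter rate constant
    k = 8*kB*T/(3*eta); note that prefactor conventions vary between texts."""
    return 8.0 * kB * temperature_K / (3.0 * viscosity_Pa_s)

R = 1.0e-6                   # 1 um droplet radius (illustrative)
c0 = 1.0e16                  # droplets per m^3 (illustrative)
D = stokes_einstein_D(R)
k = encounter_rate_constant()
half_life = 1.0 / (k * c0)   # second-order decay: c(t) = c0 / (1 + k*c0*t)
print(f"D ~ {D:.2e} m^2/s, k ~ {k:.2e} m^3/s, half-life ~ {half_life:.1f} s")
```

Even at this droplet concentration the predicted half-life is only seconds, which is why a repulsive barrier of the kind discussed under DLVO theory below is needed for useful kinetic stability.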
Creaming
Creaming is the accumulation of drops in the dispersed phase at the top of the container. This occurs as a result of buoyancy forces. The density of the dispersed and continuous phases, as well as the viscosity of the continuous phase, greatly affect the creaming process. If the dispersed phase liquid is less dense than the continuous phase liquid, creaming is more likely to occur. Also, there is a greater chance of creaming at lower viscosities of the continuous phase liquid. Once all of the dispersed drops are in close proximity to each other, it is easier for them to coalesce.
Coalescence
Coalescence is the merging of two dispersed drops into one. The surfaces of two drops must be in contact for coalescence to occur. This surface contact is dependent on both the van der Waals attraction and surface repulsion forces between two drops. Once in contact, the two surface films are able to fuse together, which is more likely to occur in areas where the surface film is weak. The liquid inside each drop is now in direct contact, and the two drops are able to merge into one.
Demulsification
Demulsification is the act of destabilizing an emulsion. Once all of the drops have coalesced, two continuous phases exist instead of one dispersed phase and one continuous phase. This process may be accelerated by adding a cosurfactant or salt or by slowly stirring the liquid solution. Demulsification is beneficial for several macroemulsion applications.
Applications
Macroemulsions have nearly endless uses in scientific, industrial, and household applications. They are widely utilized today in automotive, beauty, cleaning and fabric care products as well as biotechnology and manufacturing techniques.
Macroemulsions are often chosen over microemulsions for automotive and industrial applications because they are less expensive, easier to dispose of, and their tendency to demulsify more quickly is often desirable for lubricants. Soluble oil lubricants, usually containing fatty oil or mineral oil in water, are ideal for high speed and low pressure applications. They are often used for friction reducing needs and metalworking.
Many skin care products, sun screens, and fabric softeners are made from silicone macroemulsions. Silicone is chosen because of its non-irritating and lubricating properties. Different combinations of macroemulsions and surfactants are the subject of a wide range of biological research, especially in the area of cell cultures.
The following table outlines a few examples of macroemulsions and their applications:
References
Liquids | Macroemulsion | [
"Physics",
"Chemistry"
] | 2,170 | [
"Phases of matter",
"Matter",
"Liquids"
] |
28,057,466 | https://en.wikipedia.org/wiki/Mediterranean%20tropical-like%20cyclone | Mediterranean tropical-like cyclones, often referred to as Mediterranean cyclones or Mediterranean hurricanes, and shortened as medicanes, are meteorological phenomena occasionally observed over the Mediterranean Sea. On a few rare occasions, some storms have been observed reaching the strength of a Category 1 hurricane, on the Saffir–Simpson scale, and Medicane Ianos in 2020 was recorded reaching Category 2 intensity. The main societal hazard posed by medicanes is not usually from destructive winds, but through life-threatening torrential rains and flash floods.
The occurrence of medicanes has been described as not particularly rare. Tropical-like systems were first identified in the Mediterranean basin in the 1980s, when widespread satellite coverage showing tropical-looking low pressures which formed a cyclonic eye in the center were identified. Due to the dry nature of the Mediterranean region, the formation of tropical, subtropical cyclones and tropical-like cyclones is infrequent and also hard to detect, in particular with the reanalysis of past data. Depending on the search algorithms used, different long-term surveys of satellite era and pre-satellite era data came up with 67 tropical-like cyclones of tropical storm intensity or higher between 1947 and 2014, and around 100 recorded tropical-like storms between 1947 and 2011. More consensus exists about the long term temporal and spatial distribution of tropical-like cyclones: they form predominantly over the western and central Mediterranean Sea while the area east of Crete is almost devoid of tropical-like cyclones. The development of tropical-like cyclones can occur year-round, with activity historically peaking between the months of September and January, while the counts for the summer months of June and July are the lowest, being within the peak dry season of the Mediterranean with stable air.
Meteorological classification and history
Historically, the term tropical-like cyclone was coined in the 1980s to unofficially distinguish tropical cyclones developing outside the tropics (like in the Mediterranean Basin) from those developing inside the tropics. The term tropical-like was in no way meant to indicate a hybrid cyclone exhibiting characteristics not usually seen in "true" tropical cyclones. In their matured stages, Mediterranean tropical cyclones show no difference from other tropical storms. Mediterranean hurricanes or medicanes are therefore not different from hurricanes elsewhere.
Mediterranean tropical-like cyclones are not considered to be formally classified tropical cyclones and their region of formation is not officially monitored by any agency with meteorological tasks. However, the NOAA subsidiary Satellite Analysis Branch released information related to a medicane in November 2011 while it was active, which they dubbed as "Tropical Storm 01M", though they ceased services in the Mediterranean on 16 December 2011 for undisclosed reasons. However, in 2015, the NOAA resumed services in the Mediterranean region; by 2016, the NOAA was issuing advisories on a new tropical system, Tropical Storm 90M. Since 2005, ESTOFEX has been issuing bulletins that can include tropical-like cyclones, among others. No agency with meteorological tasks, however, is officially responsible for monitoring the formation and development of medicanes, as well as for their naming.
Despite all this, the whole Mediterranean Sea lies within the Greek area of responsibility with the Hellenic National Meteorological Service (HNMS) as the governing agency, while France's Météo-France serves as a "preparation service" for the western part of the Mediterranean as well. As the only official agency covering the whole Mediterranean Sea, HNMS publications are of particular interest for the classification of medicanes. HNMS calls the meteorological phenomenon Mediterranean tropical-like Hurricane in its annual bulletin and – by also using the respective portmanteau word medicane– makes the term medicane quasi-official. In a joint article with the Laboratory of Climatology and Atmospheric Environment of the University of Athens, the Hellenic National Meteorological Service outlines conditions to consider a cyclone over the Mediterranean Sea a Medicane:
In the same article, a survey of 37 medicanes revealed that medicanes could have a well-defined cyclone eye at estimated maximum sustained winds between , with the lower end being exceptionally low for warm core cyclones. Medicanes can indeed develop well-defined eyes at such low maximum sustained winds of around as could be seen for a 22 October 2015 medicane near the Albanian coast. This is much lower than the lower threshold for eye development in tropical systems in the Atlantic Ocean which seems to be close to , well below hurricane-force winds.
Several notable and damaging medicanes are known to have occurred. In September 1969, a North African Mediterranean tropical cyclone produced flooding that killed nearly 600 individuals, left 250,000 homeless, and crippled local economies. A medicane in September 1996 that developed in the Balearic Islands region spawned six tornadoes, and inundated parts of the islands. Several medicanes have also been subject to extensive study, such as those of January 1982, January 1995, September 2006, November 2011, and November 2014. The January 1995 storm is one of the best-studied Mediterranean tropical cyclones, with its close resemblance to tropical cyclones elsewhere and availability of observations. The medicane of September 2006, meanwhile, is well-studied, due to the availability of existing observations and data.
Given the low profile of HNMS in forecasting and classifying tropical-like systems in the Mediterranean, a proper classification system for Mediterranean tropical-like cyclones does not exist. The HNMS criterion of a cyclonic eye for considering a system a medicane is usually valid for a system at peak strength, often only hours before landfall, which is not suitable at least for forecasts and warnings.
Unofficially, Deutscher Wetterdienst (DWD, the German meteorological service) proposed a system to forecast and classify tropical-like cyclones based on the NHC classification for the northern Atlantic Ocean. To account for the broader wind field and the larger radius of maximum winds of tropical-like systems in the Mediterranean (see the section Development and characteristics below), DWD is suggesting a lower threshold of for the use of the term medicane in the Mediterranean instead of as suggested by the Saffir–Simpson scale for Atlantic hurricanes. The DWD proposal and also US-based forecasts (NHC, NOAA, NRL etc.) use one-minute sustained winds while European-based forecasts use ten-minute sustained winds which makes a difference of roughly 14% in measurements. The distinction is also of direct practical use (for example for a comparison of NOAA bulletins with EUMETSAT, ESTOFEX and HNMS bulletins). To account for the difference, the DWD proposal is shown below for both one-minute and deduced ten-minute sustained winds (see tropical cyclone scales for conversions):
Another proposal uses roughly the same scale but suggests to use the term medicane for tropical storm force cyclones and major medicane for hurricane force cyclones. Both proposals would fit the observation, that half of the 37 cyclones surveyed by HNMS with a clearly observable hurricane-like eye, as the major criterion for assigning the medicane status, showed maximum sustained winds between , while another quarter of the medicanes peaked at lower wind speeds.
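For readers comparing bulletins that use different averaging periods, the roughly 14% difference mentioned above corresponds to a conversion factor of about 0.88 between one-minute and ten-minute sustained winds. The snippet below applies that approximate factor; the exact value varies between agencies and studies, so it should be treated as an illustrative convention rather than a fixed rule.

```python
def one_min_to_ten_min(v_1min_kt, factor=0.88):
    """Approximate conversion from 1-minute to 10-minute sustained winds.
    A factor of ~0.88 (a ~14% difference) is often used; conventions differ."""
    return v_1min_kt * factor

def ten_min_to_one_min(v_10min_kt, factor=0.88):
    """Inverse conversion, from 10-minute to 1-minute sustained winds."""
    return v_10min_kt / factor

for v in (35, 50, 64):  # knots; 64 kt is the hurricane-force threshold (1-minute)
    print(f"{v} kt (1-min) ~ {one_min_to_ten_min(v):.0f} kt (10-min)")
```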
Climatology
A majority of Mediterranean tropical cyclones (tropical cyclogenesis) form over two separate regions. The first, more conducive for development than the other, encompasses an area of the western Mediterranean bordered by the Balearic Islands, southern France, and the shorelines of the islands of Corsica and Sardinia. The second identified region of development, in the Ionian Sea between Sicily and Greece and stretching south to Libya, is less favorable for tropical cyclogenesis. An additional two regions, in the Aegean and Adriatic seas, produce fewer medicanes, while activity is minimal in the Levantine region. The geographical distribution of Mediterranean tropical-like cyclones is markedly different from that of other cyclones, with the formation of regular cyclones centering on the Pyrenees and Atlas mountain ranges, the Gulf of Genoa, and in the Ionian Sea. Although meteorological factors are most advantageous in the Adriatic and Aegean seas, the closed nature of the region's geography, bordered by land, allows little time for further evolution.
The geography of mountain ranges bordering the Mediterranean are conducive for severe weather and thunderstorms, with the sloped nature of mountainous regions permitting the development of convective activity. Although the geography of the Mediterranean region, as well as its dry air, typically prevent the formation of tropical cyclones, when certain meteorological circumstances arise, difficulties influenced by the region's geography are overcome. The occurrence of tropical cyclones in the Mediterranean Sea is generally extremely rare, with an average of 1.57 forming annually and merely 99 recorded occurrences of tropical-like storms discovered between 1948 and 2011 in a modern study, with no definitive trend in activity in that period. Few medicanes form during the summer season, though activity typically rises in autumn, peaks in January, and gradually decreases from February to May. In the western Mediterranean region of development, approximately 0.75 such systems form each year, compared with 0.32 in the Ionian Sea region. However, on very rare occasions, similar tropical-like storms may also develop in the Black Sea.
Studies have evaluated that global warming can result in higher observed intensities of tropical cyclones as a result of deviations in the surface energy flux and atmospheric composition, which both heavily influence the development of medicanes as well. In tropical and subtropical areas, sea surface temperatures (SSTs) rose within a 50-year period, and in the North Atlantic and Northwestern Pacific tropical cyclone basins, the potential destructiveness and energy of storms nearly doubled within the same duration, evidencing a clear correlation between global warming and tropical cyclone intensities. Within a similarly recent 20-year period, SSTs in the Mediterranean Sea increased by , though no observable increase in medicane activity has been noted, . In 2006, a computer-driven atmospheric model evaluated the future frequency of Mediterranean cyclones between 2071 and 2100, projecting a decrease in autumn, winter, and spring cyclonic activity coinciding with a dramatic increase in formation near Cyprus, with both scenarios attributed to elevated temperatures as a result of global warming. In another study, researchers found that more tropical-like storms in the Mediterranean could reach Category 1 strength by the end of the 21st century, with most of the stronger storms appearing in the autumn, though the models indicated that some storms could potentially reach Category 2 intensity. Other studies, however, have been inconclusive, forecasting both increases and decreases in duration, number, and intensity. Three independent studies, using different methodologies and data, evaluated that while medicane activity would likely decline with a rate depending on the climate scenario considered, a higher percentage of those that formed would be of greater strength.
Development and characteristics
The development of tropical or subtropical cyclones in the Mediterranean Sea can usually only occur under somewhat unusual circumstances. Low wind shear and atmospheric instability induced by incursions of cold air are often required. A majority of medicanes are also accompanied by upper-level troughs, providing energy required for intensifying atmospheric convection—thunderstorms—and heavy precipitation. The baroclinic properties of the Mediterranean region, with high temperature gradients, also provides necessary instability for the formation of tropical cyclones. Another factor, rising cool air, provides necessary moisture as well. Warm sea surface temperatures (SSTs) are mostly unnecessary, however, as most medicanes' energy are derived from warmer air temperatures. When these favorable circumstances coincide, the genesis of warm-core Mediterranean tropical cyclones, often from within existing cut-off cold-core lows, is possible in a conducive environment for formation.
Factors required for the formation of medicanes are somewhat different from that normally expected of tropical cyclones; known to emerge over regions with sea surface temperatures (SSTs) below , Mediterranean tropical cyclones often require incursions of colder air to induce atmospheric instability. A majority of medicanes develop above regions of the Mediterranean with SSTs of , with the upper bound only found in the southernmost reaches of the sea. Despite the low sea surface temperatures, the instability incited by cold atmospheric air within a baroclinic zone—regions with high differences in temperature and pressure—permits the formation of medicanes, in contrast with tropical areas lacking high baroclinity, where raised SSTs are needed. While significant deviations in air temperature have been noted around the time of Mediterranean tropical cyclones' formation, few anomalies in sea surface temperature coincide with their development, indicating that the formation of medicanes is primarily controlled by higher air temperatures, not by anomalous SSTs. Similar to tropical cyclones, minimal wind shear—difference in wind speed and direction over a region—as well as abundant moisture and vorticity encourages the genesis of tropical cyclone-like systems in the Mediterranean Sea.
Due to the confined character of the Mediterranean and the limited capability of heat fluxes — in the case of medicanes, air-sea heat transfer — tropical cyclones with a diameter larger than cannot exist within the Mediterranean. Despite being a relatively baroclinic area with high temperature gradients, the primary energy source utilized by Mediterranean tropical cyclones is derived from underlying heat sources generated by the presence of convection—thunderstorm activity—in a humid environment, similar to tropical cyclones elsewhere outside the Mediterranean Sea. In comparison with other tropical cyclone basins, the Mediterranean Sea generally presents a difficult environment for development; although the potential energy necessary for development is not abnormally large, its atmosphere is characterized by its lack of moisture, impeding potential formation. The full development of a medicane often necessitates the formation of a large-scale baroclinic disturbance, transitioning late in its life cycle into a tropical cyclone-like system, nearly always under the influence of a deep, cut-off, cold-core low within the middle-to-upper troposphere, frequently resulting from abnormalities in a wide-spreading Rossby wave—massive meanders of upper-atmospheric winds.
The development of medicanes often results from the vertical shift of air in the troposphere as well, resulting in a decrease in its temperature coinciding with an increase in relative humidity, creating an environment more conducive for tropical cyclone formation. This, in turn, leads to in an increase in potential energy, producing heat-induced air-sea instability. Moist air prevents the occurrence of convective downdrafts—the vertically downward movement of air—which often hinder the inception of tropical cyclones, and in such a scenario, wind shear remains minimal; overall, cold-core cut-off lows serve well for the later formation of compact surface flux-influenced warm-core lows such as medicanes. The regular genesis of cold-core upper-level lows and the infrequency of Mediterranean tropical cyclones, however, indicate that additional unusual circumstances are involved the emergence of the latter. Elevated sea surface temperatures, contrasting with cold atmospheric air, encourage atmospheric instability, especially within the troposphere.
In general, most medicanes maintain a radius of , last between 12 hours and 5 days, travel between , develop an eye for less than 72 hours, and feature wind speeds of up to ; in addition, a majority are characterized on satellite imagery as asymmetric systems with a distinct round eye encircled by atmospheric convection. Weak rotation, similar to that in most tropical cyclones, is usually noted in a medicane's early stages, increasing with intensity; medicanes, however, often have less time to intensify, remaining weaker than most North Atlantic hurricanes and only persisting for the duration of a few days. While the entire lifetime of a cyclone may encompass several days, most will only retain tropical characteristics for less than 24 hours. Circumstances sometimes permit the formation of smaller-scale medicanes, although the required conditions differ even from those needed by other medicanes. The development of abnormally small tropical cyclones in the Mediterranean usually requires upper-level atmospheric cyclones inducing cyclogenesis in the lower atmosphere, leading to the formation of warm-core lows, encouraged by favorable moisture, heat, and other environmental circumstances.
Mediterranean cyclones have been compared with polar lows—cyclonic storms which typically develop in the far regions of the Northern and Southern Hemispheres—for their similarly small size and heat-related instability; however, while medicanes nearly always feature warm-core lows, polar lows are primarily cold-core. The prolonged life of medicanes and similarity to polar lows is caused primarily by origins as synoptic-scale surface lows and heat-related instability. Heavy precipitation and convection within a developing Mediterranean tropical cyclone are usually incited by the approach of an upper-level trough—an elongated area of low air pressures—bringing downstream cold air, encircling an existing low-pressure system. After this occurs, however, a considerable reduction in rainfall rates occurs despite further organization, coinciding with a decrease in previously high lightning activity as well. Although troughs will often accompany medicanes along their track, separation eventually occurs, usually in the later part of a Mediterranean tropical cyclone's life cycle. At the same time, moist air, saturated and cooled while rising into the atmosphere, then encounters the medicane, permitting further development and evolution into a tropical cyclone. Many of these characteristics are also evident in polar lows, except for the warm core characteristic.
Notable medicanes and impacts
22–27 Sep 1969
An unusually severe Mediterranean tropical cyclone developed on 23 September 1969 southeast of Malta, producing severe flooding. Steep pressure and temperature gradients above the Atlas mountain range were evident on 19 September, a result of cool sea air attempting to penetrate inland; south of the mountains, a lee depression—a low-pressure area in a mountainous region—developed. Under the influence of mountainous terrain, the low initially meandered northeastward. Following the entry of cool sea air, however, it recurved to the southeast before transitioning into a Saharan depression associated with a distinct cold front by 22 September. Along the front's path, desert air moved northward while cold air drifted in the opposite direction, and in northern Libya, warm arid air clashed with the cooler levant of the Mediterranean. The organization of the disturbance improved slightly further before emerging into the Mediterranean Sea on 23 September, upon which the system experienced immediate cyclogenesis, rapidly intensifying while southeast of Malta as a cold-core cut-off low, and acquiring tropical characteristics. In western Africa, meanwhile, several disturbances converged toward Mauritania and Algeria, while the medicane recurved southwestward back toward the coast, losing its closed circulation and later dissipating.
The cyclone produced severe flooding throughout regions of northern Africa. Malta received upward of of rainfall on 23 September, Sfax measured on 24 September, Tizi Ouzou collected on 25 September, Gafsa received and Constantine measured on 26 September, Cap Bengut collected on 27 September, and Biskra received on 28 September. In Malta, a 20000-ton tanker struck a reef and split in two, while in Gafsa, Tunisia, the cyclone flooded phosphate mines, leaving over 25,000 miners unemployed and costing the government over £2 million per week. Thousands of camels and snakes, drowned by flood waters, were swept out to sea, and massive Roman bridges, which withstood all floods since the fall of the Roman Empire, collapsed. In all, the floods in Tunisia and Algeria killed almost 600 individuals, left 250,000 homeless, and severely damaged regional economies. Due to communication problems, however, flood relief funds and television appeals were not set up until nearly a month later.
Leucosia (24–27 Jan 1982)
The unusual Mediterranean tropical storm of January 1982, dubbed Leucosia, was first detected in waters north of Libya. The storm likely reached the Atlas mountain range as a low-pressure area by 23 January 1982, reinforced by an elongated, slowly-drifting trough above the Iberian Peninsula. Eventually, a closed circulation center developed by 1310 UTC, over parts of the Mediterranean with sea surface temperatures (SSTs) of approximately and air temperature of . A hook-shaped cloud developed within the system shortly thereafter, rotating as it elongated into a -long comma-shaped apparatus. After looping around Sicily, it drifted eastward between the island and Peloponnese, recurving on its track again, exhibiting clearly curved spiral banding before shrinking slightly. The cyclone reached its peak intensity at 1800 UTC on the following day, maintaining an atmospheric pressure of , and was succeeded by a period of gradual weakening, with the system's pressure eventually rising to . The system slightly reintensified, however, for a six-hour period on 26 January. Ship reports indicated winds of were present in the cyclone at the time, tropical storm-force winds on the Saffir–Simpson hurricane wind scale, likely near the eyewall of the cyclone, which features the highest winds in a tropical cyclone.
The Global Weather Center's Cyclone Weather Center of the United States Air Force (USAF) initiated "Mediterranean Cyclone Advisories" on the cyclone at six-hour intervals starting at 1800 UTC on 27 January, until 0600 UTC on the following day. Convection was most intense in the eastern sector of the cyclone as it drifted east-northeastward. On infrared satellite imagery, the eye itself was in diameter, contracting to just one day prior to making landfall. The cyclone passed by Malta, Italy, and Greece before dissipating several days later, in the extreme eastern Mediterranean. Observations related to the cyclone, however, were inadequate, and although the system maintained numerous tropical characteristics, it is possible it was merely a compact but powerful extratropical cyclone exhibiting a clear eye, spiral banding, towering cumulonimbi, and high surface winds along the eyewall.
27 Sep – 2 October 1983
On 27 September 1983, a medicane was observed at sea between Tunisia and Sicily, looping around Sardinia and Corsica, coming ashore twice on the islands, before making landfall at Tunis early on 2 October and dissipating. The development of the system was not encouraged by baroclinic instability; rather, convection was incited by abnormally high sea surface temperatures (SSTs) at the time of its formation. It also featured a definitive eye, tall cumulonimbus clouds, intense sustained winds, and a warm core. For most of its duration, it maintained a diameter of , though it shrank just before landfall on Ajaccio to a diameter of .
Celeno (14–17 Jan 1995)
Among numerous documented medicanes, the cyclone of January 1995, which was dubbed Celeno, is generally considered to be the best-documented instance in the 20th century. The storm emerged from the Libyan coast and moved toward the Ionian shoreline of Greece on 13 January as a compact low-pressure area. The medicane maintained winds reaching up to as it traversed the Ionian Sea, while the German research ship Meteor recorded winds of . Upon the low's approach near Greece, it began to envelop an area of atmospheric convection; meanwhile, in the middle troposphere, a trough extended from Russia to the Mediterranean, bringing with it extremely cold temperatures. Two low-pressure areas were present along the path of the trough, with one situated above Ukraine and the other above the central Mediterranean, likely associated with a low-level cyclone over western Greece. Upon weakening and dissipation on 14 January, a second low, the system which would evolve into the Mediterranean tropical cyclone, developed in its place on 15 January.
At the time of formation, high clouds indicated the presence of intense convection, and the cyclone featured an axisymmetric cloud structure, with a distinct, cloud-free eye and rainbands spiraling around the disturbance as a whole. Soon thereafter, the parent low separated from the medicane entirely and continued eastward, meandering toward the Aegean Sea and Turkey. Initially remaining stationary between Greece and Sicily with a minimum atmospheric pressure of , the newly formed system began to drift southwest-to-south in the following days, influenced by northeasterly flow incited by the initial low, now far to the east, and a high-pressure area above central and eastern Europe. The system's atmospheric pressure increased throughout 15 January because it was embedded within a large-scale environment of generally higher air pressures; the rise was not a sign of weakening.
Initial wind speeds within the young medicane were generally low, with sustained winds of merely , with the highest recorded value associated with the disturbance being at 0000 UTC on 16 January, slightly below the threshold for tropical storm on the Saffir–Simpson hurricane wind scale. Its structure now consisted of a distinct eye encircled by counterclockwise-rotating cumulonimbi with cloud top temperatures colder than , evidencing deep convection and a regular feature observed in most tropical cyclones. At 1200 UTC on 16 January, a ship recorded winds blowing east-southeast of about south-southwest about north-northeast of the cyclone's center. Intense convection continued to follow the entire path of the system as it traversed the Mediterranean, and the cyclone made landfall in northern Libya at approximately 1800 UTC on 17 January, rapidly weakening after coming ashore. As it moved inland, a minimum atmospheric pressure of was recorded, accompanied by wind speeds of as it slowed down after passing through the Gulf of Sidra. Although the system retained its strong convection for several more hours, the cyclone's cloud tops began to warm, evidencing lower clouds, before losing tropical characteristics entirely on 17 January. Offshore ship reports recorded that the medicane produced intense winds, copious rainfall, and abnormally warm temperatures.
11–13 Sep 1996
Three notable medicanes developed in 1996. The first, in mid-September 1996, was a typical Mediterranean tropical cyclone that developed in the Balearic Islands region. At the time of the cyclone's formation, a powerful Atlantic cold front and a warm front associated with a large-scale low, producing northeasterly winds over the Iberian peninsula, extended eastward into the Mediterranean, while abundant moisture gathered in the lower troposphere over the Balearic channel. On the morning of 12 September, a disturbance developed off of Valencia, Spain, dropping heavy rainfall on the coast even without coming ashore. An eye developed shortly thereafter as the system rapidly traversed across Majorca and Sardinia in its eastward trek. It made landfall upon the coast of southern Italy on the evening of 13 September with a minimum atmospheric pressure of , dissipating shortly after coming ashore, with a diameter of about .
At Valencia and other regions of eastern Spain, the storm generated heavy precipitation, while six tornadoes touched down over the Balearic Islands. While approaching the coast of the Balearic Islands, the warm-core low induced a pressure drop of at Palma, Majorca in advance of the tropical cyclone's landfall. Medicanes as small as the one that formed in September 1996 are atypical, and often require circumstances different even from those required for regular Mediterranean tropical cyclone formation. Warm low-level advection (the transfer of heat through air or sea) caused by a large-scale low over the western Mediterranean was a primary factor in the rise of strong convection. The presence of a mid- to upper-level cut-off cold-core low, a method of formation typical of medicanes, was also key to the development of intense thunderstorms within the cyclone. In addition, interaction between a northeastward-drifting trough, the medicane, and the large-scale low also permitted the formation of tornadoes within thunderstorms generated by the cyclone after making landfall.
4–6 Oct 1996
The second of the three recorded Mediterranean tropical cyclones in 1996 formed between Sicily and Tunisia on 4 October, making landfall on both Sicily and southern Italy. The medicane generated major flooding in Sicily. In Calabria, wind gusts of up to were reported in addition to severe inundation.
Cornelia (6–11 Oct 1996)
The third major Mediterranean tropical cyclone of that year formed north of Algeria, and strengthened while sweeping between the Balearic Islands and Sardinia, with an eye-like feature prominent on satellite. The storm was unofficially named Cornelia. The eye of the storm was distorted and disappeared after transiting over southern Sardinia throughout the evening of 8 October, with the system weakening as a whole. On the morning of 9 October, a smaller eye emerged as the system passed over the Tyrrhenian Sea and gradually strengthened, with reports from near the storm's center indicating winds of . Extreme damage was reported in the Aeolian Islands after the tropical cyclone passed north of Sicily, though the system dissipated while turning southward over Calabria. Overall, the lowest estimated atmospheric pressure in the third medicane was . Both October systems featured distinctive spiral bands, intense convection, high sustained winds, and abundant precipitation.
Querida (25–27 Sep 2006)
A short-lived medicane, named Querida by the Free University of Berlin, developed near the end of September 2006, along the coast of Italy. The origins of the medicane can be traced to the alpine Atlas mountain range on the evening of 25 September, likely forming as a normal lee cyclone. At 0600 UTC on 26 September, European Centre for Medium-Range Weather Forecasts (ECMWF) model analyses indicated the existence of two low-pressure areas along the shoreline of Italy, one on the west coast, sweeping eastward across the Tyrrhenian Sea, while the other, slightly more intense, low was located over the Ionian Sea. As the latter low approached the Strait of Sicily, it met an eastward-moving convection-producing cold front, resulting in significant intensification, while the system simultaneously reduced in size. It then achieved a minimum atmospheric pressure of approximately after transiting north-northeastward across the -wide Salentine peninsula in the course of roughly 30 minutes at 0915 UTC the same day.
Wind gusts surpassing were recorded as it passed over Salento due to a steep pressure gradient associated with it, confirmed by regional radar observations denoting the presence of a clear eye. The high winds inflicted moderate damage throughout the peninsula, though specific damage is unknown. Around 1000 UTC, both radar and satellite recorded the system's entry into the Adriatic Sea and its gradual northwestward curve back toward the Italian coast. By 1700 UTC, the cyclone made landfall in northern Apulia while maintaining its intensity, with a minimum atmospheric pressure of . The cyclone weakened while drifting further inland over the Italian mainland, eventually dissipating as it curved west-southwestward. A later study, in 2008, found that the cyclone possessed numerous characteristics seen in tropical cyclones elsewhere, with a spiral appearance, an eye-like feature, rapid atmospheric pressure decreases in advance of landfall, and intense sustained winds concentrated near the storm's eyewall; the apparent eye-like structure in the cyclone, however, was ill-defined. Since then, the medicane has been the subject of significant study as a result of the availability of scientific observations and reports related to the cyclone. In particular, the sensitivity of this cyclone to sea-surface temperatures, initial conditions, the model, and the parameterization schemes used in the simulations was analyzed. The relevance of different instability indices for the diagnosis and prediction of these events was also studied.
Rolf (6–9 Nov 2011)
In November 2011, the first Mediterranean tropical cyclone to be officially designated by the National Oceanic and Atmospheric Administration (NOAA) formed; it was christened Tropical Storm 01M by the Satellite Analysis Branch and given the name Rolf by the Free University of Berlin (FU Berlin), even though no agency is officially responsible for monitoring tropical cyclone activity in the Mediterranean. On 4 November 2011, a frontal system associated with another low-pressure area monitored by FU Berlin, designated Quinn, spawned a second low-pressure system inland near Marseille, which was subsequently named Rolf by the university. An upper-level trough on the European mainland stalled as it approached the Pyrenees, before approaching and interacting with the low known as Rolf. Heavy rainfall consequently fell over regions of southern France and northwestern Italy, resulting in widespread landslides and flooding. On 5 November, Rolf slowed while stationed above the Massif Central, maintaining a pressure of . A stationary front between Madrid and Lisbon approached Rolf the same day; its cold front later encountered and became associated with Rolf, an association that persisted for a couple of days.
On 6 November, the cyclone drifted toward the Mediterranean from the southern shoreline of France, with the storm's frontal structure shrinking to in length. Slightly weakening, Rolf neared the Balearic Islands on 7 November, associating with two fronts producing heavy rain throughout Europe, before separating entirely and transitioning into a cut-off low. On the same day, the NOAA began monitoring the system, designating it as 01M, marking the first time that the agency officially monitored a Medicane. A distinct eye-like feature developed while spiral banding and intense convection became evident. At its highest, the Dvorak technique classified the system as T3.0. Convection then gradually decreased, and a misalignment of the mid- and upper-level centers was noted. The cyclone made landfall on 9 November near Hyères in France. The system continued to rapidly weaken on 9 November, before advisories on the system were discontinued later that day, and FU Berlin followed suit by 10 November, removing the name Rolf from its weather maps and declaring the storm's dissipation. The deep warm core of this cyclone persisted for a longer time compared to most of the other documented tropical-like cyclones in the Mediterranean.
At peak intensity, the storm's maximum sustained wind speed reached , with a minimum pressure of . During the nine-day period from 1 to 9 November, Storm Quinn and Rolf dropped prolific amounts of rainfall across southwestern Europe, the vast majority of which came from Rolf, with a maximum total of of rain recorded in southern France. The storm caused at least $1.25 billion (2011 USD) in damages in Italy and France. Fatalities totaled 12 people in Italy and France.
Qëndresa (7–9 Nov 2014)
On 6 November 2014, the low-level circulation centre of Qëndresa formed near the Kerkennah Islands. As the system moved north-northeastwards and combined with an upper-level low from Tunisia early on 7 November, it occluded quickly and, under favourable conditions, intensified dramatically, developing an eye-like feature. Qëndresa directly hit Malta after losing its fronts and developing a better-defined eye, with ten-minute sustained winds at and gusts at . The central pressure was presumed to be . Interacting with Sicily, the cyclone turned northeastwards and started to make an anticlockwise loop. On 8 November, Qëndresa crossed Syracuse in the morning and then significantly weakened. Turning southeastwards and then moving eastwards, Qëndresa moved over Crete, before dissipating over the island on 11 November.
90M/"Trixi" (28–31 Oct 2016)
Early on 28 October 2016, an extratropical cyclone began to develop to the south of Calabria, in the Ionian Sea. The system quickly intensified, attaining wind speeds of as it slowly moved to the west, causing high waves and minor damage to cars near the Maltese city of Valletta; it weakened the following day and began to move eastwards. However, later that day, it began to re-intensify and underwent a tropical transition. At 12:00 UTC on 30 October, the system showed 10-minute sustained winds of . It became a tropical storm on 31 October. After passing over Crete, the storm began to weaken quickly, degenerating into an extratropical low on 1 November. Tropical Storm 90M was also nicknamed "Medicane Trixi" by some media outlets in Europe during its existence.
No fatalities or rainfall statistics were reported for this system, which remained over open waters for most of its duration.
Numa (16–19 Nov 2017)
On 11 November 2017, the remnant of Tropical Storm Rina from the Atlantic contributed to the formation of a new extratropical cyclone west of the British Isles, which absorbed Rina the next day. On 12 November, the new storm was named Numa by the Free University of Berlin. On 14 November 2017, Extratropical Cyclone Numa emerged into the Adriatic Sea. On the following day, while crossing Italy, Numa began to undergo a subtropical transition, though the system was still extratropical by 16 November. The system began to impact Greece as a strong storm on 16 November. Some computer models forecast that Numa could transition into a warm-core subtropical or tropical cyclone within the next few days. On 17 November, Numa completely lost its frontal system. On the afternoon of the same day, Météo France tweeted that Numa had attained the status of a subtropical Mediterranean depression. According to ESTOFEX, Numa showed numerous flags of 10-minute sustained winds in satellite data. Between 18:00 UTC on 17 November and 5:00 UTC on 18 November, Numa acquired evident tropical characteristics, and began to display a hurricane-like structure. ESTOFEX again reported . Later on the same day, Numa made landfall in Greece with a station at Kefalonia reporting peak winds of at . The cyclone rapidly weakened into a low-pressure area, before emerging into the Aegean Sea on 19 November. On 20 November, Numa was absorbed into another extratropical storm approaching from the north.
Numa hit Greece at a time when the soil was already heavily soaked from other storm systems that had arrived before it. The area was forecast to receive more than of additional rain in a 48-hour period starting on 16 November. No rainfall forecasts or measurements are known for the following days while Numa was still battering Greece. Numa resulted in 21 reported deaths. At least 1,500 homes were flooded, and residents had to evacuate their homes. The storm caused an estimated US$100 million in damages in Europe and was the deadliest weather event Greece had experienced since 1977.
Zorbas (27 Sep – 1 October 2018)
A first outlook on the possible development of a shallow warm-core cyclone in the Mediterranean was issued by ESTOFEX on 25 September 2018, and a second, extended outlook was issued on 26 September 2018. On 27 September 2018, an extratropical storm developed in the eastern Mediterranean Sea. Water temperatures of around supported the storm's transition into a hybrid cyclone, with a warm thermal core in the center. The storm moved northeastward toward Greece, gradually intensifying and developing characteristics of a tropical cyclone. On 29 September, the storm made landfall at peak intensity in the Peloponnese, west of Kalamata, where a minimum central pressure of was reported. ESTOFEX reported on Zorbas as "Mediterranean Cyclone 2018M02", with the same pressure of at Kalamata, further estimating the minimum central pressure of the cyclone to be , with one-minute maximum sustained winds of and a Dvorak number of T4.0, which all translate into marginal Category 1 hurricane characteristics for the cyclone.
It is unknown who named the system Zorbas, but the name is officially recognized for a medicane by the Deutscher Wetterdienst. Early on 1 October, Zorbas emerged into the Aegean Sea, while accelerating northeastward. On 2 October, Zorbas moved over northwestern Turkey and dissipated. A cold wake was observed in the Mediterranean Sea, with sea surface temperatures dropping along the track of Zorbas due to strong upwelling.
During its formative stages, the storm caused flash flooding in Tunisia and Libya, with around of rainfall observed. The floods killed five people in Tunisia, while also damaging homes, roads, and fields. The Tunisian government pledged financial assistance to residents whose homes were damaged. In advance of the storm's landfall in Greece, the Hellenic National Meteorological Office issued a severe warning. Several flights were canceled, and schools were closed. The offshore islands of Strofades and Rhodes reported gale-force winds during the storm's passage. A private weather station in Voutsaras measured wind gusts of . The storm spawned a waterspout that moved onshore. Gale-force winds in Athens knocked down trees and power lines. A fallen tree destroyed the roof of a school in western Athens. Dozens of roads were closed due to flooding. In Ioannina, the storm damaged the minaret on the top of the Aslan Pasha Mosque, which dates to 1614. From 29 to 30 September, Zorbas produced flash flooding in Greece and parts of western Turkey, with the storm dropping as much as in Greece and spawning multiple waterspouts. Three people were reported missing in Greece after the flash floods; one person was found dead, but the other two individuals remained missing as of 3 October. Zorbas was estimated to have caused millions of dollars (2018 USD) in damages.
Ianos (14–20 Sep 2020)
On 14 September 2020, a low-pressure area began to develop over the Gulf of Sidra, and it developed quickly over the following hours while slowly moving northwest with a wind speed of around . By 15 September, it had intensified to with a minimum pressure of 1010 hPa, with further development predicted over the coming days. The cyclone had strong potential to become tropical over the next several days due to warm sea temperatures of in the region. Weather models predicted that it would likely hit the west coast of Greece on 17 or 18 September. Ianos gradually intensified over the Mediterranean Sea, acquiring an eye-like feature. Ianos made landfall in Greece at peak intensity at 03:00 UTC on 18 September, with winds peaking near and a minimum central pressure estimated at , equivalent to a minimal Category 2 hurricane.
Greece assigned the system the name "Ianos" (), sometimes anglicized to "Janus", while the German weather service used the name "Udine", Turkish sources used "Tulpar", and Italian sources "Cassilda". As Ianos passed to the south of Italy on 16 September, it produced heavy rain across the southern part of the country and in Sicily. As much as of rain was reported in Reggio Calabria, more than the city's normal monthly rainfall.
Ianos left four people dead and one missing, in addition to causing strong tides in Ionian islands such as Kefalonia, Zakynthos, Ithaca and Lefkada, and winds at Karditsa that brought down trees and power lines and caused landslides.
Apollo (22 Oct – 2 Nov 2021)
Around 22 October 2021, an area of organized thunderstorms formed near the Balearic Islands, with the disturbance becoming more organized and developing an area of low pressure around 24 October. The low started to form a low-level center the next day and moved around the Tyrrhenian Sea, and around 28 October, the low became better organized, prompting forecast offices in Europe to name it.
The most commonly used name for the cyclone is Apollo, which was used by the Free University of Berlin. On the same day, the agency Meteo of the National Observatory of Athens in Greece named it Nearchus, after the voyager of the same name.
The cyclone and its precursor caused heavy rainfall and flooding in Tunisia, Algeria, Southern Italy, and Malta, killing seven people in total. The storm caused over US$245 million (€219 million) in damages.
Blas (5–18 Nov 2021)
On 5 November, the Spanish Meteorological Agency (AEMET) started tracking a low near the Balearic Islands and named it Blas. An orange alert was issued for these islands, for coastal impacts and rain. The north of Catalonia was declared an Orange Zone, as strong winds blew inland from the Spanish Navarre and Aragon. Météo-France also issued a yellow alert for Aude and Pyrénées-Orientales for wind, as well as Corsica for rain. As the system stalled between Sardinia and the Balearic Islands on 8 November, AEMET predicted a strengthening trend for the next two days and maintained its alerts. At 00:00 UTC on 11 November, the system came very close to the Balearic Islands again. On 13 November, the storm developed a spiral structure similar to those of tropical cyclones, while shedding its frontal structure. After striking the islands again, the storm then slowly weakened while drifting back southeastward. On 14 November, the cyclone turned northward, moving over Sardinia and Corsica, before curving back southwestward on 15 November and moving over Sardinia again, while restrengthening in the process. On 16 November, Blas turned eastward once again, passing just south of Sardinia and moving towards Italy, before dissipating over the Tyrrhenian Sea on 18 November.
On 6 November, gusts of were recorded at Es Mercadal and at the lighthouse of Capdepera in the Balearic Islands where waves of hit the coast. Menorca was cut off from the world after the closure of the ports of Mahón and Ciutadella. On 9 and 10 November, Blas brought high winds and heavy rain again to the Balearic Islands, causing at least 36 incidents, mostly flooding, landslides and blackouts. A crew member had to be rescued after his sailboat's mast broke, leaving the boat adrift west of Sóller. On 6 November, a waterspout was reported in Melilla, a Spanish enclave on the coast of Morocco. In France, gusts of were recorded on 7 November at Cap Béar, as well as in Leucate and in Lézignan-Corbières. The storm caused severe weather on the Algerian coast, with exceptional rainfall. On 9 November, a building collapsed in Algiers, following torrential rains on the city, causing the deaths of three people. On 11 November, the heavy rain falling on Algiers caused another landslide to strike houses in the Raïs Hamidou neighborhood, causing the deaths of three other people. From 8 to 11 November, convective bands associated with the storm caused 3 deaths in Sicily, bringing the total death toll to nine people. Damage from the storm has not yet been assessed.
Daniel (4–12 Sep 2023)
Storm Daniel was named by the Hellenic National Meteorological Service on 4 September and was expected to bring heavy rainfall and strong winds to Greece, especially Greece's Thessaly region. On 5 September, the city of Volos was flooded extensively. The village of Zagora recorded 754 mm of rain in 24 hours, a record for Greece. The total rainfall reached 1,096 mm. As of 10 September, sixteen people were confirmed to have died in Greece, seven in Turkey, and four in Bulgaria. Extensive flooding occurred in the plain of Thessaly, in Palamas, Karditsa and the city of Larisa, and hundreds of civilians were rescued. The flood water covered a region of about 720 square kilometers. In the Halkidiki region, several seaside villages such as Ierissos experienced damage due to the heavy wind. In the seaside village of Toroni in Halkidiki, a canoeing woman was swept away by the wind but was later found. The torrential rainfall was a result of a cut-off low. Early on 9 September, the system showed signs of subtropical transition. Later that same day, it developed a warm core, and an ASCAT pass recorded sustained winds of 45 knots before the storm made landfall near Benghazi, Libya. In Libya, the storm caused flooding in Marj, the Jabal al Akhdar district, Benghazi, Susa, and Misrata, as well as the failure of two dams in Derna. The resultant flooding and heavy rain caused the deaths of at least 5,900 people in the country, making it, by a very wide margin, the deadliest Mediterranean tropical-like cyclone on record, prompting a state of emergency to be declared by Libyan authorities.
Other tropical-like cyclones
Numerous other Mediterranean tropical-like cyclones have occurred, but few have been as well-documented as the cyclones in 1969, 1982, 1983, 1995, 1996, 2006, 2011, 2014, 2017, 2018, 2020, 2021, and 2023. These lesser-known systems and their dates are given below.
A study in 2000 identified five notable and well-developed medicanes. A follow-up study in 2013 identified several additional storms with their formation dates, along with additional information on medicanes. A third study, conducted in 2007, identified additional storms with their formation dates. A fourth study from 2013 presented several other cyclones and their dates of development. A survey made by EUMETSAT yielded many more cyclones.
September 1947
September 1973
18–20 August 1976
26 March 1983
7 April 1984
29–30 December 1984
14–18 December 1985
January 1991, 5 December 1991
21–25 October 1994
10–13 December 1996
22–27 September 1997, 30–31 October 1997, 5–8 December 1997
25–27 January 1998
19–21 March 1999, 13 September 1999
10 September 2000, 9 October 2000
27–28 May 2003, 16–19 September 2003, 27–28 September 2003, 8 October 2003
19–21 September 2004, 3–5 November 2004
August 2005, 15–16 September 2005, 22–23 October 2005, 26–28 October 2005, 14–16 December 2005
9 August 2006
19–23 March 2007, 16–18 October 2007, 26 October 2007
June 2008, August 2008, September 2008, 4 December 2008
January 2009, May 2009, twice in September 2009, October 2009
12–14 October 2010, 2–4 November 2010
Twice in February 2012, 13–15 April 2012
"Scott", October 2019
"Trudy" ("Detlef"), November 2019
"Masinissa", November 2020
03M/"Elaina" ("Andira"), December 2020
"Hannelore", January 2023
Climatological statistics
There have been 100 recognized tropical-like cyclones in the Mediterranean Sea between 1947 and 2021, according to the databases of the Laboratory of Climatology and Atmospheric Environment, University of Athens, and METEOSAT. Through the steady accrual of reported and recognized occurrences of tropical-like cyclones (medicanes), the count had reached at least 89 by 15 November 2021. Unlike most Northern Hemisphere cyclone seasons, Mediterranean tropical-like cyclone activity peaks between the months of September and January.
The numbers do not necessarily mean that all occurrences of medicanes have been captured, particularly before the end of the 1980s. With the development (and constant improvement) of satellite-based observations, the number of clearly identified medicanes increased from the 1980s onward. Climate change may have had an additional impact on the frequency of observed medicanes, but this cannot be deduced from the data.
Deadly storms
The following is a list of all medicanes that caused deaths.
Tropical-like cyclones in the Black Sea
On a number of occasions, tropical-like storms similar to the tropical-like cyclones observed in the Mediterranean have formed in the Black Sea, including storms on 21 March 2002, 7–11 August 2002, and 25–29 September 2005. The 25–29 September 2005 cyclone is particularly well-documented and investigated.
See also
1996 Lake Huron cyclone
2006 Central Pacific cyclone
European windstorm (fully extratropical)
South Atlantic tropical cyclone
Subtropical Cyclone Katie
Subtropical Cyclone Lexi
Subtropical Storm 96C
Tropical cyclogenesis
Tropical cyclone basins
Tropical cyclone effects in Europe
Unusual areas of tropical cyclone formation
References
Citations
Sources
External links
Mediterranean Tropical Products Page – Satellite Services Division – Office of Satellite Data Processing and Distribution
Northeast Atlantic and Mediterranean Imagery – NOAA
Climate change and hurricanes
Natural disasters
Tropical cyclone meteorology
Types of cyclone | Mediterranean tropical-like cyclone | [
"Physics"
] | 10,654 | [
"Weather",
"Physical phenomena",
"Natural disasters"
] |
28,057,645 | https://en.wikipedia.org/wiki/Carbonite%20%28explosive%29 | Carbonite was one of the earliest and most successful coal-mining explosives. It is made from ingredients such as nitroglycerin, wood meal, and a nitrate such as sodium nitrate, along with nitrobenzene, sulfur, and diatomaceous earth. Carbonite was invented by Bichel of Schmidt and Bichel.
The term Carbonite can refer to these things:
least commonly, an early explosive from Schmidt and Bichel made of sulphuretted tar oil, , and sodium nitrate,
dynamite made to the specific Carbonite recipe and sold by Schmidt and Bichel under that name, or
an entire class of spin-offs of the original recipe (Arctic Carbonite, Ammonkarbonit, etc.); their common feature is that the percentage of combustible materials (wood meal or flour starch) is so high that most of the carbon in the reaction is bound into carbon monoxide and the temperature of combustion is relatively low. Some safety dynamites are carbonites.
References
Explosives | Carbonite (explosive) | [
"Chemistry"
] | 206 | [
"Explosives",
"Explosions"
] |
28,059,077 | https://en.wikipedia.org/wiki/Salmon%20Creek%20Dam | The Salmon Creek Dam is a concrete arch dam on the Salmon Creek, northwest of Juneau, Alaska. Built in 1914, it is the world's first constant-angle arch variable radius dam. Since it was built, over 100 such dams have been constructed all over the world. The dam was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2022.
The dam was built by the Alaska-Gastineau Mining Company to meet the electrical energy needs of its mining operations. The dam remains fully functional for hydroelectric generation, serves as one of the drinking water sources for the city of Juneau, and supports aquaculture and fishing. When built, the adoption of the constant-angle arch design reduced costs by 20% because less concrete was needed to construct the dam. Of the two hydroelectric power stations built initially (one at an upper level and the other at a lower level), only the lower station remains in use; a new powerhouse built adjoining the old one produces about 10% of Juneau's energy needs. When built, the dam and its two power plants were considered engineering wonders. Both are operated and maintained by Alaska Electric Light & Power (AEL&P).
Topography
The dam was built in a forested, scenic and narrow valley of Salmon Creek, which runs from Salmon Creek Reservoir and flows southwest for to the Gastineau Channel. The dam (marked as Juneau B-2 on USGS maps) is located at the terminus of a tramway built by the Alaska Gastineau Mining Co. specifically for hauling material for constructing the dam and its associated power stations. The power supply was for the mines located at a site to the south at Sheep Creek. A long road had also been built by AEL&P to the upper powerhouse at the base of the Salmon Creek Dam; that powerhouse has since been decommissioned.
The creek runs for a length of within the watershed, which has a creek divide of and a ridge line of . The stream bed has large gravel and bedrock substrata and its gradient decreases downstream of the dam. The average width of the stream is and depth of water is about . The basin is surrounded by hills with steep slopes and with elevation above above sea level. The distance from the dam to Juneau city is , to the Juneau harbor is , and to the Juneau Harbor Seaplane Base is .
Evolution of the arch dam design
During the Roman period, the theory of building a curved dam (arch dam) was known as a means to withstand water pressure and hold the masonry joints together. Against the background of the masonry arch dams that dominated dam building in the 19th century, the introduction of concrete technology brought a dramatic change in the structural design of arch dams, aimed at minimizing the use of construction material and, with it, the cost of construction. Basically, an arch dam is a structure that curves upstream, and the water pressure is transferred either directly to the valley sides or indirectly through concrete abutments. Theoretically, the ideal constant-angle arch for such a dam in a V-shaped valley has a central angle of about 133° of curvature. This theory led to the development of the "constant-angle" (or variable-radius) arch dam, which was also thinner in design. The theory was first put into practice in North America for several dams, beginning in Alaska in 1913 with the building of the Salmon Creek Dam, which was completed in 1914. Bartlett Lee Thane, a mining engineer with the Alaska-Gastineau Mining Company who made a lasting impact on the mining industry, was instrumental in introducing this thin arch dam design, with help from his former football teammates. Lars Jorgensen developed the first specific design of the constant-angle arch dam, although AEL&P credits its then chief engineer, Harry L. Wallenberg, with the design of the Salmon Creek Dam. The idea for such a dam had been considered some 30 years earlier as a way to economize on construction material, but it was only in 1913 that the concept was put into practice by the pioneering mining engineer Thane.
Theory and design
In arch dam design, two basic shapes are adopted: the constant-radius arch and the constant-angle arch, the latter being the more complex design. In the constant-radius arch design, also known as the single-radius arch, the shape of the dam is cylindrical, with a vertical upstream face and a battered downstream face. The constant-angle arch design, by contrast, has a variable radius. In this design, the central opening angle is constant, whereas the arch radius increases from the base to the crest; this increase towards the crest is proportional to the increase in the canyon width of the gorge. Further, according to the theory of constant-angle design, the arch action at the base of the dam exerts the maximum pressure on the base. A V-shaped gorge in particular is considered an ideal setting for building this type of dam. This design ensures substantial savings in construction material compared with the constant-radius arch design. Lars R. Jorgensen, who had conceived this concept, showed that the most economical design of the dam was obtained with an optimum opening angle of 133.6°, requiring the least quantity of concrete. This design was applied with some modifications for the Salmon Creek Dam, which was designed with a constant opening angle of 113° and a radius varying from at the base to at the crest.
The "V" shape of the gorge at the dam site was adjudged ideal for building this type of dam at Salmon Creek site.
An ice pressure of 10 tons per square foot (500 kPa) was considered based on the rim conditions of the reservoir and the design was also checked for an ice pressure of 20 tons per square foot (1,000 kPa); in the latter case the safety factor in concrete under resultant compression values was considered to be 5 and safe. The dam was designed with a base width of (width of is also mentioned in one source) and it tapers to a width of ( is also mentioned in one source) at the top over a dam height of . The geological condition at the base of the dam also dictated the type of dam as a flat rocky bed of width was available to lay the foundation for the arch dam without resorting to large-scale excavations. The dip of the rock was also steep both on upstream and downstream side of the flat base. Bank to bank, the crest length is . The dam envisaged storage of about at the full reservoir level while it is when the water is deep. A steel outlet pipe of diameter was proposed to be embedded in the mid base of the dam as a spillway. The dam was planned with a top elevation of above sea level. The reservoir water spreads to an area of , while the catchment drained is .
Concurrently, two power houses were planned to generate power for the mining industry from the reservoir's water – an upper power station with a 3 MW capacity about downstream of the dam and a lower power station with a 3 MW capacity near the shore of the Gastineau Channel. The variable-radius type shape of the dam adopted for Salmon Creek became a standard for many high and large dams, particularly in western USA. An article from the National Science Foundation's SimScience project notes the following:
Construction
Based on the plans prepared for the concrete arch dam and the two power stations, construction was started in May 1912. Construction of the lower power house on the shore at the confluence of Salmon Creek with Gastineau Channel was initiated first; a transmission line was erected from here to the mines in March 1913, and the construction of the dam was started in April 1913. Construction facilities for the dam were established upstream, where aggregates (fine and coarse) were produced by crushing rocks at the crushing plant. The aggregates were mixed in designed proportions with cement and with a small admixture of lime to manufacture concrete for placing on the dam. The first batch of concrete was placed on the dam on July 14, 1913. The dam was completed over a 13-month period, on August 13, 1914. A concrete quantity of was placed on the dam. Concrete was placed at the rate of per day.
Between 1912 and 1914, two power stations were built to utilize the water stored in the reservoir created by the dam. The first power station, the upper powerhouse titled 'Powerhouse 2', was located below the dam and had an installation of two units of 1.5 MW capacity each, operating under a hydraulic head of . The tail waters from this power station were conveyed through a long power channel to the second power station, titled 'Powerhouse 1', located on the shore of the Gastineau Channel. The power house at this location also had two units of 1.5 MW capacity each, operating under a head of . Thus, the total generation from the two power stations was . In 1916, the average load on the two power stations was . Near Powerhouse 1 on the shore, office buildings, machine shops, saw mills, a canteen and housing facilities for staff were also built.
Rehabilitation
Since its completion, the dam and its two power stations have gone through many rehabilitation measures. Powerhouse 2 was damaged in a fire in 1923, rehabilitated in 1935, and finally abandoned in 1998.
With aging, the dam also needed to be rehabilitated and work was carried out in 1967. Deteriorated concrete was removed, the dam body was regrouted and the upstream face of the dam was provided with a layer of high strength concrete in the top .
The lower power house also underwent major rehabilitation measures. It was shut down in December 1974 due to the high cost of operation and maintenance. In 1984 a new power plant was built adjoining the old powerhouse, with installation of 6.7 MW capacity.
Benefits
Even though the project was initially built for hydroelectric power generation to meet the mining requirements, it has over the years evolved into a multipurpose reservoir with benefits of power generation, drinking water and fisheries.
Hydroelectricity
The rehabilitated dam and the new powerhouse facility at the lower site are now fully functional. The power station generates 29.5 GWh annually, which accounts for nearly 10% of Juneau's power demand. Alaska Electric Light and Power operates and maintains the system.
Drinking water supply
Salmon Creek Reservoir is a secondary source of drinking water, which is provided in conjunction with the Alaska Electric Light and Power Company (AEL&P). Water is drawn from near the Salmon Creek power generation plant, which is located near sea level. Tail waters from the power station are then pumped to a water treatment plant for membrane filtration, chlorination, and pH and alkalinity adjustment with soda ash before the water is supplied to the distribution system. This system was commissioned by the City Borough of Juneau (CBJ) in 2015. The reservoir also serves as a chlorine contact tank, where chlorine is added for purification and given time to react with any pathogens before the water is supplied to the city. However, this source is subject to seasonal high turbidity and to interruptions due to the annual maintenance of the generator units. This system is able to supply of water. The water resources are generally pollution-free, and quality is monitored and tested every month against the drinking-water standards set by the EPA and the Alaska Department of Environmental Conservation (ADEC).
Fisheries
In 1880, Salmon Creek was named by Richard Harris and Joe Juneau during their first visit to the area for gold prospecting. The local people called it Tilhini, meaning "dog salmon", a native name used by the Tlingit; this name is also shown on some early topographic maps. In 1917, fish propagation was established in Salmon Creek Reservoir by introducing 50,000 fry (small, recently hatched fish) into the reservoir with assistance from the Alaska Fish and Game Club, which maintained a hatchery at Juneau. This helped propagate the reservoir's salmon stocks. It is reported that by the time the lake was opened for fishing, the fish measured and could be caught with a fly of . Salmon Creek Reservoir is now open for bait fishing all year round. Many fish varieties are present; species identified include Dolly Varden, brook trout, freshwater trout, chum salmon, and coho salmon, all members of the salmon family.
References
External links
Constant-Angle Arch Dam
AEL&P -Pages 8–12 Salmon Creek Dam
FERC permit document
1914 establishments in Alaska
Arch dams
Buildings and structures in Juneau, Alaska
Dams completed in 1914
Dams in Alaska
Energy infrastructure completed in 1914
Hydroelectric power plants in Alaska
Lakes of Juneau, Alaska
Reservoirs in Alaska
Historic Civil Engineering Landmarks | Salmon Creek Dam | [
"Engineering"
] | 2,611 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
28,059,825 | https://en.wikipedia.org/wiki/Barnette%27s%20conjecture | Barnette's conjecture is an unsolved problem in graph theory, a branch of mathematics, concerning Hamiltonian cycles in graphs. It is named after David W. Barnette, a professor emeritus at the University of California, Davis; it states that every bipartite polyhedral graph with three edges per vertex has a Hamiltonian cycle.
Definitions
A planar graph is an undirected graph that can be embedded into the Euclidean plane without any crossings. A planar graph is called polyhedral if and only if it is 3-vertex-connected, that is, if there do not exist two vertices the removal of which would disconnect the rest of the graph. A graph is bipartite if its vertices can be colored with two different colors such that each edge has one endpoint of each color. A graph is cubic (or 3-regular) if each vertex is the endpoint of exactly three edges. Finally, a graph is Hamiltonian if there exists a cycle that passes through each of its vertices exactly once. Barnette's conjecture states that every cubic bipartite polyhedral graph is Hamiltonian.
By Steinitz's theorem, a planar graph represents the edges and vertices of a convex polyhedron if and only if it is polyhedral. A three-dimensional polyhedron has a cubic graph if and only if it is a simple polyhedron.
And, a planar graph is bipartite if and only if, in a planar embedding of the graph, all face cycles have even length. Therefore, Barnette's conjecture may be stated in an equivalent form: suppose that a three-dimensional simple convex polyhedron has an even number of edges on each of its faces. Then, according to the conjecture, the graph of the polyhedron has a Hamiltonian cycle.
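As a concrete illustration of these definitions, the sketch below (assuming the Python networkx library; the helper names are invented for the example) checks the hypotheses of the conjecture on the graph of the cube, a simple polyhedron whose faces are all squares, and then finds a Hamiltonian cycle by brute force.

```python
# Checks the hypotheses of Barnette's conjecture (cubic, bipartite, planar,
# 3-vertex-connected) on a small example and brute-forces a Hamiltonian cycle.
import itertools
import networkx as nx

def is_barnette_graph(G):
    """True if G is cubic, bipartite, planar and 3-vertex-connected."""
    cubic = all(deg == 3 for _, deg in G.degree())
    planar, _ = nx.check_planarity(G)
    return cubic and nx.is_bipartite(G) and planar and nx.node_connectivity(G) >= 3

def has_hamiltonian_cycle(G):
    """Brute-force search; only feasible for very small graphs."""
    nodes = list(G.nodes())
    first, rest = nodes[0], nodes[1:]
    for perm in itertools.permutations(rest):
        cycle = [first, *perm]
        if all(G.has_edge(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))):
            return True
    return False

cube = nx.hypercube_graph(3)          # the 3-cube: 8 vertices, every face a square
print(is_barnette_graph(cube))        # True -> the conjecture applies to this graph
print(has_hamiltonian_cycle(cube))    # True, as the conjecture predicts
```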
History
P. G. Tait conjectured that every cubic polyhedral graph is Hamiltonian; this came to be known as Tait's conjecture. It was disproven by W. T. Tutte, who constructed a counterexample with 46 vertices; other researchers later found even smaller counterexamples. However, none of these known counterexamples is bipartite. Tutte himself conjectured that every cubic 3-connected bipartite graph is Hamiltonian, but this was shown to be false by the discovery of a counterexample, the Horton graph. Barnette proposed a weakened combination of Tait's and Tutte's conjectures, stating that every bipartite cubic polyhedron is Hamiltonian, or, equivalently, that every counterexample to Tait's conjecture is non-bipartite.
Equivalent forms
Kelmans showed that Barnette's conjecture is equivalent to a superficially stronger statement, that for every two edges e and f on the same face of a bipartite cubic polyhedron, there exists a Hamiltonian cycle that contains e but does not contain f. Clearly, if this statement is true, then every bipartite cubic polyhedron contains a Hamiltonian cycle: just choose e and f arbitrarily. In the other direction, Kelmans showed that a counterexample to this statement could be transformed into a counterexample to the original Barnette conjecture.
Barnette's conjecture is also equivalent to the statement that the vertices of the dual of every cubic bipartite polyhedral graph can be partitioned into two subsets whose induced subgraphs are trees. The cut induced by such a partition in the dual graph corresponds to a Hamiltonian cycle in the primal graph.
Partial results
Although the truth of Barnette's conjecture remains unknown, computational experiments have shown that there is no counterexample with fewer than 86 vertices.
If Barnette's conjecture turns out to be false, then it can be shown to be NP-complete to test whether a bipartite cubic polyhedron is Hamiltonian. If a planar graph is bipartite and cubic but only of connectivity 2, then it may be non-Hamiltonian, and it is NP-complete to test Hamiltonicity for these graphs. Another result was obtained by : if the dual graph can be vertex-colored with colors blue, red and green such that every red-green cycle contains a vertex of degree 4, then the primal graph is Hamiltonian.
Related problems
A related conjecture of Barnette states that every cubic polyhedral graph in which all faces have six or fewer edges is Hamiltonian. Computational experiments have shown that, if a counterexample exists, it would have to have more than 177 vertices. The conjecture was proven by .
The intersection of these two conjectures would be that every bipartite cubic polyhedral graph in which all faces have four or six edges is Hamiltonian. This was proved to be true by .
Notes
References
; Reprinted in Scientific Papers, Vol. II, pp. 85–98
External links
Barnette's Conjecture in the Open Problem Garden, Robert Samal, 2007.
Conjectures
Unsolved problems in graph theory
Planar graphs
Hamiltonian paths and cycles | Barnette's conjecture | [
"Mathematics"
] | 1,016 | [
"Unsolved problems in mathematics",
"Statements about planar graphs",
"Planar graphs",
"Conjectures",
"Unsolved problems in graph theory",
"Planes (geometry)",
"Mathematical problems"
] |
29,407,693 | https://en.wikipedia.org/wiki/Herbert%20Enderton | Herbert Bruce Enderton (April 15, 1936 – October 20, 2010) was an American mathematician. He was a Professor Emeritus of Mathematics at UCLA and a former member of the faculties of Mathematics and of Logic and the Methodology of Science at the University of California, Berkeley.
Enderton also contributed to recursion theory, the theory of definability, models of analysis, computational complexity, and the history of logic.
He earned his Ph.D. at Harvard in 1962. He was a member of the American Mathematical Society from 1961 until his death.
Personal life
He lived in Santa Monica. He married his wife, Cathy, in 1961, and they had two sons, Eric and Bert.
Death
He died from leukemia in 2010.
Selected publications
References
External links
Herbert B. Enderton home page
Enderton publications
Herbert Enderton UCLA lectures on YouTube
1936 births
2010 deaths
20th-century American mathematicians
21st-century American mathematicians
American logicians
Mathematical logicians
American set theorists
Harvard University alumni
University of California, Berkeley College of Letters and Science faculty
University of California, Los Angeles faculty
Place of birth missing | Herbert Enderton | [
"Mathematics"
] | 218 | [
"Mathematical logic",
"Mathematical logicians"
] |
29,409,353 | https://en.wikipedia.org/wiki/Refractory%20lined%20expansion%20joint | A refractory lined expansion joint is an assembly used in a pipeline to allow it to expand and contract as conditions move from hot to cold, helping to ensure that the system remains functional. The refractory lining can be vibra-cast insulation with anchors, abrasion-resistant refractory in hex mesh, gunned insulating refractory, or poured insulating refractory. Refractory lined expansion joints can be hinged, in-line pressure balanced, gimbal, or tied-universal, depending on the temperature, pressure, movement and flow media conditions.
Refractory lined expansion joints are used in extremely high-temperature and high-pressure applications and are designed to withstand extreme environments. The refractory lining within the metallic expansion joint bellows functions to reduce the pipe wall temperature by 300 °F to 450 °F, depending upon the thickness of the refractory lining. The lining also helps to withstand the abrasive material from the catalyst in FCCU applications.
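As a rough illustration of why a thicker lining lowers the metal temperature, the sketch below works through a one-dimensional steady-conduction resistance chain (lining, steel wall, outside air film). All of the numbers, property values, and the function itself are illustrative assumptions, not vendor or design data.

```python
# A minimal sketch (illustrative values only) of the series thermal-resistance idea:
# the wall temperature splits the gas-to-ambient temperature drop in proportion to
# the resistances, and a thicker refractory lining takes a larger share of the drop,
# leaving the steel cooler.  Units: k in BTU/(hr*ft*F), h in BTU/(hr*ft^2*F), inches.
def lining_steel_interface_temp(t_gas, t_amb, lining_in,
                                k_lining=0.2, k_steel=26.0,
                                steel_in=0.5, h_outside=3.0):
    r_lining = (lining_in / 12.0) / k_lining       # conduction through the lining
    r_steel = (steel_in / 12.0) / k_steel          # conduction through the steel wall
    r_air = 1.0 / h_outside                        # convection to ambient air
    q = (t_gas - t_amb) / (r_lining + r_steel + r_air)   # heat flux per unit area
    return t_gas - q * r_lining                    # temperature where lining meets steel

for thickness in (2.0, 3.0, 4.0):                  # trial lining thicknesses, inches
    temp = lining_steel_interface_temp(1300, 100, thickness)
    print(f"{thickness:.0f} in lining -> wall about {temp:.0f} F")
# Thicker lining -> more of the drop occurs in the refractory -> cooler metal wall.
```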
Applications
Fluid catalytic cracking Units (FCCU)
Furnaces
Hot gas turbines
Styrene plants
Fluidized bed boilers
Kilns
Power recovery trains
Thermal oxidizers
References
Structural connectors | Refractory lined expansion joint | [
"Physics",
"Engineering"
] | 236 | [
"Structural engineering",
"Refractory materials",
"Materials",
"Structural connectors",
"Matter"
] |
29,411,500 | https://en.wikipedia.org/wiki/Molybdenum%20oxotransferase | The enzymes of the molybdenum oxotransferase superfamily all contain molybdenum and promote oxygen atom transfer reactions.
Enzymes in this family include DMSO reductase, xanthine oxidase, nitrite reductase, and sulfite oxidase.
See also
Bioinorganic chemistry
References
Metalloproteins
Molybdenum compounds | Molybdenum oxotransferase | [
"Chemistry"
] | 87 | [
"Metalloproteins",
"Bioinorganic chemistry"
] |
29,413,418 | https://en.wikipedia.org/wiki/Bumping%20%28chemistry%29 | Bumping is a phenomenon in chemistry in which homogeneous liquids boiled in a test tube or other container superheat and, upon nucleation, rapid boiling expels the liquid from the container. In extreme cases, the container may be broken.
Cause
Bumping occurs when a liquid is heated or has its pressure reduced very rapidly, typically in smooth, clean glassware. The hardest part of bubble formation is the initial formation of the bubble; once a bubble has formed, it can grow quickly. Because the liquid is typically above its boiling point, when the liquid finally starts to boil, a large vapor bubble is formed that pushes the liquid out of the test tube, typically at high speed. This rapid expulsion of boiling liquid poses a serious hazard to others and oneself in the lab. Furthermore, if a liquid is boiled and cooled back down, the chance of bumping increases on each subsequent boil, because each heating cycle progressively de-gasses the liquid, reducing the number of remaining nucleation sites.
Prevention
The most common way of preventing bumping is to add one or two boiling chips to the reaction vessel. However, these alone may not prevent bumping, and for this reason it is advisable to boil liquids in a boiling tube, a boiling flask, or an Erlenmeyer flask. In addition, a test tube being heated should never be pointed towards any person, in case bumping does occur. Whenever a liquid is cooled below its boiling point and re-heated to a boil, a new boiling chip will be needed, as the pores in the old boiling chip tend to fill with solvent, rendering it ineffective.
A sealed capillary tube can also be placed in a boiling solution to provide a nucleation site, reducing the bumping risk and allowing its easy removal from a system.
Stirring a liquid also lessens the chances of bumping, as the resulting vortex breaks up any large bubbles that might form, and the stirring itself creates bubbles.
References
Phase transitions
Chemical safety | Bumping (chemistry) | [
"Physics",
"Chemistry"
] | 404 | [
"Physical phenomena",
"Phase transitions",
"Chemical accident",
"Phases of matter",
"Critical phenomena",
"nan",
"Statistical mechanics",
"Chemical safety",
"Matter"
] |
29,415,060 | https://en.wikipedia.org/wiki/Kenneth%20C.%20Brugger | Kenneth C. Brugger (16 June 1918 – 25 November 1998) was an American naturalist and self-taught textile engineer. He is noted for discovering, with his wife Catalina Trail, the location of the overwintering sites of the monarch butterfly, Danaus plexippus.
Life and career
Brugger was born 16 June 1918 in Kenosha, Wisconsin. He never attended college but had strong mechanical aptitude and mathematical skills. He worked as a mechanic in his father's garage until World War II, when he worked in the cryptology section of the U. S. Signal Corps. After the war he went to work for Jockey International and rose to the position of chief engineer for Jockey's worldwide knitting operations. He designed innovative knitting machines, including a compactor that minimized shrinkage in knitted underwear. Following a divorce in 1965 he moved to Mexico to work as a textile consultant.
Monarch research
In 1972 Brugger was working in Mexico City. An amateur naturalist, he responded to a notice in a local newspaper written by Fred and Norah Urquhart, Canadian zoologists who were studying the migration patterns of monarch butterflies. The Urquharts had tracked the migration route as far as Texas, where it disappeared, and they thought it might continue into Mexico, so they were seeking volunteers to look for the butterflies.
In 1973, after seeing the ad, Brugger convinced Catalina Aguado to search for the butterflies with him. They searched for several years, first as volunteers, then as paid assistants to the Urquharts. In 1974 he married Catalina Aguado, a fellow butterfly lover. On 2 January 1975, they finally found a mountaintop forest containing millions of resting monarch butterflies. Their discovery was reported as the cover story in National Geographic magazine in August 1976. Eventually a dozen such sites were located and were protected by the Mexican government as ecological reserves. The area is now a World Heritage Site known as the Monarch Butterfly Biosphere Reserve. The sites are popular with ecotourists who admire the beauty of the massed butterflies. Ironically, Brugger could not appreciate that beauty; he was totally colorblind.
Brugger and Catalina Aguado (who later remarried and became known as Catalina Trail) separated in 1991 and eventually divorced; they had one son.
Recognition
Brugger's search and discovery is dramatized in the IMAX film Flight of the Butterflies.
References
1918 births
1998 deaths
Textile engineers
People from Kenosha, Wisconsin
People from Austin, Texas
20th-century American naturalists
United States Army personnel of World War II | Kenneth C. Brugger | [
"Engineering"
] | 526 | [
"Textile engineers",
"Textile engineering"
] |
29,418,139 | https://en.wikipedia.org/wiki/Manganocene | Manganocene or bis(cyclopentadienyl)manganese(II) is an organomanganese compound with the formula [Mn(C5H5)2]n. It is a thermochromic solid that degrades rapidly in air. Although the compound is of little utility, it is often discussed as an example of a metallocene with ionic character.
Synthesis and structure
It may be prepared in the manner common for other metallocenes, i.e., by reaction of manganese(II) chloride with sodium cyclopentadienide:
MnCl2 + 2 CpNa → Cp2Mn + 2 NaCl
In the solid state below 159 °C, manganocene adopts a polymeric structure with every manganese atom coordinated by three cyclopentadienyl ligands, two of which are bridging ligands. Above 159 °C, the solid changes color from amber to pink and the polymer converts to the structure of a normal sandwich complex, i.e., the molecule Mn(η5-C5H5)2.
Reactions
The ionic character of manganocene gives it an unusual pattern of reactivities compared to metallocenes of other transition metals in the same row. It is kinetically labile, being readily hydrolysed by water or hydrochloric acid, and readily forms adducts with two- or four-electron Lewis bases.
Manganocene polymerizes ethylene to high molecular weight linear polyethylene in the presence of methylaluminoxane or diethylaluminium chloride as cocatalysts. It does not polymerize propylene.
References
Metallocenes
Organomanganese compounds
Cyclopentadienyl complexes | Manganocene | [
"Chemistry"
] | 363 | [
"Organometallic chemistry",
"Cyclopentadienyl complexes"
] |
40,221,381 | https://en.wikipedia.org/wiki/Kinoform | A kinoform is a type of computer-generated converging lens that efficiently focuses light to a point. Kinoforms typically use holography to reproduce the optical phase profile of a normal converging lens, albeit on a flat surface.
They can be used in areas such as focusing x-ray radiation, or in the study of nanomaterials. Diamond is often used in kinoform lenses as it has a high thermal conductivity. Higher chromatic aberration is a common drawback.
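The phase profile that a kinoform reproduces can be illustrated with a short numerical sketch. The paraxial thin-lens phase formula and the wavelength, focal length, and radial grid used below are illustrative assumptions added here, not values taken from the article:

```python
import numpy as np

def kinoform_phase(r, focal_length, wavelength):
    """Phase (radians) of an ideal thin converging lens, wrapped to [0, 2*pi).

    Wrapping the smooth lens phase modulo 2*pi is what lets a kinoform
    reproduce a lens profile on an essentially flat surface.
    """
    k = 2 * np.pi / wavelength               # wavenumber
    phase = -k * r**2 / (2 * focal_length)   # paraxial thin-lens phase (sign immaterial once wrapped)
    return np.mod(phase, 2 * np.pi)          # fold into one 2*pi zone

# Example: hypothetical hard-x-ray kinoform, 0.1 nm wavelength, 10 cm focal length
r = np.linspace(0, 50e-6, 5)                 # radial positions up to 50 micrometres
print(kinoform_phase(r, focal_length=0.1, wavelength=0.1e-9))
```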
See also
Metasurface
Further reading
A.F. Isakovic, A. Stein, J.B. Warren, S. Narayanan, M. Sprung, A.R. Sandy, K. Evans-Lutterodt, "Diamond Kinoform Hard X-ray Refractive Lenses: Design, Nanofabrication and Testing," J. Synch. Rad., 16, 8 (2009).
References
X-rays | Kinoform | [
"Physics"
] | 202 | [
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
40,226,710 | https://en.wikipedia.org/wiki/Raft%20%28algorithm%29 | Raft is a consensus algorithm designed as an alternative to the Paxos family of algorithms. It was meant to be more understandable than Paxos by means of separation of logic, but it is also formally proven safe and offers some additional features. Raft offers a generic way to distribute a state machine across a cluster of computing systems, ensuring that each node in the cluster agrees upon the same series of state transitions. It has a number of open-source reference implementations, with full-specification implementations in Go, C++, Java, and Scala. It is named after Reliable, Replicated, Redundant, And Fault-Tolerant.
Raft is not a Byzantine fault tolerant (BFT) algorithm; the nodes trust the elected leader.
Basics
Raft achieves consensus via an elected leader. A server in a Raft cluster is either a leader or a follower, and can be a candidate during an election (when the leader is unavailable). The leader is responsible for log replication to the followers. It regularly informs the followers of its existence by sending a heartbeat message. Each follower has a timeout (typically between 150 and 300 ms) in which it expects the heartbeat from the leader. The timeout is reset on receiving the heartbeat. If no heartbeat is received the follower changes its status to candidate and starts a leader election.
Approach of the consensus problem in Raft
Raft implements consensus by a leader approach. The cluster has one and only one elected leader, which is fully responsible for managing log replication on the other servers of the cluster. This means the leader can decide where new entries are placed and how data flows between it and the other servers without consulting them. A leader leads until it fails or disconnects, in which case the surviving servers elect a new leader.
The consensus problem is decomposed in Raft into two relatively independent subproblems, listed below.
Leader election
When the existing leader fails or when the algorithm initializes, a new leader needs to be elected.
In this case, a new term starts in the cluster. A term is an arbitrary period of time in the life of the cluster for which a new leader needs to be elected. Each term starts with a leader election. If the election is completed successfully (i.e. a single leader is elected) the term keeps going with normal operations orchestrated by the new leader. If the election is a failure, a new term starts, with a new election.
A leader election is started by a candidate server. A server becomes a candidate if it receives no communication from the leader over a period called the election timeout, so it assumes there is no acting leader anymore. It starts the election by increasing the term counter, voting for itself as new leader, and sending a message to all other servers requesting their vote. A server will vote only once per term, on a first-come-first-served basis. If a candidate receives a message from another server with a term number larger than the candidate's current term, then the candidate's election is defeated and the candidate changes into a follower and recognizes the leader as legitimate. If a candidate receives a majority of votes, then it becomes the new leader. If neither happens, e.g., because of a split vote, then a new term starts, and a new election begins.
Raft uses a randomized election timeout to ensure that split vote problems are resolved quickly. This should reduce the chance of a split vote because servers won't become candidates at the same time: a single server will time out, win the election, then become leader and send heartbeat messages to other servers before any of the followers can become candidates.
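The election rules above can be sketched in a few lines of Python. This is an illustrative simplification (message passing, persistence, and the log-based vote check are omitted), and all names are hypothetical rather than taken from any particular Raft implementation:

```python
import random

ELECTION_TIMEOUT_RANGE_MS = (150, 300)   # typical follower timeout range noted above

class Server:
    def __init__(self, name):
        self.name = name
        self.role = "follower"
        self.current_term = 0
        self.voted_for = None                                   # at most one vote per term
        self.timeout_ms = random.uniform(*ELECTION_TIMEOUT_RANGE_MS)  # randomized election timeout

    def on_election_timeout(self, peers):
        """No heartbeat arrived within the randomized timeout: start an election."""
        self.role = "candidate"
        self.current_term += 1
        self.voted_for = self.name           # vote for itself
        votes = 1
        for peer in peers:
            if peer.grant_vote(self.current_term, self.name):
                votes += 1
        if votes > (len(peers) + 1) // 2:    # majority of the whole cluster
            self.role = "leader"
        return self.role

    def grant_vote(self, term, candidate):
        """One vote per term, first-come-first-served; a newer term resets the vote."""
        if term > self.current_term:
            self.current_term = term
            self.voted_for = None
            self.role = "follower"
        if term == self.current_term and self.voted_for in (None, candidate):
            self.voted_for = candidate
            return True
        return False

servers = [Server(f"s{i}") for i in range(5)]
print(servers[0].on_election_timeout(servers[1:]))   # 'leader' in this single-candidate run
```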
Log replication
The leader is responsible for the log replication. It accepts client requests. Each client request consists of a command to be executed by the replicated state machines in the cluster. After being appended to the leader's log as a new entry, each of the requests is forwarded to the followers as AppendEntries messages. In case of unavailability of the followers, the leader retries AppendEntries messages indefinitely, until the log entry is eventually stored by all of the followers.
Once the leader receives confirmation from half or more of its followers that the entry has been replicated, the leader applies the entry to its local state machine, and the request is considered committed. This event also commits all previous entries in the leader's log. Once a follower learns that a log entry is committed, it applies the entry to its local state machine. This ensures consistency of the logs between all the servers through the cluster, ensuring that the safety rule of Log Matching is respected.
In the case of a leader crash, the logs can be left inconsistent, with some logs from the old leader not being fully replicated through the cluster. The new leader will then handle inconsistency by forcing the followers to duplicate its own log. To do so, for each of its followers, the leader will compare its log with the log from the follower, find the last entry where they agree, then delete all the entries coming after this critical entry in the follower log and replace it with its own log entries. This mechanism will restore log consistency in a cluster subject to failures.
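The reconciliation step described above (find the last entry where leader and follower agree, delete everything after it on the follower, and copy the leader's entries) can be sketched as follows. The list-of-(term, command) log representation is a hypothetical simplification, not the wire format of any real implementation:

```python
def repair_follower_log(leader_log, follower_log):
    """Force a follower's log to match the leader's, as a new Raft leader does.

    Each log is a list of (term, command) entries. The follower keeps its
    prefix up to the last entry where both logs agree, then takes the
    leader's entries from that point on.
    """
    match = 0
    # advance while both logs hold the same entry at the same index
    while (match < len(leader_log) and match < len(follower_log)
           and leader_log[match] == follower_log[match]):
        match += 1
    # delete conflicting follower entries and replace them with the leader's
    return follower_log[:match] + leader_log[match:]

leader = [(1, "x=1"), (1, "x=2"), (2, "x=3")]
follower = [(1, "x=1"), (1, "x=9")]           # diverged after the first entry
print(repair_follower_log(leader, follower))  # [(1, 'x=1'), (1, 'x=2'), (2, 'x=3')]
```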
Safety
Safety rules in Raft
Raft guarantees each of these safety properties:
Election safety: at most one leader can be elected in a given term.
Leader append-only: a leader can only append new entries to its logs (it can neither overwrite nor delete entries).
Log matching: if two logs contain an entry with the same index and term, then the logs are identical in all entries up through the given index.
Leader completeness: if a log entry is committed in a given term then it will be present in the logs of the leaders since this term.
State machine safety: if a server has applied a particular log entry to its state machine, then no other server may apply a different command for the same log.
The first four rules are guaranteed by the details of the algorithm described in the previous section. The State Machine Safety is guaranteed by a restriction on the election process.
State machine safety
This rule is ensured by a simple restriction: a candidate can't win an election unless its log contains all committed entries. In order to be elected, a candidate has to contact a majority of the cluster, and given the rules for logs to be committed, it means that every committed entry is going to be present on at least one of the servers the candidates contact.
Raft determines which of two logs (carried by two distinct servers) is more up-to-date by comparing the index term of the last entries in the logs. If the logs have a last entry with different terms, then the log with the later term is more up-to-date. If the logs end with the same term, then whichever log is longer is more up-to-date.
In Raft, the request from a candidate to a voter includes information about the candidate's log. If its own log is more up-to-date than the candidate's log, the voter denies its vote to the candidate. This implementation ensures the State Machine Safety rule.
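The "more up-to-date" comparison can be written as a short helper; the tuple-based form below is an illustrative sketch rather than code from a specific implementation:

```python
def more_up_to_date(last_term_a, last_index_a, last_term_b, last_index_b):
    """Return True if log A is at least as up-to-date as log B (Raft voting rule).

    The later last term wins; if the last terms are equal, the longer log wins.
    """
    return (last_term_a, last_index_a) >= (last_term_b, last_index_b)

# A voter whose own log is more up-to-date than the candidate's denies its vote:
print(more_up_to_date(3, 7, 2, 10))  # True: term 3 beats term 2 despite the shorter log
```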
Follower crashes
If a follower crashes, AppendEntries and vote requests sent by other servers will fail. Such failures are handled by the servers trying indefinitely to reach the downed follower. If the follower restarts, the pending requests will complete. If the request has already been taken into account before the failure, the restarted follower will just ignore it.
Timing and availability
Timing is critical in Raft to elect and maintain a steady leader over time, in order to maintain the availability of the cluster. Stability is ensured by respecting the timing requirement of the algorithm:
broadcastTime << electionTimeout << MTBF
broadcastTime is the average time it takes a server to send a request to every server in the cluster and receive responses. It is relative to the infrastructure used.
MTBF (Mean Time Between Failures) is the average time between failures for a server. It is also relative to the infrastructure.
electionTimeout is the same as described in the Leader Election section. It is something the programmer must choose.
Typical numbers for these values can be 0.5 ms to 20 ms for broadcastTime, which implies that the programmer sets the electionTimeout somewhere between 10 ms and 500 ms. It can take several weeks or months between single server failures, which means the values are sufficient for a stable cluster.
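A minimal sketch of choosing a randomized election timeout that respects the ordering above follows; the safety margin of roughly an order of magnitude on each side is an illustrative assumption, not a prescribed value:

```python
import random

def pick_election_timeout_ms(broadcast_time_ms, mtbf_ms, margin=10):
    """Choose a randomized election timeout with broadcastTime << timeout << MTBF."""
    low = margin * broadcast_time_ms           # comfortably above the broadcast time
    high = 2 * low                             # spread the timeouts out to reduce split votes
    if high * margin > mtbf_ms:
        raise ValueError("no timeout satisfies broadcastTime << electionTimeout << MTBF")
    return random.uniform(low, high)

# A broadcastTime of 20 ms and an MTBF of about a month give timeouts of a few hundred ms
print(pick_election_timeout_ms(20, mtbf_ms=30 * 24 * 3600 * 1000))
```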
Production use of Raft
CockroachDB uses Raft in the Replication Layer.
Etcd uses Raft to manage a highly available replicated log.
Hazelcast uses Raft to provide its CP Subsystem, a strongly consistent layer for distributed data structures.
MongoDB uses a variant of Raft in the replication set.
Neo4j uses Raft to ensure consistency and safety.
RabbitMQ uses Raft to implement durable, replicated FIFO queues.
ScyllaDB uses Raft for metadata (schema and topology changes).
Splunk Enterprise uses Raft in a Search Head Cluster (SHC).
TiDB uses Raft with the storage engine TiKV.
YugabyteDB uses Raft in the DocDB Replication.
ClickHouse uses Raft for its in-house implementation of a ZooKeeper-like service.
Redpanda uses the Raft consensus algorithm for data replication.
Apache Kafka Raft (KRaft) uses Raft for metadata management.
NATS Messaging uses the Raft consensus algorithm for JetStream cluster management and data replication.
Camunda uses the Raft consensus algorithm for data replication.
References
External links
Distributed algorithms
Fault-tolerant computer systems | Raft (algorithm) | [
"Technology",
"Engineering"
] | 1,928 | [
"Fault-tolerant computer systems",
"Reliability engineering",
"Computer systems"
] |
4,036,703 | https://en.wikipedia.org/wiki/Lime%20mortar | Lime mortar or torching is a masonry mortar composed of lime and an aggregate such as sand, mixed with water. It is one of the oldest known types of mortar, used in ancient Rome and Greece, when it largely replaced the clay and gypsum mortars common to ancient Egyptian construction.
With the introduction of Portland cement during the 19th century, the use of lime mortar in new constructions gradually declined. This was largely due to the ease of use of Portland cement, its quick setting, and high compressive strength. However, the soft and porous properties of lime mortar provide certain advantages when working with softer building materials such as natural stone and terracotta. For this reason, while Portland cement continues to be commonly used in new brick and concrete construction, its use is not recommended in the repair and restoration of brick and stone-built structures originally built using lime mortar.
Despite its enduring utility over many centuries (Roman concrete), lime mortar's effectiveness as a building material has not been well understood; time-honoured practices were based on tradition, folklore and trade knowledge, vindicated by the vast number of old buildings that remain standing. Empirical testing in the late 20th century provided a scientific understanding of its remarkable durability. Both professionals and do-it-yourself homeowners can purchase lime putty mortar (and have their historical mortar matched for both color and content) from companies that specialize in historical preservation and sell pre-mixed mortar in small batches.
Etymology
Lime comes from Old English lim ('sticky substance, birdlime, mortar, cement, gluten'), and is related to Latin limus ('slime, mud, mire'), and linere ('to smear'). Mortar is a mixture with cement and comes from Old French mortier ('builder's mortar, plaster; bowl for mixing') in the late 13th century and Latin mortarium ('mortar'). Lime is a cement, i.e. a binder or glue that holds things together, but the word 'cement' is usually reserved for Portland cement.
History
Lime mortar appeared in antiquity. The ancient Egyptians were the first to use lime mortars, about 6,000 years ago; they used lime to plaster the Giza pyramids. In addition, the Egyptians also incorporated various limes into their religious temples as well as their homes. Indian traditional structures were built with lime mortar, some of which are more than 4,000 years old (such as Mohenjo-daro, a heritage monument of the Indus Valley civilization in Pakistan).
The Roman Empire used lime-based mortars extensively. Vitruvius, a Roman architect, provided basic guidelines for lime mortar mixes. The Romans created hydraulic mortars that contained lime and a pozzolan such as brick dust or volcanic ash. These mortars were intended to be used in applications where the presence of water would otherwise not allow the mortar to harden (carbonate) properly.
Uses
Lime mortar today is primarily used in the conservation of buildings originally built using it, but may be used as an alternative to ordinary Portland cement. It is made principally of lime (hydraulic or non-hydraulic, as explained below), water, and an aggregate such as sand. Portland cement has proven to be incompatible with lime mortar because it is harder, less flexible, and impermeable. These qualities lead to premature deterioration of soft, historic bricks, so traditionally, low-temperature-fired lime mortars are recommended for use with existing mortar of a similar type or for reconstruction of buildings using historically correct methods. In the past, lime mortar tended to be mixed on site with whatever sand was locally available. Since the sand influences the colour of the lime mortar, colours of pointing mortar can vary dramatically from district to district.
Hydraulic and non-hydraulic lime
Hydraulic lime contains substances which set by hydration, so it can set underwater. Non-hydraulic lime sets by carbonation and so needs exposure to carbon dioxide in the air; the material cannot set underwater or inside a thick wall. For natural hydraulic lime (NHL) mortars, the lime is obtained from limestone naturally containing a sufficient percentage of silica and/or alumina. Artificial hydraulic lime is produced by introducing specific types and quantities of additives to the source of lime during the burning process, or adding a pozzolan to non-hydraulic lime. Non-hydraulic lime is produced from a high purity source of calcium carbonate such as chalk, limestone, or oyster shells.
Non-hydraulic lime
Non-hydraulic lime is primarily composed of (generally greater than 95%) calcium hydroxide, Ca(OH)2.
Non-hydraulic lime is produced by first heating sufficiently pure calcium carbonate to between 954° and 1066 °C, driving off carbon dioxide to produce quicklime (calcium oxide). This is done in a lime kiln. The quicklime is then slaked: hydrated by being thoroughly mixed with enough water to form a slurry (lime putty), or with less water to produce dry powder. This hydrated lime (calcium hydroxide) naturally turns back into calcium carbonate by reacting with carbon dioxide in the air, the entire process being called the lime cycle.
The slaking process involved in creating a lime putty is an exothermic reaction which initially creates a liquid of a creamy consistency. This is then matured for 2 to 3 months—depending upon environmental conditions—to allow time for it to condense and mature into a lime putty.
A matured lime putty is thixotropic, meaning that when a lime putty is agitated it changes from a putty into a more liquid state. This aids its use for mortars as it makes a mortar easier to work with. If left to stand following agitation a lime putty will slowly revert from a thick liquid to a putty state.
As well as calcium-based limestone, dolomitic limes can be produced which are based on calcium magnesium carbonate.
A frequent source of confusion regarding lime mortar stems from the similarity of the terms hydraulic and hydrated.
Hydrated lime is any lime other than quicklime, and can refer to either hydraulic (hardens under water) or non-hydraulic (does not harden under water) lime.
Lime putty is always non-hydraulic and will keep indefinitely stored under water. As the name suggests, lime putty is in the form of a putty made from just lime and water.
If the quicklime is slaked with an excess of water then putty or slurry is produced. If just the right quantity of water is used, the result is a dry material (any excess water escaping as steam during heating). This is ground to make hydrated lime powder.
Hydrated, non-hydraulic lime powder can be mixed with water to form lime putty. Before use putty is usually left in the absence of carbon dioxide (usually under water) to mature. Putty can be matured for as little as 24 hours or for many years; an increased maturation time improves the quality of the putty. There is an argument that a lime putty which has been matured for an extended period (over 12 months) becomes so stiff that it is difficult to work.
There is some dispute (Roman concrete) as to the comparative quality of putty formed from dry hydrated lime compared with that produced as putty at the time of slaking. It is generally agreed that the latter is preferable. A hydrated lime will produce a material which is not as "fatty", a common trade term for compounds that have a smoother, buttery texture when worked. Often, due to lengthy and poor storage, mortar produced from hydrated lime will exhibit longer carbonatation periods as well as lower compressive strengths.
Non-hydraulic lime takes longer to set and is weaker than hydraulic lime, and should not be allowed to freeze before it is well set. Although the setting process can be slow, the drying time of a lime mortar must be regulated at a slow rate to ensure a good final set. A rapidly dried lime mortar will result in a low-strength, poor-quality final mortar often displaying shrinkage cracks. In practice, lime mortars are often protected from direct sunlight and wind with damp hessian sheeting or sprayed with water to control the drying rates. But it also has the quality of autogenous healing (self healing) where some free lime dissolves in water and is redeposited in any tiny cracks which form.
Oyster shell mortar
In the tidewater region of Maryland and Virginia, oyster shells were used to produce quicklime during the colonial period. Similar to other materials used to produce lime, the oyster shells are burned. This can be done in a lime rick instead of a kiln. Burning shells in a rick is something that Colonial Williamsburg and the recreation of Ferry Farm have had to develop from conjecture and in-the-field learning. The rick that they constructed consists of logs set up in a circle that burn slowly, converting oysters that are contained in the wood pile to an ashy powder. The burnt shell can then be slaked and turned into lime putty.
Mortars using oyster shells can sometimes be identified by the presence of small bits of shell in the exposed mortar joint. In restoration masonry, the bits of shell are sometimes exaggerated to give the viewer the impression of authenticity. Unfortunately, these modern attempts often contain higher than necessary ratios of Portland cement. This can cause failures in the brick if the mortar joint is stronger than the brick elements.
Hydraulic lime
Hydraulic lime sets by reaction with water called hydration.
When a stronger lime mortar is required, such as for external or structural purposes, a pozzolan can be added, which improves its compressive strength and helps to protect it from weathering damage. Pozzolans include powdered brick, heat treated clay, silica fume, fly ash, and volcanic materials. The chemical set imparted ranges from very weak to almost as strong as Portland cement.
This can also assist in creating more regulated setting times of the mortar as the pozzolan will create a hydraulic set, which can be of benefit in restoration projects when time scales and ultimately costs need to be monitored and maintained.
Hydraulic lime can be considered, in terms both of properties and manufacture, as part-way between non-hydraulic lime and Portland cement. The limestone used contains sufficient quantities of clay and/or silica. The resultant product will contain dicalcium silicate but unlike Portland cement not tricalcium silicate.
It is slaked enough to convert the calcium oxide to calcium hydroxide but not with sufficient water to react with the dicalcium silicate. It is this dicalcium silicate which in combination with water provides the setting properties of hydraulic lime.
Aluminium and magnesium also produce a hydraulic set, and some pozzolans contain these elements.
There are three strength grades for natural hydraulic lime, laid down in the European Norm EN459: NHL2, NHL3.5 and NHL5. The numbers stand for the minimum compressive strength at 28 days in newtons per square millimeter (N/mm2). For example, the NHL 3.5 strength ranges from 3.5 N/mm2 (510 psi) to 10 N/mm2 (1,450 psi). These are similar to the old classification of feebly hydraulic, moderately hydraulic and eminently hydraulic, and although different, some people continue to refer to them interchangeably. The terminology for hydraulic lime mortars was improved by the skilled French civil engineer Louis Vicat in the 1830s, replacing the older system of water limes and feebly, moderately and eminently hydraulic limes. Vicat published his work following research into the use of lime mortars whilst building bridges and roads. The French company Vicat still produces natural cements and lime mortars. Names of lime mortars were so varied and conflicting across the European continent that the reclassification has greatly improved the understanding and use of lime mortars.
Mix
Traditional lime mortar is a combination of lime putty and aggregate (usually sand). A typical modern lime mortar mix would be 1 part lime putty to 3 parts washed, well graded, sharp sand. Other materials have been used as aggregate instead of sand. The theory is that the voids of empty space between the sand particles account for 1/3 of the volume of the sand. The lime putty, when mixed at a 1:3 ratio, fills these voids to create a compact mortar. Analysis of mortar samples from historic buildings typically indicates a higher ratio of around 1 part lime putty to 1.5 parts aggregate/sand was commonly used. This equates to approximately 1 part dry quicklime to 3 parts sand. A traditional coarse plaster mix also had horse hair added for reinforcing and control of shrinkage, important when plastering to wooden laths and for base (or dubbing) coats onto uneven surfaces such as stone walls, where the mortar is often applied in thicker coats to compensate for the irregular surface levels.
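The void-filling reasoning behind the 1:3 ratio can be checked with a small calculation. The sketch below assumes that the lime putty only fills the voids between the sand grains (so the mortar volume is roughly the sand volume); the numbers are illustrative, not a mixing specification:

```python
def putty_needed(sand_volume, void_fraction=1/3):
    """Lime putty volume needed to fill the voids in a given volume of sand."""
    return sand_volume * void_fraction

def mix_ratio(sand_volume, void_fraction=1/3):
    """Express the mix as 'putty : sand' parts, e.g. 1:3 for a 1/3 void fraction."""
    putty = putty_needed(sand_volume, void_fraction)
    return f"1 : {sand_volume / putty:g}"

print(mix_ratio(3))          # 1 : 3  -- the classic ratio for a 1/3 void fraction
print(mix_ratio(3, 0.45))    # a poorly graded sand with larger voids needs a richer mix
```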
If shrinkage and cracking of the lime mortar does occur, this can be a result of one or more of the following:
The sand being poorly graded or with a particle size that is too small
The mortar being applied too thickly (Thicker coats increase the possibility of shrinkage, cracking and slumping)
Too much suction from the substrate
High air temperatures or direct sunlight which force dry the mortar
High water content in the lime mortar mix
Poor quality or unmatured lime putty
A common method for mixing lime mortar with powdered lime is as follows:
Gather your ingredients: sand, lime, and water.
Measure out your ratio of sand to lime, for example 3 buckets of sand, and 1 bucket of lime for a 3:1 ratio.
Mix the dry ingredients thoroughly so that all the sand is coated with lime and no chunks of sand or lime are visible.
Reserve some portion of the dry ingredients by removing it from your mixing vessel. The amount reserved can vary, but a safe starting point is about 1/4 of the batch. This will be added in later to fine tune the dryness of the mix.
Measure out water. How much depends on how wet you want your mix to be, and how damp/wet your sand is. A good starting point is 1 quart of water per gallon of sand.
Add about 2/3 of the water to your dry ingredients and mix until even consistency.
Add the reserved dry ingredients and/or the remaining water to get a mix you like. It takes time to know what works well, and the recipe can change depending on the temperature, humidity, moisture in the sand, type of brick, and task at hand (laying brick may warrant a wetter mix, while pointing may require a drier one).
To test the mix as you are making it, you can use a trowel, or pat the mortar with your hand to see how much moisture and "cream" come to the surface.
Remember to thoroughly wet your brick prior to using lime mortar. Old brick can be extremely porous; a 4 lb brick can hold a pint of water. The bricks should be saturated, but dry on the surface, prior to laying or pointing. Excess water can cause the lime to run and leave streaks.
Hair reinforcement
Hair reinforcement is common in lime plaster, and many types of hair and other organic fibres can be found in historic plasters. However, organic material in lime will degrade in damp environments, particularly on damp external renders. This problem has given rise to the use of polypropylene fibres in new lime renders.
Properties
Lime mortar is not as strong in compression as Portland cement based mortar, but both are sufficiently strong for construction of non-high-rise domestic properties.
Lime mortar does not adhere as strongly to masonry as Portland cement. This is an advantage with softer types of masonry, where use of cement in many cases eventually results in cement pulling away some masonry material when it reaches the end of its life. The mortar is a sacrificial element which should be weaker than the bricks so it will crack before the bricks. It is less expensive to replace cracked mortar than cracked bricks.
Under cracking conditions, Portland cement breaks, whereas lime often produces numerous microcracks if the amount of movement is small. These microcracks recrystallise through the action of 'free lime' effectively self-healing the affected area.
Historic buildings are frequently constructed with relatively soft masonry units (e.g. soft brick and many types of stone), and minor movement in such buildings is quite common due to the nature of the foundations. This movement breaks the weakest part of the wall, and with Portland cement mortar this is usually the masonry. When lime mortar is used, the lime is the weaker element, and the mortar cracks in preference to the masonry. This results in much less damage, and is relatively simple to repair.
Lime mortar is more porous than cement mortars, and it wicks any dampness in the wall to the surface, where it evaporates. Thus any salt content in the water crystallises on the lime, damaging the lime and thus saving the masonry. Cement, on the other hand, allows water to evaporate less readily than soft brick, so damp issues are liable to cause salt formation and spalling on brick surfaces and consequent disintegration of bricks. This damp evaporation ability is widely referred to as 'breathability'.
Lime mortar should not be used below temperatures of and takes longer to set so it should be protected from freezing for three months. Because of its faster set, hydraulic lime may not need as much time before freezing temperatures begin.
Usually any dampness in the wall will cause the lime mortar to change colour, indicating the presence of moisture. The effect will create an often mottled appearance of a limewashed wall. As the moisture levels within a wall alter, so will the shade of a limewash. The darker the shade of limewash, the more pronounced this effect will become.
A load of mixed lime mortar may be allowed to sit as a lump for some time, without it drying out (it may get a thin crust). When ready to use, this lump may be remixed ('knocked up') again and then used. Traditionally on building sites, prior to the use of mechanical mixers, the lime putty (slaked on site in a pit) was mixed with sand by a labourer who would "beat and ram" the mix with a "larry" (a wide hoe with large holes). This was then covered with sand and allowed to sit for a while (from days to weeks) - a process known as "banking". This lump was then remixed and used as necessary. This process cannot be done with Portland cement.
Lime with Portland cement
The combination of Portland cement and lime is used for stabilization and solidification of the ground, through the establishment of lime cement columns or stabilization of the entire upper mass volume. The method provides an increase in strength when it comes to vibrations, stability and settling. The method is more common and widespread when building roads and railways, for example (Queen Eufemias street in central Oslo, the E18 at Tønsberg, etc.).
For preservation purposes, Type N and Type O mortars are often used. A Type N mortar is 1 part Portland, 1 part lime and 6 parts sand or other aggregate (1:1:6). A Type O mortar is 1 part Portland, 2 parts lime and 9 parts sand or other aggregate (1:2:9). Straight lime mortar has no Portland, and is 1 part lime to 3 parts sand or other aggregate. The addition of cement or other pozzolan to decrease cure times is referred to as "gauging". Other than Portland, ash and brick dust have been used to gauge mortars.
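The ratios above can be turned into batch quantities with a short helper. The proportions are those given in this section; treating "parts" as equal volumes is an assumption made here for illustration:

```python
# Portland : lime : sand proportions by volume, as given in this section
MORTAR_RATIOS = {
    "Type N": (1, 1, 6),
    "Type O": (1, 2, 9),
    "straight lime": (0, 1, 3),
}

def batch(mortar_type, total_volume):
    """Split a total mixed volume into Portland, lime, and sand parts."""
    parts = MORTAR_RATIOS[mortar_type]
    unit = total_volume / sum(parts)
    return {name: unit * p for name, p in zip(("portland", "lime", "sand"), parts)}

# e.g. 40 litres of Type O mortar for repointing
print(batch("Type O", 40))   # {'portland': 3.33..., 'lime': 6.66..., 'sand': 30.0}
```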
For historic restoration purposes, and restoration work involving repointing or brick replacement, masons must discover the original brick and mortar and repair it with a similar material. The National Park Service provides guidance for proper masonry repointing through Preservation Brief 2. In general, Brief 2 suggests that repointing should be done with a similar or weaker mortar. Therefore, a straight lime mortar joint should be repointed in kind. Due to the popularity of Portland cement, this often is not the case. A wall system needs a balance between the mortar and brick that allows the mortar to be the weak part of the unit.
When mortar is stronger than the brick, it prevents any natural movement in the wall and the faces of the brick will begin to deteriorate, a process known as spalling, in which the outer face of a brick degrades and can flake off or turn to powder. There is also a natural movement of water through a masonry wall. A strong Portland cement mix will prevent a free flow of water from a moist to a dry area. This can cause rising damp to be trapped within the wall and create system failures. If moisture cannot escape into the air, it will cause damage to the wall structure. Water freezing in the wall is another cause of spalling.
In restoration work of pre-20th century structures, there should be a high ratio of lime and aggregate to Portland. This reduces the compressive strength of the mortar but allows the wall system to function better. The lime mortar acts as a wick that helps to pull water from the brick. This can help to prevent the older brick from spalling. Even when the brick is a modern, harder element, repointing with a higher ratio lime mortar may help to reduce rising damp.
It may not be advisable for all consumers to use a straight lime mortar. With no Portland in the mix, there is less control over the setting of the mortar. In some cases, a freeze thaw cycle will be enough to create failure in the mortar joint. Straight lime mortar can also take a long time to fully cure and therefore work needs to be performed at a time of year where the weather conditions are conducive to the mortar setting properly. Those conditions are not only above freezing temperatures but also drier seasons. To protect the slow curing mortar from damp, a siloxane can be added to the surface. With historic structures, this may be a controversial strategy as it could have a detrimental effect to the historic fabric.
The presence of Portland allows for a more stable mortar. The stability and predictability make the mixed mortar more user friendly, particularly in applications where entire wall sections are being laid. Contractors and designers may prefer mixes that contain Portland due to the increased compressive strength over a straight lime mortar. As many pre-Portland mix buildings are still standing and have original mortar, the arguments for greater compressive strength and ease of use may be more a result of current practice and a lack of understanding of older techniques.
See also
Energetically modified cement
Hempcrete
Plastering
Sticky rice mortar
Whitewash
References
Further reading
Burnell, George Rowdon; Rudimentary Treatise on Limes, Cements, Mortars, Concretes, Mastics, Plastering, Etc.
Dibdin, William Joseph; Lime Mortar & Cement: Their Characteristics and Analyses. With an Account of Artificial Stone and Asphalt
Gilmore, Quincy A.; Limes Hydraulic Cement and Mortars
Hodgson, Fred T.; Concrete, Cements, Mortars, Artificial Marbles, Plasters and Stucco: How to Use and How to Prepare Them
Lazell, Ellis Warren; Lime Mortar & Cement: Their Characteristics and Analyses. With an Account of Artificial Stone and Asphalt
External links
The following are mid-19th-century technical articles on the respective subjects: lime mortar, cement making on a small scale, cement making on a large scale and mortar.
Gerard Lynch, 'The Myth in the Mix: The 1:3 Ratio of Lime to Sand', The Building Conservation Directory, 2007
Building materials
Cement
Masonry | Lime mortar | [
"Physics",
"Engineering"
] | 4,893 | [
"Masonry",
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Matter",
"Building materials"
] |
38,931,462 | https://en.wikipedia.org/wiki/Carbon%20additive | Carbon additive is a product that is added to molten steel. Carbon additive includes calcined petroleum coke, graphite petroleum coke, calcined anthracite coal, electrical calcined anthracite, and natural graphite. For the steel-making industry, the most suitable carbon additive is calcined petroleum coke with fixed carbon of 98.5% min. Sulfur in calcined petroleum coke is a crucial element, for sulfur impacts the quality of steel. The lower the sulfur, the better the quality of the calcined petroleum coke. The sulfur content of calcined petroleum coke is decided by the sulfur content in the petroleum coke. Northeast China is the only source of low-sulfur (≤ 0.5%) petroleum coke in the world. G-high carbon has been a source for many trading companies and metallurgical factories looking for qualified carbon additives.
References
Steelmaking | Carbon additive | [
"Chemistry"
] | 182 | [
"Metallurgical processes",
"Steelmaking"
] |
38,937,108 | https://en.wikipedia.org/wiki/Neutral%20plane | In mechanics, the neutral plane or neutral surface is a conceptual plane within a beam or cantilever. When loaded by a bending force, the beam bends so that the inner surface is in compression and the outer surface is in tension. The neutral plane is the surface within the beam between these zones, where the material of the beam is not under stress, either compression or tension.
As there is no lengthwise stress force on the neutral plane, there is no strain or extension either: when the beam bends, the length of the neutral plane remains constant. Any line within the neutral plane parallel to the axis of the beam is called the deflection curve of the beam.
To show that every beam must have a neutral plane, the material of the beam can be imagined to be divided into narrow fibers parallel to its length. When the beam is bent, at any given cross-section the region of fibers near the concave side will be under compression, while the region near the convex side will be under tension. Because the stress in the material must be continuous across any cross section, there must be a boundary between the regions of compression and tension at which the fibers have no stress. This is the neutral plane.
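The fibre picture above can be made concrete with a small numerical sketch. It uses the standard Euler–Bernoulli assumption that strain varies linearly with distance from the neutral plane; the rectangular cross-section, material values, and curvature below are illustrative assumptions, not data from the article:

```python
import numpy as np

E = 200e9                   # Young's modulus in Pa (steel, illustrative)
curvature = 1e-3            # beam curvature in 1/m
depth, width = 0.1, 0.05    # rectangular cross-section, metres

# distance y measured from the mid-depth of the section
y = np.linspace(-depth / 2, depth / 2, 11)
strain = -curvature * y     # linear strain: compression on the concave side, tension on the convex
stress = E * strain

# stress changes sign exactly where y = 0: that surface is the neutral plane
print(stress)

# the net axial force over the cross-section sums to zero, consistent with the neutral
# plane of a homogeneous rectangular section passing through mid-depth
print(float(np.sum(stress * width) * (y[1] - y[0])))
```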
Structural design
The location of the neutral plane can be an important factor in monocoque structures and pressure vessels. If the structure is a membrane supported by strength ribs, then placing the skin along the neutral surface avoids either compression or tension forces upon it. If the skin is already under external pressure, then this reduces the total force to which it is subject.
In the design of submarines this has been an important, although subtle, issue. The US Fleet submarines of World War II had a hull section that was not quite circular, causing the nodal circle to separate from the neutral plane, giving rise to additional stresses. The original design was framed internally: this needed trial-and-error design refinement to produce acceptable dimensions for the rib scantlings. The designer Andrew I. McKee at Portsmouth Naval Shipyard developed an improved design. By placing the frames partly inside the hull and partly outside, the neutral axis could be rearranged to coincide with the nodal circle once more. This gave no resultant bending moment on the frames and so allowed a lighter and more efficient structure.
Metrology
The property of remaining a constant length under load has been made use of in length metrology. When metal bars were developed as physical standards for length measures, they were calibrated with marks made along the neutral plane. This avoided the minuscule changes in length owing to the bar sagging under its own weight.
The first length standards to use this technique were rectangular section solid bars. A blind hole was bored at each end, to the depth of the neutral plane, and the calibration marks were made at this depth. This was inconvenient, as it was impossible to measure directly between the two marks, but only with an offset trammel down the wells.
A more convenient approach was used for the international prototype metre of 1870, a bar of platinum-iridium alloy which served as the definition of the metre from 1889 to 1960, when the CGPM redefined the metre based on a krypton standard. This bar was made with a splayed cross section called the Tresca section, resembling an X with a connecting bar, or alternatively an H with the sides bent at an angle. One surface of the centre crossbar of the H was designed to coincide with the neutral plane, and the calibration marks defining the metre were scribed into this surface.
See also
Airy points
Neutral axis
Zero force member
References
Statics
Planes (geometry) | Neutral plane | [
"Physics",
"Mathematics"
] | 745 | [
"Statics",
"Mathematical objects",
"Classical mechanics",
"Infinity",
"Planes (geometry)"
] |
43,165,560 | https://en.wikipedia.org/wiki/Radiocarbon%20%28journal%29 | Radiocarbon is a scientific journal devoted to the topic of radiocarbon dating.
It was founded in 1959 as a supplement to the American Journal of Science, and is an important source of data and information about radiocarbon dating. It publishes many radiocarbon results, and since 1979 it has published the proceedings of the international conferences on radiocarbon dating. The journal is published six times per year by Cambridge University Press.
See also
Carbon-14
References
External links
Radiocarbon at the University of Arizona
Radiocarbon archives (1959-2012) at the University of Arizona Campus Repository
Radiocarbon dating
Academic journals established in 1959
Cambridge University Press academic journals
Bimonthly journals
University of Arizona | Radiocarbon (journal) | [
"Physics",
"Chemistry"
] | 137 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |
43,165,864 | https://en.wikipedia.org/wiki/Representation%20on%20coordinate%20rings | In mathematics, a representation on coordinate rings is a representation of a group on coordinate rings of affine varieties.
Let X be an affine algebraic variety over an algebraically closed field k of characteristic zero with the action of a reductive algebraic group G. G then acts on the coordinate ring k[X] of X as a left regular representation: (g · f)(x) = f(g⁻¹ · x). This is a representation of G on the coordinate ring of X.
The most basic case is when X is an affine space (that is, X is a finite-dimensional representation of G) and the coordinate ring is a polynomial ring. The most important case is when X is a symmetric variety; i.e., the quotient of G by a fixed-point subgroup of an involution.
Isotypic decomposition
Let k[X](λ) denote the sum of all G-submodules of k[X] that are isomorphic to the simple module V(λ); it is called the V(λ)-isotypic component of k[X]. Then there is a direct sum decomposition:
k[X] = ⊕_λ k[X](λ),
where the sum runs over all simple G-modules V(λ). The existence of the decomposition follows, for example, from the fact that the group algebra of G is semisimple since G is reductive.
X is called multiplicity-free (or a spherical variety) if every irreducible representation of G appears at most one time in the coordinate ring; i.e., dim Hom_G(V(λ), k[X]) ≤ 1 for every simple G-module V(λ).
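As an added illustration (not taken from the article), a minimal multiplicity-free example is the multiplicative group G = Gm acting on the affine line by scaling: every simple G-module is a character, and each character occurs at most once in the coordinate ring.

```latex
% G = G_m acting on X = A^1 by t . x = tx, hence (t . f)(x) = f(t^{-1}x) on functions.
% The coordinate ring splits into one-dimensional isotypic pieces:
\[
  k[\mathbb{A}^1] \;=\; k[x] \;=\; \bigoplus_{n \ge 0} k\,x^{n},
  \qquad t \cdot x^{n} = t^{-n} x^{n},
\]
% so each character of G_m appears at most once, and A^1 is multiplicity-free
% for this torus action.
```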
For example, k[G] is multiplicity-free as a G × G-module. More precisely, given a closed subgroup H of G, define
by setting and then extending by linearity. The functions in the image of are usually called matrix coefficients. Then there is a direct sum decomposition of -modules (N the normalizer of H)
,
which is an algebraic version of the Peter–Weyl theorem (and in fact the analytic version is an immediate consequence.) Proof: let W be a simple -submodules of . We can assume . Let be the linear functional of W such that . Then .
That is, the image of contains and the opposite inclusion holds since is equivariant.
Examples
Let v be a B-eigenvector and X the closure of the orbit G·v. It is an affine variety called the highest weight vector variety by Vinberg–Popov. It is multiplicity-free.
The Kostant–Rallis situation
See also
Algebra representation
Notes
References
Group theory
Representation theory
Representation theory of groups | Representation on coordinate rings | [
"Mathematics"
] | 479 | [
"Group theory",
"Representation theory",
"Fields of abstract algebra"
] |
43,166,551 | https://en.wikipedia.org/wiki/LISA%20Pathfinder | LISA Pathfinder, formerly Small Missions for Advanced Research in Technology-2 (SMART-2), was an ESA spacecraft that was launched on 3 December 2015 on board Vega flight VV06. The mission tested technologies needed for the Laser Interferometer Space Antenna (LISA), an ESA gravitational wave observatory planned to be launched in 2035. The scientific phase started on 8 March 2016 and lasted almost sixteen months. In April 2016 ESA announced that LISA Pathfinder demonstrated that the LISA mission is feasible.
The estimated mission cost was €400 million.
Mission
LISA Pathfinder placed two test masses in a nearly perfect gravitational free-fall, and controlled and measured their relative motion with unprecedented accuracy. The laser interferometer measured the relative position and orientation of the masses to an accuracy of less than 0.01 nanometres, a technology estimated to be sensitive enough to detect gravitational waves by the follow-on mission, the Laser Interferometer Space Antenna (LISA).
The interferometer was a model of one arm of the final LISA interferometer, but reduced from millions of kilometers long to 40 cm. The reduction did not change the accuracy of the relative position measurement, nor did it affect the various technical disturbances produced by the spacecraft surrounding the experiment, whose measurement was the main goal of LISA Pathfinder. The sensitivity to gravitational waves, however, is proportional to the arm length, and this is reduced several billion-fold compared to the planned LISA experiment.
LISA Pathfinder was an ESA-led mission. It involved European space companies and research institutes from France, Germany, Italy, The Netherlands, Spain, Switzerland, UK, and the US space agency NASA.
LISA Pathfinder science
LISA Pathfinder was a proof-of-concept mission to prove that the two masses can fly through space, untouched but shielded by the spacecraft, and maintain their relative positions to the precision needed to realise a full gravitational wave observatory planned for launch in 2035. The primary objective was to measure deviations from geodesic motion. Much of the experimentation in gravitational physics requires measuring the relative acceleration between free-falling, geodesic reference test particles.
In LISA Pathfinder, precise inter-test-mass tracking by optical interferometry allowed scientists to assess the relative acceleration of the two test masses, situated about 38 cm apart in a single spacecraft. The science of LISA Pathfinder consisted of measuring and creating an experimentally-anchored physical model for all the spurious effects – including stray forces and optical measurement limits – that limit the ability to create, and measure, the perfect constellation of free-falling test particles that would be ideal for the LISA follow-up mission.
In particular, it verified:
Drag-free attitude control of a spacecraft with two proof masses,
The feasibility of laser interferometry in the desired frequency band (which is not possible on the surface of Earth), and
The reliability and longevity of the various components—capacitive sensors, microthrusters, lasers and optics.
For the follow-up mission, LISA, the test masses will be pairs of 2 kg gold/platinum cubes housed in each of three separate spacecraft 2.5 million kilometers apart.
Spacecraft design
LISA Pathfinder was assembled by Airbus Defence and Space in Stevenage (UK), under contract to the European Space Agency. It carried a European "LISA Technology Package" comprising inertial sensors, interferometer and associated instrumentation as well as two drag-free control systems: a European one using cold gas micro-thrusters (similar to those used on Gaia), and a US-built "Disturbance Reduction System" using the European sensors and an electric propulsion system that uses ionised droplets of a colloid accelerated in an electric field. The colloid thruster (or "electrospray thruster") system was built by Busek and delivered to JPL for integration with the spacecraft.
Instrumentation
The LISA Technology Package (LTP) was integrated by Airbus Defence and Space Germany, but the instruments and components were supplied by contributing institutions across Europe. The noise rejection technical requirements on the interferometer were very stringent, which means that the physical response of the interferometer to changing environmental conditions, such as temperature, must be minimised.
Environmental influences
On the follow-up mission, eLISA, environmental factors will influence the measurements the interferometer takes. These environmental influences include stray electromagnetic fields and temperature gradients, which could be caused by the Sun heating the spacecraft unevenly, or even by warm instrumentation inside the spacecraft itself. Therefore, LISA Pathfinder was designed to find out how such environmental influences change the behaviour of the inertial sensors and the other instruments. LISA Pathfinder flew with an extensive instrument package which can measure temperature and magnetic fields at the test masses and at the optical bench. The spacecraft was even equipped to stimulate the system artificially: it carried heating elements which can warm the spacecraft's structure unevenly, causing the optical bench to distort and enabling scientists to see how the measurements change with varying temperatures.
Spacecraft operations
Mission control for LISA Pathfinder was at ESOC in Darmstadt, Germany with science and technology operations controlled from ESAC in Madrid, Spain.
Lissajous orbit
The spacecraft was first launched by Vega flight VV06 into an elliptical LEO parking orbit. From there it executed a short burn each time perigee was passed, slowly raising the apogee closer to the intended halo orbit around the Earth–Sun L1 point.
Chronology and results
The spacecraft reached its operational location in orbit around the Lagrange point L1 on 22 January 2016, where it underwent payload commissioning. The testing started on 1 March 2016. In April 2016 ESA announced that LISA Pathfinder demonstrated that the LISA mission is feasible.
On 7 June 2016, ESA presented the first results of two months' worth of science operation showing that the technology developed for a space-based gravitational wave observatory was exceeding expectations. The two cubes at the heart of the spacecraft are falling freely through space under the influence of gravity alone, unperturbed by other external forces, to a factor of 5 better than requirements for LISA Pathfinder. In February 2017, BBC News reported that the gravity probe had exceeded its performance goals.
LISA Pathfinder was deactivated on 30 June 2017.
On 5 February 2018, ESA published the final results. Precision of measurements could be improved further, beyond current goals for the future LISA mission, due to venting of residue air molecules and better understanding of disturbances.
See also
Einstein Telescope, a European gravitational wave detector
GEO600, a gravitational wave detector located in Hannover, Germany
LIGO, a gravitational wave observatory in USA
Taiji 1, a Chinese technology demonstrator for gravitational wave observation launched in 2019
Virgo interferometer, an interferometer located close to Pisa, Italy
References
External links
LISA and LISA Pathfinder's Homepage
LISA Pathfinder mission home at ESA
LISA Pathfinder at eoPortal
Max Planck Institute for Gravitational Physics (Albert Einstein Institute Hannover)
European Space Agency space probes
Space probes launched in 2015
Spacecraft using halo orbits
Spacecraft launched by Vega rockets
Interferometers
Space telescopes
Gravitational-wave telescopes
Technology demonstrations
Artificial satellites at Earth-Sun Lagrange points | LISA Pathfinder | [
"Astronomy",
"Technology",
"Engineering"
] | 1,455 | [
"Space telescopes",
"Interferometers",
"Measuring instruments"
] |
43,167,221 | https://en.wikipedia.org/wiki/Coastal%20upwelling%20of%20the%20South%20Eastern%20Arabian%20Sea | Coastal upwelling of the South Eastern Arabian Sea (SEAS) is a typical eastern boundary upwelling system (EBUS) similar to the California, Benguela, Canary, and Peru-Chile systems. Unlike those four, the SEAS upwelling system needs to be explored in a more focused manner to clearly understand the chemical and biological responses associated with this coastal process.
The coastal upwelling in the south-eastern Arabian Sea occurs seasonally. It begins in mid-spring (mid-May) along the southern tip of India and, as the season advances, it spreads northward. It is not a uniform wind-driven upwelling system, but is driven by various factors. While at Cape Comorin it can be modeled as just wind-driven, as the phenomenon advances along the west coast of India, longshore wind stresses play an increasing role, as do remotely forced effects from the Bay of Bengal, such as Kelvin and Rossby waves.
References
Physical oceanography
Aquatic ecology | Coastal upwelling of the South Eastern Arabian Sea | [
"Physics",
"Biology"
] | 199 | [
"Aquatic ecology",
"Ecosystems",
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
43,167,897 | https://en.wikipedia.org/wiki/Arrangement%20%28space%20partition%29 | In discrete geometry, an arrangement is the decomposition of the d-dimensional linear, affine, or projective space into connected cells of different dimensions, induced by a finite collection of geometric objects, which are usually of dimension one less than the dimension of the space, and often of the same type as each other, such as hyperplanes or spheres.
Definition
For a set A of objects in a d-dimensional space, the cells in the arrangement are the connected components of sets of the form
⋂_{o ∈ S} o ∖ ⋃_{o ∈ A∖S} o
for subsets S of A. That is, for each S the cells are the connected components of the points that belong to every object in S and do not belong to any other object. For instance, the cells of an arrangement of lines in the Euclidean plane are of three types (a counting sketch for lines in general position follows the list below):
Isolated points, for which S is the subset of all lines that pass through the point.
Line segments or rays, for which S is a singleton set consisting of one line. The segment or ray is a connected component of the points that belong only to that line and not to any other line of A.
Convex polygons (possibly unbounded), for which S is the empty set, and its intersection (the empty intersection) is the whole space. These polygons are the connected components of the subset of the plane formed by removing all the lines in A.
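As an added illustration (not part of the original definition), the three cell types can be counted for n lines in general position (no two parallel, no three through a common point), using standard counts that follow from Euler's formula for planar subdivisions:

```python
from math import comb

def line_arrangement_cells(n):
    """Cell counts for n lines in general position in the Euclidean plane."""
    vertices = comb(n, 2)            # every pair of lines meets in exactly one point
    edges = n * n                    # each line is cut into n segments/rays by the others
    faces = 1 + n + comb(n, 2)       # 2-dimensional cells, from Euler's formula
    return vertices, edges, faces

for n in (1, 2, 3, 4):
    print(n, line_arrangement_cells(n))
# 3 lines in general position: 3 vertices, 9 edges, 7 faces (1 bounded triangle, 6 unbounded cells)
```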
Types of arrangement
Of particular interest are the arrangements of lines and arrangements of hyperplanes.
More generally, geometers have studied arrangements of other types of curves in the plane, and of other more complicated types of surface. Arrangements in complex vector spaces have also been studied; since complex lines do not partition the complex plane into multiple connected components, the combinatorics of vertices, edges, and cells does not apply to these types of space, but it is still of interest to study their symmetries and topological properties.
Applications
An interest in the study of arrangements was driven by advances in computational geometry, where the arrangements were unifying structures for many problems. Advances in study of more complicated objects, such as algebraic surfaces, contributed to "real-world" applications, such as motion planning and computer vision.
References
Computational geometry
Discrete geometry | Arrangement (space partition) | [
"Mathematics"
] | 420 | [
"Discrete geometry",
"Computational geometry",
"Computational mathematics",
"Discrete mathematics"
] |
43,170,941 | https://en.wikipedia.org/wiki/Least-squares%20adjustment | Least-squares adjustment is a model for the solution of an overdetermined system of equations based on the principle of least squares of observation residuals. It is used extensively in the disciplines of surveying, geodesy, and photogrammetry—the field of geomatics, collectively.
Formulation
There are three forms of least squares adjustment: parametric, conditional, and combined:
In parametric adjustment, one can find an observation equation h(X) = Y relating the observations Y explicitly in terms of the parameters X (leading to the A-model below).
In conditional adjustment, there exists a condition equation g(Y) = 0 involving only the observations (leading to the B-model below) — with no parameters at all.
Finally, in a combined adjustment, both parameters and observations are involved implicitly in a mixed-model equation f(X, Y) = 0.
Clearly, parametric and conditional adjustments correspond to the more general combined case when f(X, Y) = h(X) − Y and f(X, Y) = g(Y), respectively. Yet the special cases warrant simpler solutions, as detailed below. Often in the literature, Y may be denoted ℓ.
Solution
The equalities above hold only for the estimated parameters and observations, at which f vanishes exactly. In contrast, measured observations Y and approximate parameters X0 produce a nonzero misclosure:
w = f(X0, Y).
One can proceed to a Taylor series expansion of the equations, which results in the Jacobians or design matrices: the first one, A = ∂f/∂X, and the second one, B = ∂f/∂Y.
The linearized model then reads:
A δX + B v + w = 0,
where δX are the estimated parameter corrections to the a priori values, and v are the post-fit observation residuals.
In the parametric adjustment, the second design matrix is an identity, B = −I, and the misclosure vector can be interpreted as the pre-fit residuals, w = h(X0) − Y, so the system simplifies to:
v = A δX + w,
which is in the form of ordinary least squares.
In the conditional adjustment, the first design matrix is null, A = 0.
For the more general cases, Lagrange multipliers are introduced to relate the two Jacobian matrices and transform the constrained least squares problem into an unconstrained one (albeit a larger one). In any case, their manipulation leads to the estimated parameter and observation vectors as well as their respective a posteriori covariance matrices.
Computation
Given the matrices and vectors above, their solution is found via standard least-squares methods; e.g., forming the normal matrix and applying Cholesky decomposition, applying the QR factorization directly to the Jacobian matrix, iterative methods for very large systems, etc.
Worked-out examples
Applications
Leveling, traverse, and control networks
Bundle adjustment
Triangulation, Trilateration, Triangulateration
GPS/GNSS positioning
Helmert transformation
Related concepts
Parametric adjustment is similar to most of regression analysis and coincides with the Gauss–Markov model
Combined adjustment, also known as the Gauss–Helmert model (named after the German mathematicians/geodesists C.F. Gauss and F.R. Helmert), is related to the errors-in-variables models and total least squares.
The use of a priori parameter covariance matrix is akin to Tikhonov regularization
Extensions
If rank deficiency is encountered, it can often be rectified by the inclusion of additional equations imposing constraints on the parameters and/or observations, leading to constrained least squares.
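As a sketch of this idea, the hypothetical one-dimensional levelling network below (the observed height differences are invented) has a datum defect, since only differences are measured; appending a constraint that fixes one height and solving the bordered normal-equation system removes the rank deficiency.
```python
import numpy as np

# Levelling network with a datum defect: only height differences are observed,
# so A^T A is singular until a datum constraint is appended.
A = np.array([[-1.0,  1.0,  0.0],   # h2 - h1
              [ 0.0, -1.0,  1.0],   # h3 - h2
              [ 1.0,  0.0, -1.0]])  # h1 - h3
y = np.array([1.02, 0.98, -2.03])   # invented observed differences (m)

C = np.array([[1.0, 0.0, 0.0]])     # constraint: fix h1 = 0 as the datum
d = np.array([0.0])

N = A.T @ A
K = np.block([[N, C.T],
              [C, np.zeros((1, 1))]])          # bordered (Lagrange-multiplier) system
rhs = np.concatenate([A.T @ y, d])
sol = np.linalg.solve(K, rhs)
heights, multiplier = sol[:3], sol[3:]
print(heights)                                  # adjusted heights satisfying the constraint
```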
References
Bibliography
Lecture notes and technical reports
Nico Sneeuw and Friedhelm Krum, "Adjustment theory", Geodätisches Institut, Universität Stuttgart, 2014
Krakiwsky, "A synthesis of recent advances in the method of least squares", Lecture Notes #42, Department of Geodesy and Geomatics Engineering, University of New Brunswick, 1975
Cross, P.A. "Advanced least squares applied to position-fixing", University of East London, School of Surveying, Working Paper No. 6, January 1994. First edition April 1983, reprinted with corrections January 1990. (Original Working Papers, North East London Polytechnic, Dept. of Surveying, 205 pp., 1983.)
Snow, Kyle B., Applications of Parameter Estimation and Hypothesis Testing to GPS Network Adjustments, Division of Geodetic Science, Ohio State University, 2002
Books and chapters
Friedrich Robert Helmert. Die Ausgleichsrechnung nach der Methode der kleinsten Quadrate (Adjustment computation based on the method of least squares). Leipzig: Teubner, 1872. <http://eudml.org/doc/203764>.
Reino Antero Hirvonen, "Adjustments by least squares in geodesy and photogrammetry", Ungar, New York, 261 p., 1971.
Edward M. Mikhail, Friedrich E. Ackermann, "Observations and least squares", University Press of America, 1982
Peter Vaníček and E.J. Krakiwsky, "Geodesy: The Concepts", Amsterdam: Elsevier (third ed.); chap. 12, "Least-squares solution of overdetermined models", pp. 202–213, 1986.
Gilbert Strang and Kai Borre, "Linear Algebra, Geodesy, and GPS", SIAM, 624 pages, 1997.
Paul Wolf and Bon DeWitt, "Elements of Photogrammetry with Applications in GIS", McGraw-Hill, 2000
Karl-Rudolf Koch, "Parameter Estimation and Hypothesis Testing in Linear Models", 2a ed., Springer, 2000
P.J.G. Teunissen, "Adjustment theory, an introduction", Delft Academic Press, 2000
Edward M. Mikhail, James S. Bethel, J. Chris McGlone, "Introduction to Modern Photogrammetry", Wiley, 2001
Harvey, Bruce R., "Practical least squares and statistics for surveyors", Monograph 13, Third Edition, School of Surveying and Spatial Information Systems, University of New South Wales, 2006
Huaan Fan, "Theory of Errors and Least Squares Adjustment", Royal Institute of Technology (KTH), Division of Geodesy and Geoinformatics, Stockholm, Sweden, 2010.
Charles D. Ghilani, "Adjustment Computations: Spatial Data Analysis", John Wiley & Sons, 2011
Charles D. Ghilani and Paul R. Wolf, "Elementary Surveying: An Introduction to Geomatics", 13th Edition, Prentice Hall, 2011
Erik Grafarend and Joseph Awange, "Applications of Linear and Nonlinear Models: Fixed Effects, Random Effects, and Total Least Squares", Springer, 2012
Alfred Leick, Lev Rapoport, and Dmitry Tatarnikov, "GPS Satellite Surveying", 4th Edition, John Wiley & Sons; Chapter 2, "Least-Squares Adjustments", pp. 11–79, doi:10.1002/9781119018612.ch2
A. Fotiou (2018) "A Discussion on Least Squares Adjustment with Worked Examples" In: Fotiou A., D. Rossikopoulos, eds. (2018): “Quod erat demonstrandum. In quest for the ultimate geodetic insight.” Special issue for Professor Emeritus Athanasios Dermanis. Publication of the School of Rural and Surveying Engineering, Aristotle University of Thessaloniki, 405 pages.
John Olusegun Ogundare (2018), "Understanding Least Squares Estimation and Geomatics Data Analysis", John Wiley & Sons, 720 pages.
Curve fitting
Least squares
Geodesy
Surveying
Photogrammetry | Least-squares adjustment | [
"Mathematics",
"Engineering"
] | 1,507 | [
"Applied mathematics",
"Civil engineering",
"Surveying",
"Geodesy"
] |
43,174,898 | https://en.wikipedia.org/wiki/Matter%20wave%20clock | A matter wave clock is a type of clock whose principle of operation makes use of the apparent wavelike properties of matter.
Matter waves were first proposed by Louis de Broglie and are sometimes called de Broglie waves. They form a key aspect of wave–particle duality, and experiments have since supported the idea. The wave associated with a particle of a given mass, such as an atom, has a defined frequency, and the duration of one cycle from peak to peak is sometimes called its Compton periodicity. Such a matter wave has the characteristics of a simple clock, in that it marks out fixed and equal intervals of time. The twin paradox arising from Albert Einstein's theory of relativity means that a moving particle will have a slightly different period from a stationary particle. Comparing two such particles allows the construction of a practical "Compton clock".
Matter waves as clocks
De Broglie proposed that the frequency f of a matter wave equals E/h, where E is the total energy of the particle and h is the Planck constant. For a particle at rest, the relativistic equation E = mc² allows the derivation of the Compton frequency f for a stationary massive particle, equal to mc²/h.
De Broglie also proposed that the wavelength λ for a moving particle was equal to h/p where p is the particle's momentum.
The period (one cycle of the wave) is equal to 1/f.
This precise Compton periodicity of a matter wave is said to be the necessary condition for a clock, with the implication that any such matter particle may be regarded as a fundamental clock. This proposal has been referred to as "A rock is a clock."
Applications
In his paper, "Quantum mechanics, matter waves and moving clocks", Müller has suggested that "The description of matter waves as matter-wave clocks ... has recently been applied to tests of general relativity, matter-wave experiments, the foundations of quantum mechanics, quantum space-time decoherence, the matter wave clock/mass standard, and led to a discussion on the role of the proper time in quantum mechanics. It is generally covariant and thus well-suited for use in curved space-time, e.g., gravitational waves."
Implications
In his paper, "Quantum mechanics, matter waves and moving clocks", Müller has suggested that "[The model] has also given rise to a fair amount of controversy. Within the broader context of quantum mechanics ... this description has been abandoned, in part because it could not be used to derive a relativistic quantum theory, or explain spin. The descriptions that replaced the clock picture achieve these goals, but do not motivate the concepts used. ... We shall construct a ... description of matter waves as clocks. We will thus arrive at a space-time path integral that is equivalent to the Dirac equation. This derivation shows that De Broglie's matter wave theory naturally leads to particles with spin-1/2. It relates to Feynman's search for a formula for the amplitude of a path in 3+1 space and time dimensions which is equivalent to the Dirac equation. It yields a new intuitive interpretation of the propagation of a Dirac particle and reproduces all results of standard quantum mechanics, including those supposedly at odds with it. Thus, it illuminates the role of the gravitational redshift and the proper time in quantum mechanics."
Controversy
The theoretical idea of matter waves as clocks has caused some controversy, and has attracted strong criticism.
Atom interferometry
An atom interferometer uses a small difference in waves associated with two atoms to create an observable interference pattern. Conventionally these waves are associated with the electrons orbiting the atom, but the matter wave theory suggests that the wave associated with the wave–particle duality of the atom itself may alternatively be used.
An experimental device comprises two clouds of atoms, one of which is given a small "kick" from a precisely tuned laser. This gives it a finite velocity which, according to the matter wave theory, lowers its observed frequency. The two clouds are then recombined so that their differing waves interfere, and the maximum output signal is obtained when the frequency difference is an integer number of cycles.
Experiments designed around the idea of interference between matter waves (as clocks) are claimed to have provided the most accurate validation yet of the gravitational redshift predicted by general relativity. A similar atom interferometer forms the heart of the Compton clock.
However, this claimed interpretation of the interferometry function has been criticised. One criticism is that a real Compton oscillator or matter wave does not appear in the design of any actual experiment. The matter wave interpretation is also said to be flawed.
Compton clocks
A functional timepiece designed on the basis of matter wave interferometry is called a Compton clock.
Principles of operation
The frequency of the wave associated with a massive particle, such as an atom, is too high to be used directly in a practical clock and its period and wavelength are too short. A practical device makes use of the twin paradox arising from the theory of relativity, where a moving particle ages more slowly than a stationary one. The moving particle-wave therefore has a slightly lower frequency. Using interferometry, the difference or "beat frequency" between the two frequencies can be accurately measured and this beat frequency can be used as a basis for keeping time.
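A rough numerical illustration of the orders of magnitude involved (the atom, the recoil velocity and the constants below are assumed for the example, not taken from a particular experiment) is the following Python sketch.
```python
# Order-of-magnitude sketch of the Compton-clock idea: the Compton frequency of a
# cesium atom and the tiny beat between a moving and a stationary copy of it.
h = 6.62607015e-34               # Planck constant, J s
c = 2.99792458e8                 # speed of light, m/s
m = 132.905 * 1.66053907e-27     # mass of a cesium-133 atom, kg (assumed example atom)

f_compton = m * c**2 / h
print(f"Compton frequency: {f_compton:.3e} Hz")      # roughly 3e25 Hz

v = 0.01                         # recoil velocity from a laser "kick", m/s (assumed)
# Time dilation lowers the moving atom's frequency by roughly a factor v^2 / (2 c^2).
beat = f_compton * v**2 / (2 * c**2)
print(f"Beat frequency for v = {v} m/s: {beat:.3e} Hz")
```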
Measurement of mass
The technique used in the devices can theoretically be reversed to use time to measure mass. This has been proposed as an opportunity for replacing the platinum-iridium cylinder currently used as the 1 kg reference standard.
References
Quantum mechanics
Clocks
Theory of relativity
Fringe physics | Matter wave clock | [
"Physics",
"Technology",
"Engineering"
] | 1,168 | [
"Machines",
"Theoretical physics",
"Quantum mechanics",
"Clocks",
"Measuring instruments",
"Physical systems",
"Theory of relativity"
] |
37,487,265 | https://en.wikipedia.org/wiki/Variational%20method%20%28quantum%20mechanics%29 | In quantum mechanics, the variational method is one way of finding approximations to the lowest energy eigenstate or ground state, and some excited states. This allows calculating approximate wavefunctions such as molecular orbitals. The basis for this method is the variational principle.
The method consists of choosing a "trial wavefunction" depending on one or more parameters, and finding the values of these parameters for which the expectation value of the energy is the lowest possible. The wavefunction obtained by fixing the parameters to such values is then an approximation to the ground state wavefunction, and the expectation value of the energy in that state is an upper bound to the ground state energy. The Hartree–Fock method, density matrix renormalization group, and Ritz method apply the variational method.
Description
Suppose we are given a Hilbert space and a Hermitian operator over it called the Hamiltonian H. Ignoring complications about continuous spectra, we consider the discrete spectrum of H and a basis of eigenvectors |ψ_λ⟩ (see spectral theorem for Hermitian operators for the mathematical background):
⟨ψ_λ1 | ψ_λ2⟩ = δ_λ1λ2,
where δ_λ1λ2 is the Kronecker delta,
and the |ψ_λ⟩ satisfy the eigenvalue equation
H |ψ_λ⟩ = λ |ψ_λ⟩.
Once again ignoring complications involved with a continuous spectrum of H, suppose the spectrum of H is bounded from below and that its greatest lower bound is E₀. The expectation value of H in a normalized state |ψ⟩ is then
⟨H⟩ = ⟨ψ| H |ψ⟩ = Σ_λ λ |⟨ψ_λ | ψ⟩|².
If we were to vary over all possible states with norm 1 trying to minimize the expectation value of H, the lowest value would be E₀ and the corresponding state would be the ground state, as well as an eigenstate of H. Varying over the entire Hilbert space is usually too complicated for physical calculations, and a subspace of the entire Hilbert space is chosen, parametrized by some (real) differentiable parameters α_i (i = 1, 2, ..., N). The choice of the subspace is called the ansatz. Some choices of ansatzes lead to better approximations than others, therefore the choice of ansatz is important.
Let's assume there is some overlap between the ansatz and the ground state (otherwise, it's a bad ansatz). We wish to normalize the ansatz, so we have the constraint
⟨ψ(α) | ψ(α)⟩ = 1,
and we wish to minimize
ε(α) = ⟨ψ(α)| H |ψ(α)⟩.
This, in general, is not an easy task, since we are looking for a global minimum and finding the zeroes of the partial derivatives of ε(α) over all parameters α_i is not sufficient. If ψ(α) is expressed as a linear combination of other functions (the α_i being the coefficients), as in the Ritz method, there is only one minimum and the problem is straightforward. There are other, non-linear methods, however, such as the Hartree–Fock method, that are also not characterized by a multitude of minima and are therefore convenient in calculations.
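A minimal numerical sketch of these statements, assuming a one-dimensional harmonic oscillator in units where ħ = m = ω = 1 (so the exact ground-state energy is 0.5), is given below: diagonalizing the Hamiltonian in a grid basis is a linear, Ritz-type variational calculation, and the Rayleigh quotient of any normalized trial vector, here a one-parameter Gaussian, stays at or above the lowest eigenvalue.
```python
import numpy as np

n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Kinetic energy via the three-point finite-difference stencil, plus V(x) = x^2/2.
T = (-np.eye(n, k=1) - np.eye(n, k=-1) + 2 * np.eye(n)) / (2 * dx**2)
H = T + np.diag(0.5 * x**2)

E0 = np.linalg.eigvalsh(H)[0]          # lowest eigenvalue of the discretized problem (~0.5)

def rayleigh(psi):
    psi = psi / np.sqrt(np.sum(psi**2) * dx)     # normalize the trial state
    return np.sum(psi * (H @ psi)) * dx          # expectation value of H

# One-parameter Gaussian ansatz exp(-a x^2 / 2); a = 1 is the exact ground state.
for a in (0.5, 1.0, 2.0):
    print(a, rayleigh(np.exp(-a * x**2 / 2)), ">=", E0)
```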
There is an additional complication in the calculations described. As ε tends toward E₀ in minimization calculations, there is no guarantee that the corresponding trial wavefunctions will tend to the actual wavefunction. This has been demonstrated by calculations using a modified harmonic oscillator as a model system, in which an exactly solvable system is approached using the variational method. A wavefunction different from the exact one is obtained by use of the method described above.
Although usually limited to calculations of the ground state energy, this method can be applied in certain cases to calculations of excited states as well. If the ground state wavefunction is known, either by the method of variation or by direct calculation, a subset of the Hilbert space can be chosen which is orthogonal to the ground state wavefunction.
The resulting minimum is usually not as accurate as for the ground state, as any difference between the true ground state and the approximate ground state used to define the orthogonal subspace results in a lower excited-state energy. This defect is worsened with each higher excited state.
In another formulation:
E_ground ≤ ⟨φ| H |φ⟩ / ⟨φ|φ⟩.
This holds for any trial φ since, by definition, the ground state wavefunction has the lowest energy, and any trial wavefunction will have an energy expectation value greater than or equal to it.
Proof:
The trial wavefunction φ can be expanded as a linear combination of the actual eigenfunctions of the Hamiltonian (which we assume to be normalized and orthogonal):
φ = Σ_n c_n ψ_n.
Then, to find the expectation value of the Hamiltonian:
⟨φ| H |φ⟩ = Σ_n |c_n|² E_n.
Now, the ground state energy is the lowest energy possible, i.e., E_n ≥ E_g for all n. Therefore, if the guessed wave function φ is normalized:
⟨φ| H |φ⟩ ≥ E_g Σ_n |c_n|² = E_g.
In general
For a Hamiltonian H that describes the studied system and any normalizable function Ψ with arguments appropriate for the unknown wave function of the system, we define the functional
E[Ψ] = ⟨Ψ| H |Ψ⟩ / ⟨Ψ|Ψ⟩.
The variational principle states that
E[Ψ] ≥ E₀, where E₀ is the energy of the lowest-energy eigenstate (ground state) of the Hamiltonian;
E[Ψ] = E₀ if and only if Ψ is exactly equal to the wave function of the ground state of the studied system.
The variational principle formulated above is the basis of the variational method used in quantum mechanics and quantum chemistry to find approximations to the ground state.
Another facet of variational principles in quantum mechanics is that since Ψ and Ψ* can be varied separately (a fact arising due to the complex nature of the wave function), the quantities can be varied in principle just one at a time.
Helium atom ground state
The helium atom consists of two electrons with mass m and electric charge −e, around an essentially fixed nucleus of mass M ≫ m and charge +2e. The Hamiltonian for it, neglecting the fine structure, is:
H = −(ħ²/2m)(∇₁² + ∇₂²) − (e²/4πε₀)(2/r₁ + 2/r₂ − 1/r₁₂),
where ħ is the reduced Planck constant, ε₀ is the vacuum permittivity, r_i (for i = 1, 2) is the distance of the i-th electron from the nucleus, and r₁₂ is the distance between the two electrons.
If the term V_ee = e²/(4πε₀ r₁₂), representing the repulsion between the two electrons, were excluded, the Hamiltonian would become the sum of two hydrogen-like atom Hamiltonians with nuclear charge +2e. The ground state energy would then be −8E₁ ≈ −109 eV, where E₁ ≈ 13.6 eV is the Rydberg unit of energy, and its ground state wavefunction would be the product of two wavefunctions for the ground state of hydrogen-like atoms:
ψ₀(r₁, r₂) = (Z³/(π a₀³)) e^(−Z(r₁ + r₂)/a₀),
where a₀ is the Bohr radius and Z = 2, helium's nuclear charge. The expectation value of the total Hamiltonian H (including the term V_ee) in the state described by ψ₀ will be an upper bound for its ground state energy. ⟨V_ee⟩ is (5/2)E₁ ≈ +34 eV, so ⟨H⟩ is −8E₁ + (5/2)E₁ ≈ −75 eV.
A tighter upper bound can be found by using a better trial wavefunction with 'tunable' parameters. Each electron can be thought of as seeing the nuclear charge partially "shielded" by the other electron, so we can use a trial wavefunction of the same form as ψ₀ but with an "effective" nuclear charge Z < 2. The expectation value of H in this state is
⟨H⟩(Z) = [2Z² − (27/4)Z] E₁.
This is minimal for Z = 27/16 ≈ 1.69, implying that shielding reduces the effective charge to about 1.69. Substituting this value of Z into the expression for ⟨H⟩ yields −77.5 eV, within 2% of the experimental value, −78.975 eV.
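A short script reproduces these numbers from the expression for ⟨H⟩(Z) given above (with E₁ = 13.606 eV); the use of a bounded scalar minimizer is incidental, and any one-dimensional minimization would do.
```python
from scipy.optimize import minimize_scalar

E1 = 13.605693      # Rydberg unit of energy, eV

def E(Z):
    # Variational energy for two hydrogen-like 1s orbitals of effective charge Z:
    # kinetic 2Z^2, nuclear attraction -8Z, electron-electron repulsion +5Z/4 (in units of E1).
    return (2 * Z**2 - 27 * Z / 4) * E1

res = minimize_scalar(E, bounds=(1.0, 2.0), method="bounded")
print(res.x)       # ~1.6875 = 27/16, the screened effective charge
print(E(2.0))      # ~ -74.8 eV for the unscreened trial wavefunction
print(E(res.x))    # ~ -77.5 eV, to be compared with the experimental -78.975 eV
```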
Even closer estimations of this energy have been found using more complicated trial wave functions with more parameters. This is done in physical chemistry via variational Monte Carlo.
References
Quantum chemistry
Theoretical chemistry
Computational chemistry
Computational physics
Approximations
Electronic structure methods | Variational method (quantum mechanics) | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,388 | [
"Quantum chemistry",
"Quantum mechanics",
"Computational physics",
"Theoretical chemistry",
"Electronic structure methods",
"Computational chemistry",
"Mathematical relations",
" molecular",
"nan",
"Atomic",
"Approximations",
" and optical physics"
] |
37,487,414 | https://en.wikipedia.org/wiki/Wu%E2%80%93Sprung%20potential | In mathematical physics, the Wu–Sprung potential, named after Hua Wu and Donald Sprung, is a potential function in one dimension inside a Hamiltonian with the potential defined by solving a non-linear integral equation defined by the Bohr–Sommerfeld quantization conditions involving the spectral staircase, the energies and the potential .
Here the classical turning points bound the integral, and the quantum energies of the model are taken to be the roots of the Riemann Xi function. In general, although Wu and Sprung considered only the smooth part, the potential is defined implicitly in terms of the eigenvalue staircase N(E), which is built from the Heaviside step function.
For the case of the Riemann zeros Wu and Sprung and others have shown that the potential can be written implicitly in terms of the Gamma function and zeroth-order Bessel function.
and that the density of states of this Hamiltonian is just Delsarte's formula for the Riemann zeta function, defined semiclassically as
Here they have taken the derivative of the Euler product on the critical line; they also use the Dirichlet generating function of the von Mangoldt function Λ(n).
The main idea by Wu and Sprung and others is to interpret the density of states as the distributional Delsarte's formula and then use the WKB method to evaluate the imaginary part of the zeros by using quantum mechanics.
Wu and Sprung also showed that the zeta-regularized functional determinant is the Riemann Xi-function
The main idea inside this problem is to recover the potential from spectral data, as in some inverse spectral problems; in this case the spectral data is the eigenvalue staircase, which is a quantum property of the system. The inverse of the potential then satisfies an Abel integral equation (fractional calculus), which can be immediately solved to obtain the potential.
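The quantization condition underlying this construction can be illustrated numerically. The sketch below (with ħ = m = 1, and using a harmonic oscillator rather than the Wu–Sprung potential itself) checks that the action integral between the classical turning points equals (n + 1/2)π when evaluated at an eigenvalue E_n.
```python
import numpy as np
from scipy.integrate import quad

def V(x):
    return 0.5 * x**2                       # harmonic oscillator, used only as a test case

def action(E):
    x_turn = np.sqrt(2 * E)                 # classical turning point where V(x) = E
    integrand = lambda x: np.sqrt(np.maximum(2 * (E - V(x)), 0.0))
    val, _ = quad(integrand, -x_turn, x_turn)
    return val

for n in range(4):
    E_n = n + 0.5                           # exact oscillator eigenvalues (hbar*omega = 1)
    print(n, action(E_n) / np.pi, "should equal", n + 0.5)
```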
Asymptotics
For large x, if we take only the smooth part of the eigenvalue staircase, the potential is positive and is given by an asymptotic expression valid in the limit of large x; this potential is approximately a Morse potential.
The asymptotics of the energies depend on the quantum number n through an expression involving the Lambert W function.
References
G. Sierra, A physics pathway to the Riemann hypothesis, arXiv:math-ph/1012.4264, 2010.
Rev. Mod. Phys. 2011; 83, 307–330 Colloquium: Physics of the Riemann hypothesis
Trace formula in noncommutative geometry and the zeros of the Riemann zeta function Alain Connes
Some remarks on the Wu–Sprung potential. Preliminary report Diego Dominici
http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/NTfractality.htm
Mathematical physics | Wu–Sprung potential | [
"Physics",
"Mathematics"
] | 575 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
37,490,344 | https://en.wikipedia.org/wiki/Alexandrov%20theorem | In mathematical analysis, the Alexandrov theorem, named after Aleksandr Danilovich Aleksandrov, states that if U is an open subset of Rⁿ and f: U → R is a convex function, then f has a second derivative almost everywhere.
In this context, having a second derivative at a point means having a second-order Taylor expansion at that point with a local error smaller than any quadratic.
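A precise formulation, supplied here for clarity, is that for almost every x₀ in U there exist a vector ∇f(x₀) and a symmetric matrix D²f(x₀) such that
f(x) = f(x₀) + ⟨∇f(x₀), x − x₀⟩ + ½⟨D²f(x₀)(x − x₀), x − x₀⟩ + o(|x − x₀|²)  as x → x₀.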
The result is closely related to Rademacher's theorem.
References
Theorems in measure theory | Alexandrov theorem | [
"Mathematics"
] | 95 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Theorems in measure theory",
"Mathematical analysis stubs"
] |
37,495,325 | https://en.wikipedia.org/wiki/Corrosion%20in%20ballast%20tanks | Corrosion in Ballast Tanks is the deterioration process where the surface of a ballast tank progresses from microblistering, to loss of tank coating, and finally to cracking of the tank steel itself.
“Effective corrosion control in segregated water ballast spaces is probably the single most important feature, next to the integrity of the initial design, in determining the ship’s effective life span and structural reliability,” said Alan Gavin, Germanischer Lloyd's Principal surveyor.
Throughout the years the merchant fleet has become increasingly aware of the importance of avoiding corrosion in ballast tanks.
Factors influencing corrosion in ballast tanks
Epoxy and modified epoxy are standard coatings used to provide protective barriers to corrosion in ballast tanks. Exposed, unprotected steel will corrode much more rapidly than steel covered with this protective layer. Many ships also use sacrificial anodes or an impressed current for additional protection. Empty ballast tanks corrode faster than fully immersed areas because of the thin, electrically conducting moisture film covering them.
The main factors influencing the rate of corrosion are diffusion, temperature, conductivity, type of ions, pH, and electrochemical corrosion potential.
Regions of a ballast tank
Ballast tanks do not corrode uniformly throughout the tank. Each region behaves distinctively, according to its electrochemical loading. The differences can especially be seen in empty ballast tanks: the upper sections usually corrode, but the lower sections will blister.
A ballast tank has three distinct sections: 1) the upper section, 2) the mid or "boottop" area, and 3) the "double bottom" or lower wing sections. The upper regions are constantly affected by weather. This area experiences a high degree of thermal cycling and mechanical damage through vibration. It tends to undergo anodic oxidation more rapidly than other sections and will weaken more rapidly. This ullage or headspace area contains more oxygen, which speeds atmospheric corrosion, as evidenced by the appearance of rust scales.
The midsection corrodes more slowly than the upper or bottom sections of the tank.
Double bottoms are prone to cathodic blistering. Temperatures in this area are much lower due to the cooling of the sea. If this extremely cathodic region is placed close to an anodic source (e.g. a corroding ballast pipe), cathodic blistering may occur especially where the epoxy coating is relatively new. Mud retained in ballast water can lead to microbial corrosion.
Marine regulations
Many maritime accidents have been caused by corrosion, and this has led to stringent regulations concerning protective coatings for ballast tanks. The Performance Standard for Protective Coatings (PSPC) for ballast tank coatings became effective in 2008. It specifies how protective coatings should be applied during vessel construction, with the intention of giving a coating a 15-year service life. Additional regulations, such as those established by the International Convention for the Control and Management of Ships' Ballast Water & Sediments (SBWS), sought to avoid introducing invasive species throughout the world through ships' ballast tanks. The methods used to prevent these invasive species from surviving in ballast tanks, however, greatly increased the rate of corrosion. Therefore, ongoing research attempts to find water treatment systems that kill invasive species while not having a destructive effect on the ballast tank coatings. The introduction of double-hulled tankers meant that more ballast tank area had to be coated, and therefore a greater capital investment for ship owners. With the onset of OPA 90 and later the amendments to MARPOL Annex I, single-hull tankers (without an alternative method) have essentially been phased out.
Modern double-hull tankers, with their fully "segregated ballast tanks", pose another problem. Empty tanks act as insulation from the cold sea and allow the warm cargo areas to retain their heat longer. Corrosion rates increase with differences in temperature. Consequently, the cargo side of the ballast tank corrodes more quickly than it did with single-hull tankers.
See also
Corrosion engineering
References
The Original Ballast Tank Rust Inhibitor "Float Coat" (Military Specification: MIL-R-21006) - Rust & Corrosion Inhibitor
Further reading
Corrosion for students of science and engineering Trethewey and Chamberlain ISBN 0-58-245089-6
Brett CMA, Brett AMO, ELECTROCHEMISTRY, Principles, methods, and applications, Oxford University Press, (1993)
Corrosion - 2nd Edition (elsevier.com) Volume 1and 2; Editor: L L Shreir
A.W. Peabody, Peabody's Control of Pipeline Corrosion, 2nd Ed., 2001, NACE International.
Ashworth V., Corrosion Vol. 2, 3rd Ed., 1994,
Baeckmann, Schwenck & Prinz, Handbook of Cathodic Corrosion Protection, 3rd Edition 1997.
Scherer, L. F., Oil and Gas Journal, (1939)
Roberge, Pierre R, Handbook of Corrosion Engineering 1999
EN 12473:2000 - General principles of cathodic protection in sea water
Corrosion
Shipbuilding | Corrosion in ballast tanks | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,018 | [
"Metallurgy",
"Corrosion",
"Shipbuilding",
"Electrochemistry",
"Marine engineering",
"Materials degradation"
] |
34,972,710 | https://en.wikipedia.org/wiki/Perdido%20%28oil%20platform%29 | Perdido (Spanish for lost) is the deepest floating oil platform in the world at a water depth of about 2,450 meters (8,040 feet), operated by the Shell Oil Company in the Gulf of Mexico. The platform is located in the Perdido fold belt, which is a rich discovery of crude oil and natural gas. The Perdido spar began production in 2010 and its peak production is 100,000 barrels of oil equivalent and 200 million cubic feet of gas per day.
Construction and assembly
The spar and the topsides of the Perdido were constructed separately and then assembled in its final position in the Gulf of Mexico.
The Perdido's hull or spar was constructed by Technip in Pori, Finland. A barge shipped the 22,000 tonne spar 13,200 kilometres (8,200 miles) from the Baltic Sea to the Gulf of Mexico. After floating the spar, it was towed to its final home above the Alaminos Canyon, about 320 kilometres (200 miles) from the shore. The spar was rotated by the Balder from a horizontal to a vertical floating position by pumping water through hoses attached to the spar. It was then anchored by the Balder to piles in the seafloor.
The platform has three decks or topsides which support the oil and gas processing units, a drilling rig and living quarters for the workers. The topsides were designed by Alliance Engineering and constructed by Kiewit Offshore in Corpus Christi, Texas. The temperature difference between Finland and Texas posed a challenge in assembling the pieces as the components built in the cold of northern Europe expand in the heat of the Gulf of Mexico. Computer-guided lasers marked out the measurements to ensure precision. After the decks were constructed, in March 2009 the Thialf lifted the 9,500 tonne topsides onto four posts on the spar and slotted it into position.
The Perdido has the ability to adjust the tension of its mooring cables, which allows it to move around within an area about the size of a soccer field. This is used to position the platform, if needed, directly above one of the 22 wellheads on the seafloor.
Operation
Operated by Shell, with JV partners Chevron (37.5%) and BP (27.5%), the spar acts as a hub for and enables development of three fields Great White, Tobago, and Silvertip. The oil and gas fields beneath the platform lie in a geological formation holding resources estimated at 3–15 billion barrels of oil equivalent according to a report by the BSEE, formerly known as the MMS. At peak production, Perdido processes 100,000 barrels of oil equivalent a day, and 200 million cubic feet of gas.
Perdido extracts oil from 35 subsea wells, of which 22 lie beneath the spar, while 13 wells are located eight miles west of the platform. These are connected via a 44 kilometre (27 mile) network of pipelines on the ocean floor to the manifold below the platform, from where the oil is pumped upwards in five flexible pipes called risers. This complicated subsea installation is needed because the reservoir pressure is rather low, which means the transport from the seafloor to the ocean surface needs pumps, the number of which had to be minimized.
A workforce of 172 people keeps it up and running. They work in 12-hour shifts for two weeks, followed by two weeks off back on land.
Safety
The platform has extensive safety equipment to protect workers in this remote location.
It has the largest rescue boat used on any Shell facility, which has room for 24 people. The living quarters are blast-resistant. Perdido's helipad can accommodate two Sikorsky S-92 helicopters that can carry 19 passengers each.
Notes
References
Inside Perdido, Shell.com
Assembling one of the deepest offshore hubs, Shell.com
Living on a platform, Shell.com
External links
Excell, Jon, Shell's Perdido platform will be world's deepest The Engineer, 2 November 2009
Clanton, Brett, Shells Perdido blazes a deep water trail. Houston Chronicle, April 18, 2010
Fahey, Jonathan, Deepwater oil rig work takes a certain attitude, NBC news, 1/3/2012
Information on Perdido published on the official Shell Global website, retrieved 28/03/2013
Scale diagram in the Skyscraper page
Oil platforms
Structural engineering
Petroleum production
2009 establishments in the United States | Perdido (oil platform) | [
"Chemistry",
"Engineering"
] | 921 | [
"Oil platforms",
"Structural engineering",
"Petroleum technology",
"Construction",
"Civil engineering",
"Natural gas technology"
] |
34,983,327 | https://en.wikipedia.org/wiki/Fermi%20arc | In the field of unconventional superconductivity, a Fermi arc is a phenomenon visible in the pseudogap state of a superconductor. Seen in momentum space, part of the space exhibits a gap in the density of states, like in a superconductor. This starts at the antinodal points, and spreads through momentum space when lowering the temperature until everywhere is gapped and the sample is superconducting. The area in momentum space that remains ungapped is called the Fermi arc.
Fermi arcs also appear in some materials with topological properties, such as Weyl semimetals, where they represent a surface projection of a two-dimensional Fermi contour and terminate at the projections of the Weyl fermion nodes onto the surface.
See also
Fermi surface
References
Superconductivity | Fermi arc | [
"Physics",
"Materials_science",
"Engineering"
] | 170 | [
"Materials science stubs",
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Condensed matter stubs",
"Electrical resistance and conductance"
] |
34,984,079 | https://en.wikipedia.org/wiki/Flat%20spot%20%28reflection%20seismology%29 | In reflection seismology, a flat spot is a seismic attribute anomaly that appears as a horizontal reflector cutting across the stratigraphy elsewhere present on the seismic image. Its appearance can indicate the presence of hydrocarbons. Therefore, it is known as a direct hydrocarbon indicator and is used by geophysicists in hydrocarbon exploration.
Theory
A flat spot can result from the increase in acoustic impedance when a gas-filled porous rock (with a lower acoustic impedance) overlies a liquid-filled porous rock (with a higher acoustic impedance). It may stand out on a seismic image because it is flat and will contrast with surrounding dipping reflections.
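As a numerical illustration (the velocities and densities below are assumed, rounded textbook-style values, not data from a particular field), the normal-incidence reflection coefficient R = (Z₂ − Z₁)/(Z₂ + Z₁) is strongly negative at the top of a gas sand and positive at the gas–water contact, which produces the flat event.
```python
# Illustrative normal-incidence reflection coefficients around a gas-water contact.
def impedance(vp, rho):
    return vp * rho                      # acoustic impedance = velocity * density

def reflection_coefficient(z_upper, z_lower):
    return (z_lower - z_upper) / (z_lower + z_upper)

shale      = impedance(2800.0, 2400.0)   # overlying seal (assumed values)
gas_sand   = impedance(2200.0, 2000.0)   # gas-filled porous sand, low impedance
brine_sand = impedance(2600.0, 2200.0)   # the same sand, water-filled, higher impedance

print("top of gas sand  :", reflection_coefficient(shale, gas_sand))      # negative (bright spot)
print("gas-water contact:", reflection_coefficient(gas_sand, brine_sand)) # positive, flat event
```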
Caution
There are a number of other possible reasons for there being a flat spot on a seismic image. It could be representative of a mineralogical change in the subsurface or an unresolved shallower multiple. Additionally, the interpretation of a flat spot should be attempted after depth conversion to confirm that the anomaly is actually flat.
See also
Bright spot
Seismic attribute
Reflection seismology
References
Seismology measurement
Petroleum geology | Flat spot (reflection seismology) | [
"Chemistry"
] | 221 | [
"Petroleum",
"Petroleum geology"
] |
34,984,373 | https://en.wikipedia.org/wiki/Aceglutamide | Aceglutamide (brand name Neuramina), or aceglutamide aluminium (brand name Glumal), also known as acetylglutamine, is a psychostimulant, nootropic, and antiulcer agent that is marketed in Spain and Japan. It is an acetylated form of the amino acid L-glutamine, the precursor of glutamate in the body and brain. Aceglutamide functions as a prodrug to glutamine with improved potency and stability.
Aceglutamide is used as a psychostimulant and nootropic, while aceglutamide aluminium is used in the treatment of ulcers. Aceglutamide can also be used as a liquid-stable source of glutamine to prevent damage from protein energy malnutrition. The drug has shown neuroprotective effects in an animal model of cerebral ischemia.
See also
Aceburic acid
Aceturic acid
N-Acetylglutamic acid
References
Acetamides
Alpha-Amino acids
Amino acid derivatives
Drugs acting on the gastrointestinal system and metabolism
Neurotransmitter precursors
Nootropics
Prodrugs
Stimulants | Aceglutamide | [
"Chemistry"
] | 263 | [
"Chemicals in medicine",
"Prodrugs"
] |
34,984,725 | https://en.wikipedia.org/wiki/AISoy1 | AISoy1, developed by the Spanish company AISoy Robotics, is a pet robot considered to be one of the first emotional-learning robots for the consumer market. Its software platform allows it to interpret stimuli from its sensor network in order to gain information from its environment and make decisions based on logical and emotional criteria. Unlike previously developed robots, AISoy1 does not have a single fixed collection of programmed answers; its behavior is dynamic and its responses are unpredictable. The dialogue system and the advanced recognition system allow it to interact with humans as well as other robots.
Features
The robot is based on the Linux operating system with a Texas Instruments OMAP 3503 (ARM Cortex-A8) processor at 600 MHz. It has 1 GB of NAND flash memory, 256 MB of mobile DDR SDRAM, and a microSD card slot to increase its memory.
AISoy1 has various sensors, including temperature, 3D orientation, environmental brightness, touch and force sensors. It is equipped with a radio communication module as well as an integrated 1 Mpx camera, allowing it to visually recognize users.
Human Interaction
The AISoy1 robots develop a unique personality based on their past experiences with the user. As they develop a relationship with their user, they improve their speech ability and feeling capacities. They are able to display up to 14 different emotional states.
AISoy1 can be commanded to perform different activities through voice commands, such as engaging in games, playing music, or saving information. Users wanting to extend the functionality of AISoy1 can do so through the platform DIA, which allows quick creation of programs through a graphical tool. More advanced users can integrate hardware and develop complex programs through the SDK from AISoy.
References
External links
Official Website of AISoy Robotics (Spanish and English).
Social robots
Robotic animals
2010 robots
Robots of Spain | AISoy1 | [
"Technology",
"Biology"
] | 374 | [
"Social robots",
"Animals",
"Computing and society",
"Robotic animals"
] |
24,794,048 | https://en.wikipedia.org/wiki/Gravitoelectromagnetism | Gravitoelectromagnetism, abbreviated GEM, refers to a set of formal analogies between the equations for electromagnetism and relativistic gravitation; specifically: between Maxwell's field equations and an approximation, valid under certain conditions, to the Einstein field equations for general relativity. Gravitomagnetism is a widely used term referring specifically to the kinetic effects of gravity, in analogy to the magnetic effects of moving electric charge. The most common version of GEM is valid only far from isolated sources, and for slowly moving test particles.
The analogy, together with equations differing only by some small factors, was first published in 1893, before general relativity, by Oliver Heaviside as a separate theory expanding Newton's law of universal gravitation.
Background
This approximate reformulation of gravitation as described by general relativity in the weak field limit makes an apparent field appear in a frame of reference different from that of a freely moving inertial body. This apparent field may be described by two components that act respectively like the electric and magnetic fields of electromagnetism, and by analogy these are called the gravitoelectric and gravitomagnetic fields: they arise around a mass in the same way that electric and magnetic fields arise around a moving electric charge. The main consequence of the gravitomagnetic field, or velocity-dependent acceleration, is that a moving object near a massive, rotating object will experience acceleration that deviates from that predicted by a purely Newtonian gravity (gravitoelectric) field. More subtle predictions, such as induced rotation of a falling object and precession of a spinning object, are among the last basic predictions of general relativity to be directly tested.
Indirect validations of gravitomagnetic effects have been derived from analyses of relativistic jets. Roger Penrose had proposed a mechanism that relies on frame-dragging-related effects for extracting energy and momentum from rotating black holes. Reva Kay Williams, University of Florida, developed a rigorous proof that validated Penrose's mechanism. Her model showed how the Lense–Thirring effect could account for the observed high energies and luminosities of quasars and active galactic nuclei; the collimated jets about their polar axis; and the asymmetrical jets (relative to the orbital plane). All of those observed properties could be explained in terms of gravitomagnetic effects. Williams's application of Penrose's mechanism can be applied to black holes of any size. Relativistic jets can serve as the largest and brightest form of validations for gravitomagnetism.
A group at Stanford University is currently analyzing data from the first direct test of GEM, the Gravity Probe B satellite experiment, to see whether they are consistent with gravitomagnetism. The Apache Point Observatory Lunar Laser-ranging Operation also plans to observe gravitomagnetism effects.
Equations
According to general relativity, the gravitational field produced by a rotating object (or any rotating mass–energy) can, in a particular limiting case, be described by equations that have the same form as in classical electromagnetism. Starting from the basic equation of general relativity, the Einstein field equation, and assuming a weak gravitational field or reasonably flat spacetime, the gravitational analogs to Maxwell's equations for electromagnetism, called the "GEM equations", can be derived. GEM equations compared to Maxwell's equations are:
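In one commonly quoted convention (numerical factors of 2 and 4 appear in other conventions, and no single scaling makes the analogy exact; see the section on scaling of fields below), the GEM equations and their Maxwell counterparts read:
∇ · E_g = −4πG ρ_g      ∇ · E = ρ/ε₀
∇ · B_g = 0      ∇ · B = 0
∇ × E_g = −∂B_g/∂t      ∇ × E = −∂B/∂t
∇ × B_g = −(4πG/c²) J_g + (1/c²) ∂E_g/∂t      ∇ × B = J/(ε₀c²) + (1/c²) ∂E/∂t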
where:
Eg is the gravitoelectric field (conventional gravitational field), with SI unit m⋅s−2
E is the electric field
Bg is the gravitomagnetic field, with SI unit s−1
B is the magnetic field
ρg is mass density, with SI unit kg⋅m−3
ρ is charge density
Jg is mass current density or mass flux (Jg = ρgvρ, where vρ is the velocity of the mass flow), with SI unit kg⋅m−2⋅s−1
J is electric current density
G is the gravitational constant
ε0 is the vacuum permittivity
c is both the speed of propagation of gravity and the speed of light.
Potentials
Faraday's law of induction (third line of the table) and the Gaussian law for the gravitomagnetic field (second line of the table) can be solved by the definition of a gravitational scalar potential φ_g and a vector potential A_g according to
E_g = −∇φ_g − ∂A_g/∂t
and
B_g = ∇ × A_g.
Inserting these four potentials into the Gaussian law for the gravitational field (first line of the table) and Ampère's circuital law (fourth line of the table), and applying the Lorenz gauge, the following inhomogeneous wave equations are obtained:
For a stationary situation (time derivatives vanishing), the Poisson equation of classical gravitation theory is obtained. In a vacuum (no mass density or mass current), a wave equation is obtained under non-stationary conditions. GEM therefore predicts the existence of gravitational waves. In this way GEM can be regarded as a generalization of Newton's gravitation theory.
The wave equation for the gravitomagnetic potential can also be solved for a rotating spherical body (which is a stationary case) leading to gravitomagnetic moments.
Lorentz force
For a test particle whose mass m is "small", in a stationary system, the net (Lorentz) force acting on it due to a GEM field is described by the following GEM analog to the Lorentz force equation:
where:
v is the velocity of the test particle
m is the mass of the test particle
q is the electric charge of the test particle.
Poynting vector
The GEM Poynting vector compared to the electromagnetic Poynting vector is given by:
Scaling of fields
The literature does not adopt a consistent scaling for the gravitoelectric and gravitomagnetic fields, making comparison tricky. For example, to obtain agreement with Mashhoon's writings, all instances of Bg in the GEM equations must be multiplied by − and Eg by −1. These factors variously modify the analogues of the equations for the Lorentz force. There is no scaling choice that allows all the GEM and EM equations to be perfectly analogous. The discrepancy in the factors arises because the source of the gravitational field is the second order stress–energy tensor, as opposed to the source of the electromagnetic field being the first order four-current tensor. This difference becomes clearer when one compares non-invariance of relativistic mass to electric charge invariance. This can be traced back to the spin-2 character of the gravitational field, in contrast to the electromagnetism being a spin-1 field. (See Relativistic wave equations for more on "spin-1" and "spin-2" fields).
Higher-order effects
Some higher-order gravitomagnetic effects can reproduce effects reminiscent of the interactions of more conventional polarized charges. For instance, if two wheels are spun on a common axis, the mutual gravitational attraction between the two wheels will be greater if they spin in opposite directions than in the same direction. This can be expressed as an attractive or repulsive gravitomagnetic component.
Gravitomagnetic arguments also predict that a flexible or fluid toroidal mass undergoing minor axis rotational acceleration (accelerating "smoke ring" rotation) will tend to pull matter through the throat (a case of rotational frame dragging, acting through the throat). In theory, this configuration might be used for accelerating objects (through the throat) without such objects experiencing any g-forces.
Consider a toroidal mass with two degrees of rotation (both major axis and minor-axis spin, both turning inside out and revolving). This represents a "special case" in which gravitomagnetic effects generate a chiral corkscrew-like gravitational field around the object. The reaction forces to dragging at the inner and outer equators would normally be expected to be equal and opposite in magnitude and direction respectively in the simpler case involving only minor-axis spin. When both rotations are applied simultaneously, these two sets of reaction forces can be said to occur at different depths in a radial Coriolis field that extends across the rotating torus, making it more difficult to establish that cancellation is complete.
Modelling this complex behaviour as a curved spacetime problem has yet to be done and is believed to be very difficult.
Gravitomagnetic fields of astronomical objects
A rotating spherical body with a homogeneous density distribution produces a stationary gravitomagnetic potential, which is described by:
Due to the body's angular velocity ω, the velocity inside the body can be described as v = ω × r. The wave equation for the gravitomagnetic potential therefore has to be solved to obtain A_g. The analytical solution outside of the body is (see for example):
where:
is the angular momentum vector;
is the moment of inertia of a ball-shaped body (see: List of moments of inertia);
is the angular velocity;
m is the mass;
R is the radius;
T is the rotational period.
The formula for the gravitomagnetic field Bg can now be obtained by:
It is exactly half of the Lense–Thirring precession rate. This suggests that the gravitomagnetic analog of the g-factor is two. This factor of two can be explained completely analogously to the electron's g-factor by taking into account relativistic calculations. At the equatorial plane, r and L are perpendicular, so their dot product vanishes, and this formula reduces to:
Gravitational waves have equal gravitomagnetic and gravitoelectric components.
Earth
Therefore, the magnitude of Earth's gravitomagnetic field at its equator is:
where g is Earth's gravity. The field direction coincides with the angular momentum direction, i.e. north.
From this calculation it follows that the strength of the Earth's equatorial gravitomagnetic field is about . Such a field is extremely weak and requires extremely sensitive measurements to be detected. One experiment to measure such field was the Gravity Probe B mission.
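A rough order-of-magnitude check of this statement (the exact prefactor depends on the sign and scaling convention adopted for B_g, so only the order of magnitude is meaningful here) can be scripted as follows.
```python
# Order-of-magnitude estimate of Earth's surface gravitomagnetic field, B_g ~ G*J/(c^2 R^3).
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M = 5.972e24        # kg
R = 6.371e6         # m
T = 86164.0         # sidereal day, s

omega = 2 * math.pi / T
J = 0.4 * M * R**2 * omega        # angular momentum of a uniform sphere, (2/5) M R^2 omega

B_g = G * J / (c**2 * R**3)
print(f"B_g ~ {B_g:.1e} s^-1")    # of order 1e-14 s^-1
```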
Pulsar
If the preceding formula is used with the pulsar PSR J1748-2446ad (which rotates 716 times per second), assuming a radius of 16 km and a mass of two solar masses, then
equals about 166 Hz. This would be easy to notice. However, the pulsar is spinning at a quarter of the speed of light at the equator, and its radius is only three times its Schwarzschild radius. When such fast motion and such strong gravitational fields exist in a system, the simplified approach of separating gravitomagnetic and gravitoelectric forces can be applied only as a very rough approximation.
Lack of invariance
While Maxwell's equations are invariant under Lorentz transformations, the GEM equations are not. The fact that ρg and jg do not form a four-vector (instead they are merely a part of the stress–energy tensor) is the basis of this difference.
Although GEM may hold approximately in two different reference frames connected by a Lorentz boost, there is no way to calculate the GEM variables of one such frame from the GEM variables of the other, unlike the situation with the variables of electromagnetism. Indeed, their predictions (about what motion is free fall) will probably conflict with each other.
Note that the GEM equations are invariant under translations and spatial rotations, just not under boosts and more general curvilinear transformations. Maxwell's equations can be formulated in a way that makes them invariant under all of these coordinate transformations.
See also
Anti-gravity
Artificial gravity
Frame-dragging
Geodetic effect
Gravitational radiation
Gravity Probe B
Kaluza–Klein theory
Linearized gravity
Modified Newtonian dynamics
Non-Relativistic Gravitational Fields
Speed of gravity § Electrodynamical analogies
Stationary spacetime
References
Further reading
Books
Papers
in
External links
Gravity Probe B: Testing Einstein's Universe
Gyroscopic Superconducting Gravitomagnetic Effects news on tentative result of European Space Agency (esa) research
In Search of Gravitomagnetism , NASA, 20 April 2004.
Gravitomagnetic London Moment – New test of General Relativity?
Measurement of Gravitomagnetic and Acceleration Fields Around Rotating Superconductors M. Tajmar, et al., 17 October 2006.
Test of the Lense–Thirring effect with the MGS Mars probe, New Scientist, January 2007.
General relativity
Effects of gravity
Tests of general relativity | Gravitoelectromagnetism | [
"Physics"
] | 2,565 | [
"General relativity",
"Theory of relativity"
] |
24,801,788 | https://en.wikipedia.org/wiki/Valve%20leakage | Valve leakage refers to flow through a valve which is set in the 'off' state.
The importance of valve leakage depends on what the valve is controlling. For example, a dripping tap is less significant than a leak from a six-inch pipe carrying high-pressure radioactive steam.
In the United States, the American National Standards Institute specifies six different leakage classes, with "leakage" defined in terms of the full open valve capacity:
Class I, or 'dust-tight' valves, are intended to work but have not been tested
Class II valves have no more than 0.5% leakage at a specified air test pressure (or the operating pressure, if less) at the operating temperature
Class III valves have no more than 0.1% leakage under those conditions; this may require soft valve seats, or lapped metal surfaces
Class IV valves have no more than 0.01% leakage under those conditions; this tends to require multiple graphite piston rings or a single Teflon piston ring, and lapped metal seats.
Class V valves leak less than a specified volume of water (in cubic metres per second, per bar of pressure differential, per millimetre of port diameter) when tested at the service pressure.
Class VI valves are slightly different in that they are required (at a specified test pressure or the operating pressure, whichever is less) to have less than a specified leakage rate in millilitres of air per minute:
References
Valves | Valve leakage | [
"Physics",
"Chemistry"
] | 288 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
24,803,004 | https://en.wikipedia.org/wiki/Smaart | Smaart (System Measurement Acoustical Analysis in Real Time) is a suite of audio and acoustical measurements and instrumentation software tools introduced in 1996 by JBL's professional audio division. It is designed to help the live sound engineer optimize sound reinforcement systems before public performance and actively monitor acoustical parameters in real time while an audio system is in use. Most earlier analysis systems required specific test signals sent through the sound system, ones that would be unpleasant for the audience to hear. Smaart is a source-independent analyzer and therefore will work effectively with a variety of test signals including speech or music.
The product has been known as JBL-SMAART, SIA-SMAART Pro, EAW SMAART, and SmaartLive. As of 2008 the product has been branded as simply Smaart. An acoustician version has been offered as Smaart Acoustic Tools, however as of Smaart v7.4, Acoustic Tools have been included within the Impulse Response mode of Smaart. A standalone sound pressure level monitoring-only version called Smaart SPL was released in 2020.
Smaart is a real-time single- and dual-channel fast Fourier transform (FFT) analyzer. Smaart has two modes: Real-Time mode and Impulse Response mode. Real-Time mode views include single-channel Spectrum and dual-channel Transfer Function measurements to display RTA, Spectrograph, and Transfer Function (Live IR, Phase, Coherence, Magnitude) measurements. Impulse Response mode will display time-domain graphs such as Lin (Linear), Log (Logarithmic), and ETC (Energy Time Curve), as well as Frequency, Spectrograph, and Histogram graphs. Impulse Response mode also includes a suite of acoustical intelligibility criteria such as STI, STIPA, Clarity, RT60, EDT, etc.
Smaart has been licensed and owned by several companies since JBL and is currently owned and developed by Rational Acoustics. First written as a native Windows 3.1 application to work within Windows 95 on IBM PC–compatible computers, in 2006 a version was introduced that was compatible with both Windows and Apple Macintosh operating systems. Smaart is currently in its 8th version.
Use
Smaart is based on real-time fast Fourier transform (FFT) analysis, including dual-FFT audio signal comparison, called "transfer function", and single-FFT spectrum analyzer. It includes maximum length sequence (MLS) analysis as a choice for impulse response, for the measurement of room acoustics. The FFT implementation of Smaart includes a proprietary multi-time window (MTW) selection in which the FFT, rather than being a fixed length, is made increasingly shorter as the frequency increases. This feature allows the software to 'ignore' later signal reflections from walls and other surfaces, increasing in coherence as the audio frequency increases.
The latest version of Smaart 8 runs under Windows 7 or newer, and Mac OSX 10.7 or newer, including 32- and 64-bit versions. A computer having a dual-core processor with a clock rate of at least 2 GHz is recommended. Smaart can be set to sample rates of 44.1 kHz, 48 kHz or 96 kHz, and to bit depths of 16 or 24. The software works with computer audio protocols ASIO, Core Audio, WAV or WDM audio drivers.
Transfer function
Smaart's transfer function requires a stereo input to the computer because it analyzes two channels of audio signal. Using its dual-FFT mode, Smaart compares one channel with the other to show the difference. This is used by live sound engineers to set up concert sound systems before a show and to monitor and adjust these systems during the performance. The first channel of audio undergoing analysis is connected directly from one of the main outputs of the mixing console and the second channel is connected to a microphone placed in the audience listening area, usually an omnidirectional test microphone with a flat, neutral pickup characteristic. The direct mixing console audio output is compared with the microphone input to determine how the sound is changed by the sound system elements such as loudspeakers and amplifiers, and by the room acoustics indoors or by the weather conditions and acoustic environment outdoors. Smaart displays the difference between the intended sound from the mixer and the received sound at the microphone, and this real-time display informs the audio engineer's decisions regarding delay times, equalization and other sound system adjustment parameters.
Although pink noise is a traditional choice for test signal, Smaart is a source-independent analyzer, which means that it does not rely on a specific test signal to produce measurement data. Pink noise is still in common usage because its energy distribution allows for quick measurement acquisition, but music or another broadband test signal can be used instead.
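The dual-FFT comparison itself is not specific to Smaart. The sketch below (a toy low-pass "system" with delay and noise, written with SciPy, and not Smaart's internal algorithm) computes the same kind of transfer-function magnitude, phase, and coherence data that Smaart's Transfer Function mode displays from a reference and a measurement signal.
```python
import numpy as np
from scipy import signal

fs = 48000
rng = np.random.default_rng(0)
ref = rng.standard_normal(fs * 10)          # 10 s of white test noise (Smaart often uses pink)

# Pretend loudspeaker/room response: a low-pass filter, ~2 ms of delay, and noise.
b, a = signal.butter(2, 2000, btype="low", fs=fs)
meas = signal.lfilter(b, a, ref)
meas = np.roll(meas, 96)
meas = meas + 0.05 * rng.standard_normal(meas.size)

nperseg = 4096
f, Sxx = signal.welch(ref, fs=fs, nperseg=nperseg)
f, Sxy = signal.csd(ref, meas, fs=fs, nperseg=nperseg)
H = Sxy / Sxx                                # H1 transfer-function estimate
f, coh = signal.coherence(ref, meas, fs=fs, nperseg=nperseg)

magnitude_db = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))
print(magnitude_db[:5], phase_deg[:5], coh[:5])
```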
Transfer function measurements can also be used to examine the frequency response of audio equipment, including individual amplifiers, loudspeakers and digital signal processors such as audio crossovers and equalizers. It can be used to compare a known neutral-response test microphone with another microphone in order to better understand its frequency response and, by changing the angle of the microphone under test, its polar response.
Transfer function measurements can be used to adjust audio crossover settings for multi-way loudspeakers; similarly, they can be used to adjust only the subwoofer-to-top box crossover characteristics in a sound system where the main, non-subwoofer loudspeakers are flown or rigged but the subwoofers are placed on the ground. One of the traces in the Smaart display shows phase response. To properly align adjacent frequency bands through a crossover, the two phase responses should be adjusted until they are seen in Smaart to be parallel through the crossover frequency.
The transfer function measurement can be used to measure frequency-related electrical impedance, one of the electrical characteristics of dynamic loudspeakers. Grateful Dead sound system engineer "Dr. Don" Pearson worked out the method in 2000, using Smaart to compare the voltage drop through a simple resistor between a loudspeaker and a random noise generator.
Real-time analyzer
In spectrograph view, Smaart displays a real-time spectrum analysis, showing the relative strength of audio frequencies for one audio signal. Needing only one channel of audio input, this capability can be used for a variety of purposes. With Smaart's input connected to the mixing console's pre-fade listen (PFL) or cue bus, spectrograph view can display the frequency response of individual channels, several selected channels, or various mixes. Spectrograph mode can be used to display room resonances: pink noise is applied to the room's sound system, and the signal from a test microphone in the room is displayed on Smaart. When the pink noise is muted, the display shows the lingering tails of noise frequencies that are resonating.
Impulse response
Smaart can be used to find the delay time between two signals, in which case the computer needs two input channels and the software uses a transfer function measurement engine. Called "Delay Locator", the software calculates the impulse responses of two continuous audio signals, finding the similarities in the signals and measuring how much time has elapsed between them. This is used to set delay times for delay towers at large outdoor sound systems, and it is used to set delay times for other loudspeaker zones in smaller systems. Veteran Van Halen touring sound engineer Jim Yakabuski calls such delay locator programs as Smaart a "must have" item, useful for quickly aligning sound system elements when setup time is limited.
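The core of a delay locator is a cross-correlation between the two captures. The Python sketch below illustrates the idea with an invented 50 ms delay; it is not Smaart's implementation.

```python
import numpy as np
from scipy.signal import correlate

fs = 48_000
console = np.random.randn(fs)                    # 1 s of the reference (console) signal
delay_samples = 2_400                            # 50 ms of propagation, invented value
mic = 0.3 * np.concatenate([np.zeros(delay_samples), console[:-delay_samples]])

corr = correlate(mic, console, mode="full")      # similarity at every possible time offset
lag = int(np.argmax(corr)) - (len(console) - 1)  # samples by which the mic lags the console
print(f"Estimated delay: {1000.0 * lag / fs:.1f} ms")   # prints roughly 50.0 ms here
```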
Market
Smaart is primarily aimed at sound system operators to assist them in setting up and tuning sound systems. Other users include audio equipment designers and architectural acousticians. Author and sound engineer Bob McCarthy wrote in 2007 that because of Smaart's widespread acceptance at all levels of live sound mixing, the paradigm has reversed from the 1980s one of surprise at finding scientific tools in the concert sound scene to one of surprise if the observer finds that such tools are not being used to tune a sound system.
Smaart has been compared to other software-based sound system measurement tools such as SIM by Meyer Sound Laboratories and IASYS by Audio Control, both of which offer delay finder tools. Smaart has been described as "a newer, slimmer and much cheaper—but not necessarily better—version of the Meyer SIM system." MLSSA, developed by DRA Laboratories in 1987, and TEF, a time delay spectrometry product by Gold Line, are other products predating Smaart that are used to tune loudspeakers such as studio monitors. A software tool that reached Mac users in 1997 was named SpectraFoo, by Metric Halo. At the same time, some early Smaart users found that after tweaking their MIDI drivers they could get Smaart to work on an Apple computer, the software running inside an x86 emulator such as SoftWindows "with varying results".
History
As early as 1978, field analysis of rock concert audio was undertaken by Don Pearson, known by his nickname "Dr. Don", while working on sound systems used by the Grateful Dead. Pearson published articles about impulse response measurements taken during setup and testing of concert sound systems, and recommended the Dead buy an expensive Brüel & Kjær 2032 Dual Channel FFT analyzer, made for industrial engineering. Along with Dead audio engineer Dan Healy, Pearson developed methods of working with this system to set up sound systems on tour, and he assisted Meyer engineers working on a more suitable source-independent measurement system which was to become their SIM product. As well, Pearson had an "intimate involvement" with the engineers who were creating Smaart, including a meeting with Jamie Anderson.
Smaart was developed by Sam Berkow in association with Alexander "Thorny" Yuill-Thornton II, touring sound engineer with Luciano Pavarotti and The Three Tenors. In 1995, Berkow and Thorny founded SIA Software Company, produced Smaart and licensed the product to JBL. First exhibited in New York City at the Audio Engineering Society's 99th convention in October 1995 and described the next month in Billboard magazine, in May 1996 the software product was introduced at the price of $695, the equivalent of $ in today's currency. Studio Sound magazine described Smaart in 1996 as "the most talked about new product" at the 100th AES convention in Copenhagen, exemplifying a new trend in software audio measurement. Calvert Dayton joined SIA Software in 1996 as graphic designer, technical writer and website programmer.
Smaart was unusual because it helped audio professionals such as theatrical sound designers do what was previously possible only with highly sophisticated and expensive measurement devices. Audio system engineers from Clair Brothers used Smaart to tune the sound system at each stop during U2's PopMart Tour 1997–1998. As it increased in popularity, engineers who used Smaart found mixed results: touring veteran Doug Fowler wrote that "misuse was rampant" when the software first started appearing in the field. He warned users against faulty interpretation, saying "I still see bad decisions based on bad data, or bad decisions based on a fundamental lack of understanding of the issues at hand." Nevertheless, Clive Young, editor of Pro Sound News, wrote in 2005 that the introduction of Smaart in 1995 was the start of "the modern era of sound reinforcement system analysis software".
In 1998, JBL Smaart Pro won the TEC Awards category for computer software and peripherals. Eastern Acoustic Works (EAW) bought SIA Software, and brought in Jamie Anderson to manage the division. Version 3 was introduced under EAW's ownership, with the additional capability of accepting optional plug-ins which could be used to apply sound system adjustments, as measured by Smaart, to digital signal processing (DSP) equipment. The external third party DSP would perform the corrections indicated by Smaart.
Versions 4 and 5 were built upon the foundation of version 3, but with each major release, the application was getting more and more difficult to write, and further improvements appeared practically impossible to implement. For version 6, the designers decided to tear Smaart back down to its basics and rebuild it on a flexible multi-tasking, multi-platform framework which would allow it to be used on Mac OS X and Windows machines. Writing it took two years, and it was released in a package which included the earlier version 5 because there was not enough time to incorporate all elements of the existing feature set. Anderson said in 2007, "we released Version 6 without all of the features of 5, but we are adding those features back in." Smaart 6 was nominated for a TEC Award in 2007 but did not win.
EAW developed a digital mixing console prototype in 2005, the UMX.96; a console which incorporated SmaartLive 5 internally. Any selected channel on the mixer could be used as a source for Smaart analysis, displaying, for instance, the real-time results of channel equalization. The console could be configured to send multiple microphone inputs to Smaart, and it offered constant metering of sound pressure level in decibels. When it was put into production in 2007, band engineer Don Dodge took the mixer out on a world tour with Foreigner, the first concert mixed in March 2007. With its 15-inch touchscreen able to serve both audio control and Smaart analysis functions, Dodge continued to mix Foreigner on it throughout 2007 and 2008.
Rational Acoustics was incorporated on April 1, 2008. On November 9, 2009, under the leadership of Jamie and Karen Anderson, programmer Adam Black and technical chief Calvert Dayton, Rational Acoustics became the full owner of the Smaart brand. Rational released Smaart 7 on April 14, 2010, a version which uses less processing power than v5 and v6 because of efficiencies brought about in the redesigned code. Smaart 7 was written using a new object-oriented code architecture and was given improved data acquisition. Other new features include graphic user interface changes and delay tracking. Users can run simultaneously displayed real-time measurements in multiple windows, as many as their computer hardware will allow. Smaart 7 was nominated in 2010 for a TEC Award but did not win. In April 2011, Smaart 7 was named one of four Live Design Sound Products of the Year 2010–2011.
Version history
May 1996 – JBL-Smaart 1.0
March 1997 – JBL-Smaart 1.4
1998 – SIA-Smaart Pro 2
April 1999 – SIA-Smaart Pro 3
2000 – SIA SmaartLive 4
October 2000 – SIA SmaartLive 4.1
April 2001 – SIA SmaartLive 4.5
September 2001 – SIA SmaartLive 4.6
June 2002 – SIA SmaartLive 5
October 2003 – SIA SmaartLive 5.3
2006 – EAW Smaart 6
April 2010 – Smaart 7
October 2010 – Smaart 7.1
April 2011 – Smaart 7.2
July 2011 – Smaart 7.3
August 2012 – Smaart 7.4
April 2014 – Smaart 7.5
March 2016 – Smaart 8.0
November 2016 – Smaart 8.1
December 2017 – Smaart 8.2
October 2018 – Smaart 8.3
November 2019 – Smaart 8.4
June 2020 – Smaart 8.5
September 2022 – Smaart 9 (Suite, RT, LE, and SPL)
References
External links
Rational Acoustics Home Page
Smaart Basics: Example System Overview, video with Jamie Anderson
Sam Berkow NAMM Oral History Interview (2011)
1996 software
Audiovisual introductions in 1996
Acoustics
Windows multimedia software
MacOS multimedia software | Smaart | [
"Physics"
] | 3,300 | [
"Classical mechanics",
"Acoustics"
] |
44,628,427 | https://en.wikipedia.org/wiki/Cloud%20robotics | Cloud robotics is a field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centered on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centers, which can process and share information from various robots or agents (other machines, smart objects, humans, etc.). Humans can also delegate tasks to robots remotely through networks. Cloud computing technologies enable robot systems to be endowed with powerful capabilities whilst reducing costs. Thus, it is possible to build lightweight, low-cost, smarter robots with an intelligent "brain" in the cloud. The "brain" consists of a data center, knowledge base, task planners, deep learning, information processing, environment models, communication support, etc.
Components
Building a "cloud brain" for robots is the main object of cloud robotics. A cloud for robots potentially has at least six significant components:
Offering a global library of images, maps, and object data, often with geometry and mechanical properties, expert system, knowledge base (i.e. semantic web, data centres);
Massively-parallel computation on demand for sample-based statistical modelling and motion planning, task planning, multi-robot collaboration, scheduling and coordination of system;
Robot sharing of outcomes, trajectories, and dynamic control policies and robot learning support;
Human sharing of "open-source" code, data, and designs for programming, experimentation, and hardware construction;
On-demand human guidance and assistance for evaluation, learning, and error recovery;
Augmented human–robot interaction through various means (semantic knowledge bases, Apple Siri-like services, etc.).
Applications
Autonomous mobile robots: Google's self-driving cars are cloud robots. Each car uses the network to access Google's enormous database of maps, satellite imagery and environment models (such as Street View) and combines it with streaming data from GPS, cameras, and 3D sensors to monitor its own position within centimetres, and with past and current traffic patterns to avoid collisions. Each car can learn something about environments, roads, or driving conditions, and it sends the information to the Google cloud, where it can be used to improve the performance of other cars.
Cloud medical robots: A medical cloud (also called a healthcare cluster) consists of various services such as a disease archive, electronic medical records, a patient health management system, practice services, analytics services, clinic solutions, expert systems, etc. A robot can connect to the cloud to provide clinical service to patients, as well as deliver assistance to doctors (e.g. a co-surgery robot). Moreover, it also provides a collaboration service by sharing information between doctors and caregivers about clinical treatment.
Assistive robots: A domestic robot can be employed for healthcare and life monitoring for elderly people. The system collects the health status of users and exchanges information with a cloud expert system or with doctors to make life easier for elderly people, especially those with chronic diseases. For example, the robots can provide support to prevent the elderly from falling down, as well as emergency health support for conditions such as heart disease or bleeding. Caregivers of elderly people can also be notified by the robot over the network in case of an emergency.
Industrial robots: As highlighted by the German government's Industry 4.0 Plan, "Industry is on the threshold of the fourth industrial revolution. Driven by the Internet, the real and virtual worlds are growing closer and closer together to form the Internet of Things. Industrial production of the future will be characterised by the strong individualisation of products under the conditions of highly flexible (large series) production, the extensive integration of customers and business partners in business and value-added processes, and the linking of production and high-quality services leading to so-called hybrid products." In manufacturing, such cloud-based robot systems could learn to handle tasks such as threading wires or cables, or aligning gaskets, from a professional knowledge base. A group of robots can share information for collaborative tasks. Moreover, a consumer is able to place customised product orders directly with manufacturing robots through online ordering systems. Another potential paradigm is shopping-delivery robot systems: once an order is placed, a warehouse robot dispatches the item to an autonomous car or autonomous drone to deliver it to its recipient.
Learning a cloud brain for robots
Approach: Lifelong Learning. Leveraging lifelong learning to build a cloud brain for robots was proposed by CAS. The authors were motivated by the problem of how to make robots fuse and transfer their experience so that they can effectively use prior knowledge and quickly adapt to new environments. To address the problem, they present a learning architecture for navigation in cloud robotic systems: Lifelong Federated Reinforcement Learning (LFRL). In this work, they propose a knowledge fusion algorithm for upgrading a shared model deployed on the cloud, and then introduce effective transfer learning methods used in LFRL. LFRL is consistent with human cognitive science and fits well in cloud robotic systems. Experiments show that LFRL greatly improves the efficiency of reinforcement learning for robot navigation. The cloud robotic system deployment also shows that LFRL is capable of fusing prior knowledge.
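The cited papers define their own fusion procedures; purely as a rough illustration of the shared idea of merging locally learned parameters into one cloud model, the sketch below performs a sample-weighted average of model weights uploaded by several robots. This is a federated-averaging-style step with invented names and numbers, not the actual LFRL algorithm.

```python
import numpy as np

def fuse_on_cloud(local_weights, sample_counts):
    """Fuse locally trained parameter vectors into one shared cloud model.

    local_weights : list of 1-D arrays (one per robot, all the same shape).
    sample_counts : how many training samples each robot contributed.
    Returns the sample-weighted average, a crude stand-in for the knowledge
    fusion step that frameworks such as LFRL define more carefully.
    """
    counts = np.asarray(sample_counts, dtype=float)
    mix = counts / counts.sum()
    stacked = np.stack(local_weights)              # shape: (n_robots, n_params)
    return np.tensordot(mix, stacked, axes=1)      # weighted parameter average

# Three robots upload navigation-policy parameters learned in different rooms
shared_model = fuse_on_cloud(
    [np.array([0.2, 1.0, -0.5]),
     np.array([0.4, 0.8, -0.3]),
     np.array([0.1, 1.2, -0.6])],
    sample_counts=[500, 1500, 1000],
)
print(shared_model)   # the fused model would then be pushed back down to the robots
```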
Approach: Federated Learning. Leveraging federated learning to build a cloud brain for robots was proposed in 2020. Humans are capable of learning a new behavior by observing others perform the skill; similarly, robots can implement this by imitation learning. Furthermore, with external guidance, humans can master the new behavior more efficiently. So, how can robots achieve this? To address the issue, the authors present a novel framework named FIL. It provides a heterogeneous knowledge fusion mechanism for cloud robotic systems. A knowledge fusion algorithm in FIL is then proposed; it enables the cloud to fuse heterogeneous knowledge from local robots and generate guide models for robots with service requests. After that, a knowledge transfer scheme is introduced to facilitate local robots acquiring knowledge from the cloud. With FIL, a robot is capable of utilizing knowledge from other robots to improve the accuracy and efficiency of its imitation learning. Compared with transfer learning and meta-learning, FIL is more suitable for deployment in cloud robotic systems. The authors conduct experiments on a self-driving task for robots (cars). The experimental results demonstrate that the shared model generated by FIL increases the imitation learning efficiency of local robots in cloud robotic systems.
Approach: Peer-assisted Learning. Leveraging peer-assisted learning to build a cloud brain for robots was proposed by UM. A technological revolution is occurring in the field of robotics thanks to data-driven deep learning technology. However, building datasets for each local robot is laborious, and data islands between local robots prevent data from being utilized collaboratively. To address this issue, the work presents Peer-Assisted Robotic Learning (PARL), which is inspired by peer-assisted learning in cognitive psychology and pedagogy. PARL implements data collaboration within the framework of cloud robotic systems. Both data and models are shared by robots to the cloud after semantic computing and training locally. The cloud converges the data and performs augmentation, integration, and transferring; finally, the result of fine-tuning on this larger shared dataset in the cloud is delivered to the local robots. Furthermore, the authors propose the DAT Network (Data Augmentation and Transferring Network) to implement the data processing in PARL. The DAT Network can realize the augmentation of data from multiple local robots. The authors conduct experiments on a simplified self-driving task for robots (cars). The DAT Network shows a significant improvement in augmentation in self-driving scenarios, and the self-driving experimental results also demonstrate that PARL is capable of improving learning through data collaboration among local robots.
Research
RoboEarth was funded by the European Union's Seventh Framework Programme for research and technological development, specifically to explore the field of cloud robotics. The goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, paving the way for rapid advances in machine cognition and behaviour, and ultimately, for more subtle and sophisticated human-machine interaction. RoboEarth offers a Cloud Robotics infrastructure. RoboEarth's World-Wide-Web style database stores knowledge generated by humans – and robots – in a machine-readable format. Data stored in the RoboEarth knowledge base include software components, maps for navigation (e.g., object locations, world models), task knowledge (e.g., action recipes, manipulation strategies), and object recognition models (e.g., images, object models). The RoboEarth Cloud Engine includes support for mobile robots, autonomous vehicles, and drones, which require much computation for navigation.
Rapyuta is an open source cloud robotics framework based on the RoboEarth Engine, developed by robotics researchers at ETHZ. Within the framework, each robot connected to Rapyuta can have a secured computing environment, giving it the ability to move its heavy computation into the cloud. In addition, the computing environments are tightly interconnected with each other and have a high-bandwidth connection to the RoboEarth knowledge repository.
KnowRob is an extension of RoboEarth. It is a knowledge processing system that combines knowledge representation and reasoning methods with techniques for acquiring knowledge and for grounding the knowledge in a physical system, and it can serve as a common semantic framework for integrating information from different sources.
RoboBrain is a large-scale computational system that learns from publicly available Internet resources, computer simulations, and real-life robot trials. It accumulates everything related to robotics into a comprehensive and interconnected knowledge base. Applications include prototyping for robotics research, household robots, and self-driving cars. The goal is as direct as the project's name: to create a centralised, always-online brain for robots to tap into. The project is led by Stanford University and Cornell University, and it is supported by the National Science Foundation, the Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative, whose goal is to advance robotics to help make the United States more competitive in the world economy.
MyRobots is a service for connecting robots and intelligent devices to the Internet. It can be regarded as a social network for robots and smart objects (i.e. a Facebook for robots). Through socialising, collaborating and sharing, robots can benefit from those interactions by sharing their sensor information, giving insight into their current state.
A project funded by the INTERREG IVA France (Channel) – England European cross-border co-operation programme aims to develop new technologies for disabled people through social and technological innovation and through the users' social and psychological integrity. The objective is to produce a cognitive ambient assistive living system, with a healthcare cluster in the cloud and domestic service robots such as humanoids and intelligent wheelchairs that connect to the cloud.
ROS (Robot Operating System) provides an eco-system to support cloud robotics. ROS is a flexible and distributed framework for robot software development. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behaviour across a wide variety of robotic platforms. A library for ROS that is a pure Java implementation, called rosjava, allows Android applications to be developed for robots. Since Android has a booming market and over a billion users, it is significant in the field of cloud robotics.
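For concreteness, a minimal ROS 1 node written with the rospy client library looks like the sketch below; the node and topic names are arbitrary. rosjava exposes a comparable API to Java and Android applications.

```python
#!/usr/bin/env python
# Minimal ROS 1 publisher node using the rospy client library.
import rospy
from std_msgs.msg import String

def talker():
    rospy.init_node("cloud_status_talker")               # register with the ROS master
    pub = rospy.Publisher("robot_status", String, queue_size=10)
    rate = rospy.Rate(1)                                 # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data="awaiting cloud task"))  # arbitrary example payload
        rate.sleep()

if __name__ == "__main__":
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```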
DAVinci Project is a proposed software framework that seeks to explore the possibilities of parallelizing some robotics algorithms as Map/Reduce tasks in Hadoop. The project aims to build a cloud computing environment capable of providing a compute cluster built with commodity hardware, exposing a suite of robotic algorithms as a SaaS and sharing data co-operatively across the robotic ecosystem. This initiative is not available publicly.
C2RO (C2RO Cloud Robotics) is a platform that processes real-time applications such as collision avoidance and object recognition in the cloud. Previously, high latency times prevented these applications from being processed in the cloud, thus requiring on-system computational hardware (e.g. a graphics processing unit, or GPU). C2RO published a peer-reviewed paper at IEEE PIMRC17 showing its platform could make autonomous navigation and other AI services available from the cloud on robots, even those with limited computational hardware (e.g. a Raspberry Pi). C2RO eventually claimed to be the first platform to demonstrate cloud-based SLAM (simultaneous localization and mapping) at RoboBusiness in September 2017.
Noos is a cloud robotics service, providing centralised intelligence to robots that are connected to it. The service went live in December 2017. By using the Noos-API, developers could access services for computer vision, deep learning, and SLAM. Noos was developed and maintained by Ortelio Ltd.
Rocos is a centralized cloud robotics platform that provides the developer tooling and infrastructure to build, test, deploy, operate and automate robot fleets at scale. Founded in October 2017, the platform went live in January 2019.
Limitations of cloud robotics
Though robots can benefit from various advantages of cloud computing, the cloud is not the solution to every problem in robotics.
Controlling a robot's motion, which relies heavily on real-time sensor data and controller feedback, may not benefit much from the cloud.
Tasks that involve real-time execution require on-board processing.
Cloud-based applications can become slow or unavailable due to high-latency responses or network failures. If a robot relies too much on the cloud, a fault in the network could leave it "brainless."
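A common way to mitigate the last limitation is to treat the cloud as an optional accelerator: the robot queries the remote service with a strict deadline and falls back to a simpler on-board routine whenever the response is late or the network is down. The sketch below illustrates that pattern; the endpoint URL, message format and planner logic are all invented.

```python
import requests

CLOUD_URL = "https://cloud-brain.example.com/plan"   # hypothetical cloud "brain" endpoint
DEADLINE_S = 0.2                                     # hard budget for one round trip

def onboard_plan(sensor_data):
    """Very simple local fallback planner (placeholder logic)."""
    return {"action": "stop" if sensor_data["obstacle_m"] < 0.5 else "forward"}

def plan(sensor_data):
    """Prefer the richer cloud planner, but never block past the deadline."""
    try:
        reply = requests.post(CLOUD_URL, json=sensor_data, timeout=DEADLINE_S)
        reply.raise_for_status()
        return reply.json()                          # plan computed in the cloud
    except requests.RequestException:
        return onboard_plan(sensor_data)             # network hitch: keep a local "brain"

print(plan({"obstacle_m": 0.3}))
```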
Challenges
The research and development of cloud robotics has following potential issues and challenges:
Scalable parallel grid computing: parallelisation schemes that scale with the size of the automation infrastructure.
Effective load balancing: Balancing operations between local and cloud computation.
Knowledge bases and representations
Collective learning for automation in cloud
Infrastructure/Platform or Software as a Service
Internet of Things for robotics
Integrated and collaborative fault-tolerant control
Big Data: Data collected and/or disseminated over large, accessible networks can enable decisions for classification problems or reveal patterns.
Wireless communication, Connectivity to the cloud
System architectures of robot cloud
Open-source, open-access infrastructures
Workload-sharing
Standards and Protocols
Risks
Environmental security - The concentration of computing resources and users in a cloud computing environment also represents a concentration of security threats. Because of their size and significance, cloud environments are often targeted by attacks on virtual machines, bot malware, brute force attacks, and other threats.
Data privacy and security - Hosting confidential data with cloud service providers involves the transfer of a considerable amount of an organisation's control over data security to the provider. For example, every cloud contains a huge amount of information from clients, including personal data. If a household robot is hacked, users risk having their personal privacy and security compromised: information such as the house layout, snapshots of daily life and views of the home may be accessed and leaked by criminals. Another problem is that once a robot is hacked and controlled by someone else, the user may be put in danger.
Ethical problems - The ethics of robotics, especially of cloud-based robotics, must be considered. Since a robot is connected via networks, there is a risk that it can be accessed by other people. If a robot gets out of control and carries out illegal activities, who should be held responsible?
History
The term "Cloud Robotics" first appeared in the public lexicon as part of a talk given by James Kuffner in 2010 at the IEEE/RAS International Conference on Humanoid Robotics entitled "Cloud-enabled Robots".
Since then, "Cloud Robotics" has become a general term encompassing the concepts of information sharing, distributed intelligence, and fleet learning that is possible via networked robots and modern cloud computing. Kuffner was part of Google when he delivered his presentation and the technology company has teased its various cloud robotics initiatives until 2019 when it launched the Google Cloud Robotics Platform for developers.
From the early days of robot development, it was common to have computation done on a computer that was separated from the actual robot mechanism but connected to it by wires for power and control. As wireless communication technology developed, new forms of experimental "remote brain" robots were developed that were controlled by small, onboard compute resources for robot control and safety and were wirelessly connected to a more powerful remote computer for heavy processing.
The term "cloud computing" was popularized with the launch of Amazon EC2 in 2006. It marked the availability of high-capacity networks, low-cost computers and storage devices as well as the widespread adoption of hardware virtualization and service-oriented architecture.
In a correspondence with Popular Science in July 2006, Kuffner wrote that after a robot was programmed or successfully learned to perform a task it could share its model and relevant data with all other cloud-connected robots:
Some publications and events related to Cloud Robotics (in chronological order):
The IEEE RAS Technical Committee on Internet and Online Robots was founded by Ken Goldberg and Roland Siegwart et al. in May 2001. The committee then expanded to IEEE Society of Robotics and Automation's Technical Committee on Networked Robots in 2004.
James J. Kuffner, a former CMU robotics professor and research scientist at Google, now CEO of Toyota Research Institute - Advanced Development, spoke on cloud robotics at the IEEE/RAS International Conference on Humanoid Robotics 2010. The talk described "a new approach to robotics that takes advantage of the Internet as a resource for massively parallel computation and sharing of vast data resources."
Ryan Hickman, a Google Product Manager, led an internal volunteer effort in 2010 to connect robots with the Google's cloud services. This work was later expanded to include open source ROS support and was demonstrated on stage by Ryan Hickman, Damon Kohler, Brian Gerkey, and Ken Conley at Google I/O 2011.
The National Robotics Initiative of the US, announced in 2011, aimed to explore how robots can enhance the work of humans rather than replacing them. It claims that the next generation of robots will be more aware than oblivious, and more social than solitary.
NRI Workshop on Cloud Robotics: Challenges and Opportunities- February 2013.
A Roadmap for U.S. Robotics: From Internet to Robotics, 2013 Edition - by Georgia Institute of Technology, Carnegie Mellon University Robotics Technology Consortium, University of Pennsylvania, University of Southern California, Stanford University, University of California–Berkeley, University of Washington, and Massachusetts Institute of Technology. The Roadmap highlighted "Cloud" Robotics and Automation for Manufacturing in the future years.
Cloud-Based Robot Grasping with the Google Object Recognition Engine.
2013 IEEE IROS Workshop on Cloud Robotics. Tokyo. November 2013.
Cloud Robotics - Enabling cloud computing for robots. The author proposed several paradigms for using cloud computing in robotics; some potential fields and challenges were identified. R. Li, 2014.
Special Issue on Cloud Robotics and Automation- A special issue of the IEEE Transactions on Automation Science and Engineering, April 2015.
Robot APP Store Robot Applications in Cloud, provide applications for robot just like computer/phone app.
DARPA Cloud Robotics.
The first industrial cloud robotics platform, Tend, was founded by Mark Silliman, James Gentes and Robert Kieffer in February 2017. Tend allows robots to be remotely controlled and monitored via websockets and NodeJs.
Cloud robotic architectures: directions for future research from a comparative analysis.
See also
Fog computing
Fog robotics
Internet of things
Multi agent system
Outline of robotics
Swarm robotics
Ubiquitous robot
Cloud storage
References
External links
MyRobots
The age of cloud robotics - Robotics business review.
Cloud Robotics - IEEE Spectrum
Cloud robotics on RoboHub
Cloud computing: state-of-the-art and research challenges
Automation EXPO21XX
Cloud Robotics with Ken Goldberg (Video)
Cloud Robotics Hackathon
Robotics
Cloud computing
Assistive technology | Cloud robotics | [
"Engineering"
] | 4,022 | [
"Robotics",
"Automation"
] |
44,628,821 | https://en.wikipedia.org/wiki/Matrix%20regularization | In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over
min_w ||Xw - y||^2 + λ ||w||^2
to find a vector w that is a stable solution to the regression problem. When the system is described by a matrix rather than a vector, this problem can be written as
min_W ||XW - Y||_F^2 + λ ||W||
where the vector norm enforcing a regularization penalty on w has been extended to a matrix norm on W.
Matrix regularization has applications in matrix completion, multivariate regression, and multi-task learning. Ideas of feature and group selection can also be extended to matrices, and these can be generalized to the nonparametric case of multiple kernel learning.
Basic definition
Consider a matrix to be learned from a set of examples, , where goes from to , and goes from to . Let each input matrix be , and let be of size . A general model for the output can be posed as
where the inner product is the Frobenius inner product. For different applications the matrices will have different forms, but for each of these the optimization problem to infer can be written as
where defines the empirical error for a given , and is a matrix regularization penalty. The function is typically chosen to be convex and is often selected to enforce sparsity (using -norms) and/or smoothness (using -norms). Finally, is in the space of matrices with Frobenius inner product .
General applications
Matrix completion
In the problem of matrix completion, the matrix takes the form
where and are the canonical basis in and . In this case the role of the Frobenius inner product is to select individual elements from the matrix . Thus, the output is a sampling of entries from the matrix .
The problem of reconstructing W from a small set of sampled entries is possible only under certain restrictions on the matrix, and these restrictions can be enforced by a regularization function. For example, it might be assumed that W is low-rank, in which case the regularization penalty can take the form of a nuclear norm,
||W||_* = Σ_i σ_i(W)
where σ_i, with i running from 1 to the smaller of the two matrix dimensions, are the singular values of W.
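A standard computational building block for nuclear-norm-regularized problems is singular value soft-thresholding, the proximal operator of the nuclear norm: small singular values are set to zero, which encourages low rank. The NumPy sketch below is illustrative only (the matrix and threshold are arbitrary), not a complete matrix-completion solver.

```python
import numpy as np

def svt(W, tau):
    """Proximal operator of tau * (nuclear norm): soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))   # exactly rank 3
noisy = low_rank + 0.1 * rng.standard_normal(low_rank.shape)

denoised = svt(noisy, tau=1.0)
# Shrinking the spectrum drives the small, noise-dominated singular values to zero,
# so the result is approximately low-rank again.
print(np.linalg.matrix_rank(denoised, tol=1e-8))
```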
Multivariate regression
Models used in multivariate regression are parameterized by a matrix of coefficients. In the Frobenius inner product above, each matrix is
such that the output of the inner product is the dot product of one row of the input with one column of the coefficient matrix. The familiar form of such models is
Many of the vector norms used in single variable regression can be extended to the multivariate case. One example is the squared Frobenius norm, which can be viewed as an ℓ2-norm acting either entrywise, or on the singular values of the matrix:
||W||_F^2 = Σ_{i,j} w_{ij}^2 = tr(W^T W) = Σ_i σ_i^2
In the multivariate case the effect of regularizing with the Frobenius norm is the same as the vector case; very complex models will have larger norms, and, thus, will be penalized more.
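For this squared-Frobenius (Tikhonov-style) penalty, the multivariate problem has the same closed form as ridge regression, applied jointly to all output columns. A small NumPy sketch with invented data shapes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, t = 200, 10, 4                        # samples, input features, output variables
X = rng.standard_normal((n, d))
W_true = rng.standard_normal((d, t))
Y = X @ W_true + 0.05 * rng.standard_normal((n, t))

lam = 1.0
# argmin_W ||X W - Y||_F^2 + lam * ||W||_F^2 has the ridge closed form below
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
print(np.linalg.norm(W_hat - W_true))        # small: the penalty only shrinks W slightly
```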
Multi-task learning
The setup for multi-task learning is almost the same as the setup for multivariate regression. The primary difference is that the input variables are also indexed by task (columns of ). The representation with the Frobenius inner product is then
The role of matrix regularization in this setting can be the same as in multivariate regression, but matrix norms can also be used to couple learning problems across tasks. In particular, note that for the optimization problem
the solutions corresponding to each column of are decoupled. That is, the same solution can be found by solving the joint problem, or by solving an isolated regression problem for each column. The problems can be coupled by adding an additional regularization penalty on the covariance of solutions
where models the relationship between tasks. This scheme can be used to both enforce similarity of solutions across tasks, and to learn the specific structure of task similarity by alternating between optimizations of and . When the relationship between tasks is known to lie on a graph, the Laplacian matrix of the graph can be used to couple the learning problems.
Spectral regularization
Regularization by spectral filtering has been used to find stable solutions to problems such as those discussed above by addressing ill-posed matrix inversions (see for example Filter function for Tikhonov regularization). In many cases the regularization function acts on the input (or kernel) to ensure a bounded inverse by eliminating small singular values, but it can also be useful to have spectral norms that act on the matrix that is to be learned.
There are a number of matrix norms that act on the singular values of the matrix. Frequently used examples include the Schatten p-norms, with p = 1 or 2. For example, matrix regularization with a Schatten 1-norm, also called the nuclear norm, can be used to enforce sparsity in the spectrum of a matrix. This has been used in the context of matrix completion when the matrix in question is believed to have a restricted rank. In this case the optimization problem becomes:
Spectral Regularization is also used to enforce a reduced rank coefficient matrix in multivariate regression. In this setting, a reduced rank coefficient matrix can be found by keeping just the top singular values, but this can be extended to keep any reduced set of singular values and vectors.
Structured sparsity
Sparse optimization has become the focus of much research interest as a way to find solutions that depend on a small number of variables (see e.g. the Lasso method). In principle, entry-wise sparsity can be enforced by penalizing the entry-wise -norm of the matrix, but the -norm is not convex. In practice this can be implemented by convex relaxation to the -norm. While entry-wise regularization with an -norm will find solutions with a small number of nonzero elements, applying an -norm to different groups of variables can enforce structure in the sparsity of solutions.
The most straightforward example of structured sparsity uses the ℓp,q norm with p = 2 and q = 1:
||W||_{2,1} = Σ_i ||w_i||_2
where w_i denotes the i-th row of W.
For example, the ℓ2,1 norm is used in multi-task learning to group features across tasks, such that all the elements in a given row of the coefficient matrix can be forced to zero as a group. The grouping effect is achieved by taking the ℓ2-norm of each row, and then taking the total penalty to be the sum of these row-wise norms. This regularization results in rows that will tend to be all zeros, or dense. The same type of regularization can be used to enforce sparsity column-wise by taking the ℓ2-norms of each column.
More generally, the norm can be applied to arbitrary groups of variables:
where the index is across groups of variables, and indicates the cardinality of group .
Algorithms for solving these group sparsity problems extend the more well-known Lasso and group Lasso methods by allowing overlapping groups, for example, and have been implemented via matching pursuit: and proximal gradient methods. By writing the proximal gradient with respect to a given coefficient, , it can be seen that this norm enforces a group-wise soft threshold
where is the indicator function for group norms .
Thus, using ℓp,q norms it is straightforward to enforce structure in the sparsity of a matrix either row-wise, column-wise, or in arbitrary blocks. By enforcing group norms on blocks in multivariate or multi-task regression, for example, it is possible to find groups of input and output variables, such that defined subsets of output variables (columns in the matrix) will depend on the same sparse set of input variables.
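The row-wise grouping described above has a simple proximal (group soft-thresholding) operator: each row is shrunk toward zero as a unit and is set exactly to zero once its norm falls below the threshold, which is what produces rows that are entirely zero. A minimal NumPy sketch with an arbitrary matrix and threshold:

```python
import numpy as np

def prox_l21_rows(W, tau):
    """Proximal operator of tau * sum_i ||row_i(W)||_2 (row-wise group soft-threshold)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * W                       # rows with norm <= tau become exactly zero

W = np.array([[3.0, 4.0],                  # row norm 5.0  -> kept, shrunk by 20%
              [0.1, 0.2],                  # row norm ~0.22 -> zeroed out
              [-2.0, 0.0]])                # row norm 2.0  -> kept, shrunk by 50%
print(prox_l21_rows(W, tau=1.0))
```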
Multiple kernel selection
The ideas of structured sparsity and feature selection can be extended to the nonparametric case of multiple kernel learning. This can be useful when there are multiple types of input data (color and texture, for example) with different appropriate kernels for each, or when the appropriate kernel is unknown. If there are two kernels, for example, with feature maps and that lie in corresponding reproducing kernel Hilbert spaces , then a larger space, , can be created as the sum of two spaces:
assuming linear independence in and . In this case the -norm is again the sum of norms:
Thus, by choosing a matrix regularization function as this type of norm, it is possible to find a solution that is sparse in terms of which kernels are used, but dense in the coefficient of each used kernel. Multiple kernel learning can also be used as a form of nonlinear variable selection, or as a model aggregation technique (e.g. by taking the sum of squared norms and relaxing sparsity constraints). For example, each kernel can be taken to be the Gaussian kernel with a different width.
See also
Regularization (mathematics)
References
Estimation theory
Machine learning
Matrices | Matrix regularization | [
"Mathematics",
"Engineering"
] | 1,800 | [
"Matrices (mathematics)",
"Artificial intelligence engineering",
"Mathematical objects",
"Machine learning"
] |
44,629,679 | https://en.wikipedia.org/wiki/Compatibilization | In polymer chemistry, compatibilization is the addition of a substance to an immiscible blend of polymers that will increase their stability. Polymer blends are typically described by coarse, unstable phase morphologies; this results in poor mechanical properties. Compatibilizing the system will make a more stable and better blended phase morphology by creating interactions between the two previously immiscible polymers. Not only does this enhance the mechanical properties of the blend, but it often yields properties that are generally not attainable in either single pure component.
Block or graft copolymers as compatibilizing agents
Block or graft copolymers are commonly used as compatibilizing agents. The copolymer used is made of the two components in the immiscible blend. The respective portions of the copolymer are able to interact with the two phases of the blend to make the phase morphology more stable. The increased stability is caused by reducing the size of the phase-separated particles in the blend. The size reduction comes from the lower interfacial tension, due to block copolymers accumulating at the many interfaces between the two blend components. This helps the immiscible blends break up into smaller particles in the melt phase. In turn, these phase-separated particles will not be as inclined to consolidate and grow because the interfacial tension is now much lower. This stabilizes the polymer blend into a usable product.
An example of this is ethylene/propylene copolymers, which can act as good compatibilizing agents for blends of polypropylene and low-density polyethylene. In this specific application, longer ethylene sequences are preferred in the copolymer. This is because cocrystallization also factors into this case, and the longer ethylene sequences will retain some residual crystallinity.
Reactive compatibilization
Reactive compatibilization is a procedure in which immiscible polymer blends are compatibilized by creating copolymers in the solution or melt state. Copolymers are formed when the proper functional groups in each component of the immiscible blend interact in the compatibilization process. These interactions include hydrogen, ionic or covalent bonding. The functional groups that cause these interactions can be the end groups that are already present in the blend polymers (e.g., carboxylic acids or alcohols on polyesters, or amine groups on nylons). Another approach is to add functional groups to the component chains by grafting. The many possible functional groups allow for many types of commercial polymer blends, including polyamide/polyalkene blend systems.
There are a number of advantages reactive compatibilization has over using the traditional block or graft copolymer as the compatibilizing agent. Unlike the latter approach, reactive compatibilization does not rely on diffusing pre-formed copolymers. Copolymers form at the interfaces of the two immiscible blends and do not need to be dispersed. In the traditional approach the system needs to be well mixed when adding the copolymers. Reactive compatibilization is also much more efficient than traditional compatibilization. This is because in reactive compatibilization, functional groups are either already present, or easily grafted on the blend components. In the traditional compatibilization, copolymers must be synthesized on a case-by-case basis for the components to blend.
References
Polymers
Polymer chemistry | Compatibilization | [
"Chemistry",
"Materials_science",
"Engineering"
] | 742 | [
"Polymers",
"Materials science",
"Polymer chemistry"
] |
44,632,031 | https://en.wikipedia.org/wiki/M-theory%20%28learning%20framework%29 | In machine learning and computer vision, M-theory is a learning framework inspired by feed-forward processing in the ventral stream of visual cortex and originally developed for recognition and classification of objects in visual scenes. M-theory was later applied to other areas, such as speech recognition. On certain image recognition tasks, algorithms based on a specific instantiation of M-theory, HMAX, achieved human-level performance.
The core principle of M-theory is extracting representations invariant under various transformations of images (translation, scale, 2D and 3D rotation and others). In contrast with other approaches using invariant representations, in M-theory they are not hardcoded into the algorithms, but learned. M-theory also shares some principles with compressed sensing. The theory proposes a multilayered hierarchical learning architecture, similar to that of the visual cortex.
Intuition
Invariant representations
A great challenge in visual recognition tasks is that the same object can be seen in a variety of conditions. It can be seen from different distances, from different viewpoints, under different lighting, partially occluded, etc. In addition, for particular classes of objects, such as faces, highly complex specific transformations may be relevant, such as changing facial expressions. For learning to recognize images, it is greatly beneficial to factor out these variations. This results in a much simpler classification problem and, consequently, in a great reduction of the sample complexity of the model.
A simple computational experiment illustrates this idea. Two instances of a classifier were trained to distinguish images of planes from those of cars. For training and testing of the first instance, images with arbitrary viewpoints were used. Another instance received only images seen from a particular viewpoint, which was equivalent to training and testing the system on invariant representation of the images. One can see that the second classifier performed quite well even after receiving a single example from each category, while performance of the first classifier was close to random guess even after seeing 20 examples.
Invariant representations have been incorporated into several learning architectures, such as neocognitrons. Most of these architectures, however, provided invariance through custom-designed features or properties of the architecture itself. While this helps to take into account some sorts of transformations, such as translations, it is very nontrivial to accommodate other sorts of transformations, such as 3D rotations and changing facial expressions. M-theory provides a framework for how such transformations can be learned. In addition to higher flexibility, this theory also suggests how the human brain may have similar capabilities.
Templates
Another core idea of M-theory is close in spirit to ideas from the field of compressed sensing. An implication of the Johnson–Lindenstrauss lemma says that a particular number of images can be embedded into a low-dimensional feature space with the same distances between images by using random projections. This result suggests that the dot product between the observed image and some other image stored in memory, called a template, can be used as a feature helping to distinguish the image from other images. The template need not be related to the image in any way; it could be chosen randomly.
Combining templates and invariant representations
The two ideas outlined in previous sections can be brought together to construct a framework for learning invariant representations. The key observation is how the dot product between an image I and a template t behaves when the image is transformed (by such transformations as translations, rotations, scales, etc.). If the transformation g is a member of a unitary group of transformations, then the following holds:
⟨gI, t⟩ = ⟨I, g⁻¹t⟩    (1)
In other words, the dot product of the transformed image and a template is equal to the dot product of the original image and the inversely transformed template. For instance, for an image rotated by 90 degrees, the inversely transformed template would be rotated by −90 degrees.
Consider the set of dot products of an image I with all possible transformations of the template: {⟨I, gt⟩ : g ∈ G}. If one applies a transformation g' to the image, the set becomes {⟨g'I, gt⟩ : g ∈ G}. Because of property (1), this is equal to {⟨I, g'⁻¹gt⟩ : g ∈ G}. The set of elements g'⁻¹g is equal to just the set of all elements in G. To see this, note that every g'⁻¹g is in G due to the closure property of groups, and for every element ĝ in G there exists its prototype in G (namely, g'ĝ). Thus, the set of dot products remains the same even though a transformation was applied to the image. This set by itself may serve as a (very cumbersome) invariant representation of an image. More practical representations can be derived from it.
In the introductory section, it was claimed that M-theory allows invariant representations to be learned. This is because templates and their transformed versions can be learned from visual experience – by exposing the system to sequences of transformations of objects. It is plausible that similar visual experiences occur in the early period of human life, for instance when infants twiddle toys in their hands. Because templates may be totally unrelated to the images that the system will later try to classify, memories of these visual experiences may serve as a basis for recognizing many different kinds of objects in later life. However, as shown later, for some kinds of transformations specific templates are needed.
Theoretical aspects
From orbits to distribution measures
To implement the ideas described in previous sections, one needs to know how to derive a computationally efficient invariant representation of an image. Such a unique representation of each image can be characterized by a set of one-dimensional probability distributions (empirical distributions of the dot products between the image and a set of templates stored during unsupervised learning). These probability distributions can in turn be described by either histograms or a set of statistical moments, as shown below.
Orbit is a set of images generated from a single image under the action of the group .
In other words, images of an object and of its transformations correspond to an orbit . If two orbits have a point in common they are identical everywhere, i.e. an orbit is an invariant and unique representation of an image. So, two images are called equivalent when they belong to the same orbit: if such that . Conversely, two orbits are different if none of the images in one orbit coincide with any image in the other.
A natural question arises: how can one compare two orbits? There are several possible approaches. One of them employs the fact that intuitively two empirical orbits are the same irrespective of the ordering of their points. Thus, one can consider a probability distribution induced by the group's action on images ( can be seen as a realization of a random variable).
This probability distribution can be almost uniquely characterized by one-dimensional probability distributions induced by the (one-dimensional) results of projections , where are a set of templates (randomly chosen images) (based on the Cramer–Wold theorem and concentration of measures).
Consider images . Let , where is a universal constant. Then
with probability , for all .
This result (informally) says that an approximately invariant and unique representation of an image can be obtained from the estimates of 1-D probability distributions for . The number of projections needed to discriminate orbits, induced by images, up to precision (and with confidence ) is , where is a universal constant.
To classify an image, the following "recipe" can be used:
Memorize a set of images/objects called templates;
Memorize observed transformations for each template;
Compute dot products of its transformations with image;
Compute histogram of the resulting values, called signature of the image;
Compare the obtained histogram with signatures stored in memory.
Estimates of such one-dimensional probability density functions (PDFs) can be written in terms of histograms as , where is a set of nonlinear functions. These 1-D probability distributions can be characterized with N-bin histograms or set of statistical moments. For example, HMAX represents an architecture in which pooling is done with a max operation.
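As a toy illustration of the last three steps of the recipe above (this is not the HMAX implementation; the "images" are 1-D vectors, the transformation group is the set of cyclic shifts, and all sizes are arbitrary), the sketch below builds a histogram signature from dot products with all shifted copies of a few random templates and checks that the signature is unchanged when the input itself is shifted.

```python
import numpy as np

rng = np.random.default_rng(2)
D, K, BINS = 32, 5, 10                        # "image" length, number of templates, histogram bins

def signature(image, templates, bins):
    """Concatenated histograms of <image, g(template)> over all cyclic shifts g."""
    parts = []
    for t in templates:
        dots = [image @ np.roll(t, s) for s in range(len(image))]   # all transformed templates
        hist, _ = np.histogram(dots, bins=bins, range=(-20.0, 20.0))
        parts.append(hist)
    return np.concatenate(parts)

templates = rng.standard_normal((K, D))
image = rng.standard_normal(D)
shifted = np.roll(image, 7)                   # a "transformed" view of the same object

print(np.array_equal(signature(image, templates, BINS),
                     signature(shifted, templates, BINS)))   # True: the signature is invariant
```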
Non-compact groups of transformations
In the "recipe" for image classification, groups of transformations are approximated with finite number of transformations. Such approximation is possible only when the group is compact.
Such groups as all translations and all scalings of the image are not compact, as they allow arbitrarily big transformations. However, they are locally compact. For locally compact groups, invariance is achievable within certain range of transformations.
Assume that is a subset of transformations from for which the transformed patterns exist in memory. For an image and template , assume that is equal to zero everywhere except some subset of . This subset is called support of and denoted as . It can be proven that if for a transformation , support set will also lie within , then signature of is invariant with respect to . This theorem determines the range of transformations for which invariance is guaranteed to hold.
One can see that the smaller is , the larger is the range of transformations for which invariance is guaranteed to hold. It means that for a group that is only locally compact, not all templates would work equally well anymore. Preferable templates are those with a reasonably small for a generic image. This property is called localization: templates are sensitive only to images within a small range of transformations. Although minimizing is not absolutely necessary for the system to work, it improves approximation of invariance. Requiring localization simultaneously for translation and scale yields a very specific kind of templates: Gabor functions.
The desirability of custom templates for non-compact group is in conflict with the principle of learning invariant representations. However, for certain kinds of regularly encountered image transformations, templates might be the result of evolutionary adaptations. Neurobiological data suggests that there is Gabor-like tuning in the first layer of visual cortex. The optimality of Gabor templates for translations and scales is a possible explanation of this phenomenon.
Non-group transformations
Many interesting transformations of images do not form groups. For instance, transformations of images associated with 3D rotation of the corresponding 3D object do not form a group, because it is impossible to define an inverse transformation (two objects may look the same from one angle but different from another angle). However, approximate invariance is still achievable even for non-group transformations, if the localization condition for templates holds and the transformation can be locally linearized.
As was said in the previous section, for the specific case of translations and scaling the localization condition can be satisfied by use of generic Gabor templates. However, for general (non-group) transformations the localization condition can be satisfied only for a specific class of objects. More specifically, in order to satisfy the condition, templates must be similar to the objects one would like to recognize. For instance, if one would like to build a system to recognize 3D rotated faces, one needs to use other 3D rotated faces as templates. This may explain the existence of specialized modules in the brain, such as the one responsible for face recognition. Even with custom templates, a noise-like encoding of images and templates is necessary for localization. It can be naturally achieved if the non-group transformation is processed on any layer other than the first in a hierarchical recognition architecture.
Hierarchical architectures
The previous section suggests one motivation for hierarchical image recognition architectures. However, they have other benefits as well.
Firstly, hierarchical architectures best accomplish the goal of ‘parsing’ a complex visual scene with many objects consisting of many parts, whose relative position may greatly vary. In this case, different elements of the system must react to different objects and parts. In hierarchical architectures, representations of parts at different levels of embedding hierarchy can be stored at different layers of hierarchy.
Secondly, hierarchical architectures which have invariant representations for parts of objects may facilitate learning of complex compositional concepts. This facilitation may happen through reusing of learned representations of parts that were constructed before in process of learning of other concepts. As a result, sample complexity of learning compositional concepts may be greatly reduced.
Finally, hierarchical architectures have better tolerance to clutter. Clutter problem arises when the target object is in front of a non-uniform background, which functions as a distractor for the visual task. Hierarchical architecture provides signatures for parts of target objects, which do not include parts of background and are not affected by background variations.
In hierarchical architectures, one layer is not necessarily invariant to all transformations that are handled by the hierarchy as a whole. Some transformations may pass through that layer to upper layers, as in the case of non-group transformations described in the previous section. For other transformations, an element of the layer may produce invariant representations only within small range of transformations. For instance, elements of the lower layers in hierarchy have small visual field and thus can handle only a small range of translation. For such transformations, the layer should provide covariant rather than invariant, signatures. The property of covariance can be written as , where is a layer, is the signature of image on that layer, and stands for "distribution of values of the expression for all ".
Relation to biology
M-theory is based on a quantitative theory of the ventral stream of the visual cortex. Understanding how the visual cortex works in object recognition is still a challenging task for neuroscience. Humans and primates are able to memorize and recognize objects after seeing just a couple of examples, unlike state-of-the-art machine vision systems, which usually require a lot of data in order to recognize objects. So far, the use of visual neuroscience in computer vision has been limited to early vision, for deriving stereo algorithms and to justify the use of DoG (derivative-of-Gaussian) filters and, more recently, of Gabor filters. No real attention has been given to biologically plausible features of higher complexity. While mainstream computer vision has always been inspired and challenged by human vision, it seems to have never advanced past the very first stages of processing in the simple cells in V1 and V2. Although some of the systems inspired, to various degrees, by neuroscience have been tested on at least some natural images, neurobiological models of object recognition in cortex have not yet been extended to deal with real-world image databases.
M-theory learning framework employs a novel hypothesis about the main computational function of the ventral stream: the representation of new objects/images in terms of a signature, which is invariant to transformations learned during visual experience. This allows recognition from very few labeled examples – in the limit, just one.
Neuroscience suggests that a natural functional for a neuron to compute is a high-dimensional dot product between an "image patch" and another image patch (called a template)
which is stored in terms of synaptic weights (synapses per neuron). The standard computational model of a neuron is based on a dot product and a threshold. Another important feature of the visual cortex is that it consists of simple and complex cells. This idea was originally proposed by Hubel and Wiesel. M-theory employs this idea. Simple cells compute dot products of an image with transformations of templates ( being the number of simple cells). Complex cells are responsible for pooling these values and computing empirical histograms or statistical moments of them. The following formula for constructing a histogram can be computed by neurons:
where is a smooth version of the step function, is the width of a histogram bin, and is the number of the bin.
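A minimal numerical sketch of this simple-cell/complex-cell computation, assuming a sigmoid as the smooth step function and cyclic translations as the transformation group (the function names, bin parameters, and random data below are illustrative choices, not part of the theory's specification):

```python
import numpy as np

def smooth_step(x, sharpness=10.0):
    # Smooth (sigmoid) approximation of the Heaviside step function.
    return 1.0 / (1.0 + np.exp(-sharpness * x))

def invariant_signature(image, transformed_templates, n_bins=10, bin_width=2.0):
    # "Simple cells": dot products between the image and each stored
    # transformed version of the template.
    dots = transformed_templates @ image  # shape: (num_transformations,)
    # "Complex cells": pool the dot products into a smoothed cumulative histogram.
    signature = np.empty(n_bins)
    for n in range(n_bins):
        threshold = (n - n_bins // 2) * bin_width
        signature[n] = np.mean(smooth_step(dots - threshold))
    return signature

rng = np.random.default_rng(0)
image = rng.standard_normal(64)
template = rng.standard_normal(64)
# Orbit of the template under the (cyclic) translation group.
orbit = np.stack([np.roll(template, s) for s in range(64)])
print(invariant_signature(image, orbit))
```

Because the pooled histogram depends only on the set of dot products taken over the whole orbit, cyclically translating the input image merely permutes those dot products and leaves the signature unchanged; this is the invariance that the complex-cell pooling is meant to provide.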
Applications
Applications to computer vision
M-theory has been applied to unconstrained face recognition in natural photographs. Unlike the DAR (detection, alignment, and recognition) method, which handles clutter by detecting objects and cropping closely around them so that very little background remains, this approach accomplishes detection and alignment implicitly by storing transformations of training images (templates) rather than explicitly detecting and aligning or cropping faces at test time. This system is built according to the principles of a recent theory of invariance in hierarchical networks and can evade the clutter problem, which is generally problematic for feedforward systems.
The resulting end-to-end system achieves a drastic improvement in the state of the art on this task, reaching the same level of performance as the best systems operating on aligned, closely cropped images (with no outside training data). It also performs well on two newer datasets similar to LFW but more difficult: a significantly jittered (misaligned) version of LFW and SUFR-W (for example, the model's accuracy in the LFW "unaligned & no outside data used" category is 87.55±1.41%, compared to the state-of-the-art APEM (adaptive probabilistic elastic matching) result of 81.70±1.78%).
The theory was also applied to a range of recognition tasks: from invariant single-object recognition in clutter to multiclass categorization problems on publicly available data sets (CalTech5, CalTech101, MIT-CBCL) and complex (street) scene understanding tasks that require the recognition of both shape-based and texture-based objects (on the StreetScenes data set). The approach performs well: it is capable of learning from only a few training examples and was shown to outperform several more complex state-of-the-art systems, such as constellation models and the hierarchical SVM-based face-detection system. A key element in the approach is a new set of scale- and position-tolerant feature detectors, which are biologically plausible and agree quantitatively with the tuning properties of cells along the ventral stream of the visual cortex. These features are adaptive to the training set, though a universal feature set, learned from a set of natural images unrelated to any categorization task, likewise achieves good performance.
Applications to speech recognition
This theory can also be extended to the speech recognition domain.
As an example, an extension of the theory of unsupervised learning of invariant visual representations to the auditory domain was proposed and empirically evaluated for voiced speech sound classification. The authors demonstrated that a single-layer, phone-level representation, extracted from base speech features, improves segment classification accuracy and decreases the number of training examples needed, in comparison with standard spectral and cepstral features, for an acoustic classification task on the TIMIT dataset.
References
Machine learning
Computer vision
Speech recognition | M-theory (learning framework) | [
"Engineering"
] | 3,719 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Machine learning",
"Computer vision"
] |
44,632,714 | https://en.wikipedia.org/wiki/Tantalum%E2%80%93tungsten%20alloys | Tantalum–tungsten alloys belong to the refractory metals group, which maintains useful physical and chemical properties even at high temperatures. Tantalum–tungsten alloys are characterized by their high melting point and tensile strength. The properties of the final alloy are a combination of the properties of the two elements: tungsten, the element with the highest melting point in the periodic table, and tantalum, which has high corrosion resistance.
The tantalum–tungsten alloys typically vary in their percentage of tungsten. Some common variants are:
(Ta – 2.5% W) → also called 'tantaloy 63 metal.' The percentage of tungsten is about 2 to 3%, and the alloy also includes 0.5% niobium. It has good corrosion resistance and performs well at high temperatures. An example application is piping in the chemical industry.
(Ta - 7.5% W) → also called 'tantaloy 61 metal,' has between 7 and 8% tungsten. It differs from the other alloys in that it exhibits a high modulus of resilience while maintaining its refractory properties.
(Ta - 10% W) → also called 'tantaloy 60 metal,' contains 9 to 11% tungsten. This alloy is less ductile than the other alloys and exhibits less plasticity. Applications include high-temperature, high-corrosion environments such as aerospace components, furnaces, and piping in nuclear plants.
Mechanical properties
Tantalum–tungsten alloys have high corrosion resistance and refractory properties. The crystal structure of the material is body-centered cubic, forming a substitutional solid solution with tungsten atoms. The alloy also has a high melting point and can reach a high elastic modulus and high tensile strength.
Phase diagram
The equilibrium phase diagram of the alloy formed between the two components, tantalum and tungsten, is a binary diagram in which the two components are completely soluble in each other. The diagram shows the melting temperatures of the two elements, along with two curves representing the solidus and the liquidus.
References
Metallurgy
Alloys, Tantalum
Alloys, Tungsten
Refractory metals | Tantalum–tungsten alloys | [
"Chemistry",
"Materials_science",
"Engineering"
] | 453 | [
"Metallurgy",
"Materials science",
"Refractory metals",
"Alloys",
"nan",
"Tungsten alloys"
] |
44,636,075 | https://en.wikipedia.org/wiki/Monowheel%20tractor | A monowheel tractor or monowheel-drive tractor is a light transport and agricultural vehicle that is driven and controlled by an engine and steering mechanism mounted on a single large wheel, with the load-carrying body trailing behind. Despite the name, they are tricycles.
Development
Monowheel tractors developed in two periods, both during times of rapid upheaval after warfare. Both types had quite different circumstances and goals. Small wheel tractors appeared after World War I, during a time of new opportunity. Large-wheel tractors appeared after World War II, during a period of austerity.
Small wheel tractors
The first monowheel tractors appeared in the 1920s, as a result of technical developments in small petrol engines. These had been driven by improving engine technology, particularly for motorbikes. Such engines now represented an affordable and portable power source. An entire powertrain could be constructed as a single monobloc unit, carried on a single wheel, and this mounted onto a trailer through a large swivel bearing. The engine and its drivetrain represented a relatively high technology machine for the period, although its trailer could be much more crude. The engine units were made by the new light engineering works of the time, such as R A Lister, and custom-made trailers could be produced for a wide range of tasks by less sophisticated workshops, down to village blacksmiths.
The completed vehicles had small wheels and little suspension. This limited their use to smooth floors, such as factories and railway stations, rather than muddy farm tracks or even the roads of the period. In most cases they were replacing hand barrows. Their advantage was that they were faster than a loaded barrow (limited to around the same speed as walking with an empty barrow) and required only one operator, no matter what the load. Loads of up to 2 tons could be carried.
Some vehicles were driven by a driver riding or standing on board with direct tiller steering; others were pedestrian-controlled, with the operator walking alongside. All were highly manoeuvrable, the full swivel of the self-contained engine unit allowing them to turn within their own length.
Examples include:
Lister Auto-Truck
Reliance of Heckmondwike, from 1935, originally the Redshaw Lister Woollen Machinery Co.
Salsbury Motors 'Turret Truck' of 1946. This used a cylindrical motor enclosure that could be rotated completely to allow reverse. The 6hp motor and continuously variable transmission were based on Salsbury's pre-war scooter designs.
Large wheel tractors
After World War II, tractors were a well-developed and widespread piece of agricultural machinery, although they were still expensive. Some items, such as their large rubber tyres, were particularly difficult as they relied on imported raw materials. Britain, for some years after the war, was in a period of austerity and currency controls applied to overseas purchases. An obvious simplification was to take the technology of the tractor, but use only a single wheel and a smaller engine. Many of the large monowheel tractor's tasks would be in either replacing horse carts, or else as a cheaper substitute for more conventional tractors.
S. E. Opperman of Boreham Wood did this in 1945 with their Opperman Motocart. This used a tricycle cart chassis of welded sheet steel, drawn by a tractor wheel mounted on a single small-diameter kingpin above it. The entire powertrain was carried on the wheel hub, including a JAP or Douglas single-cylinder petrol engine. Although there was still no suspension other than the large front tractor tyre, the Motocart's wheel steering and large wheels allowed a greater speed than other carts, up to (for legal reasons). Many were road registered, although not provided with full lighting. Using the full range of low gears, a load of up to tons was carried up steep hills on trial.
A similar, although smaller, vehicle was patented in the US. This was intended as a manoeuvrable light tipper for construction sites.
The monowheel concept continued into the 1960s, although now aimed more at "colonial" overseas use. In 1966 the British government through the NRDC was working on a design developed for the National Institute of Agricultural Engineers.
See also
Two-wheel tractor
References
Tractors | Monowheel tractor | [
"Engineering"
] | 861 | [
"Engineering vehicles",
"Tractors"
] |
44,637,222 | https://en.wikipedia.org/wiki/Specialist%20Printing%20Equipment%20and%20Materials%20%28Offences%29%20Act%202015 | The Specialist Printing Equipment and Materials (Offences) Act 2015 is an Act of the Parliament of the United Kingdom. The Bill makes a specific offence of knowingly supplying printing equipment for the production of fake or fraudulent identity documents. It was introduced as a private member's bill by David Amess and Baroness Berridge.
Provisions
The provisions of the Act include:
Making it an offence to supply specialist printing equipment in the knowledge that it is to be used for fraudulent or criminal purposes.
A person found guilty of this is liable to a prison term of up to 10 years, a fine, or both.
Defining 'specialist printing equipment' as any equipment which is designed or adapted for, or otherwise capable of being used for, the making of relevant documents, or any material or article that is used in the making of such documents.
Defining 'relevant documents' as anything that is or purports to be:
(a) an identity document (defined as any document that gives a person the right to reside or remain within the UK, a registration card [as defined in the Immigration Act 1971], a United Kingdom passport [as defined in the Immigration Act 1971], a passport from another country or issued by another international entity or a document that can be used instead of a passport)
(b) a travel document (defined as a licence to drive a motor vehicle granted under the Road Traffic Act 1988 or the Road Traffic (Northern Ireland) Order 1981, a driving licence from another country or issued by another international entity, a ticket or document authorising travel on public passenger transport services, a permit authorising a concession when travelling on public transport, a badge of a form prescribed under the Chronically Sick and Disabled Persons Act 1970)
(c) an entry document (defined as a security pass or other document used as such or a ticket, or other document used in that capacity, to an event)
(d) a document used for verifying the holder's age or national insurance number
(e) a currency note or protected coin, as defined in the Forgery and Counterfeiting Act 1981
(f) a debit or credit card
(g) any other instrument to which Section 5 of the Forgery and Counterfeiting Act 1981 applies
Ensuring the Act applies to those in the employ or public service of the Crown as it does to other individuals.
References
External links
2015 in British politics
Printing devices
United Kingdom Acts of Parliament 2015
Identity documents of the United Kingdom | Specialist Printing Equipment and Materials (Offences) Act 2015 | [
"Physics",
"Technology"
] | 492 | [
"Physical systems",
"Machines",
"Printing devices"
] |
36,053,297 | https://en.wikipedia.org/wiki/2012%20National%20Reconnaissance%20Office%20space%20telescope%20donation%20to%20NASA | The 2012 National Reconnaissance Office space telescope donation to NASA was the declassification and donation to NASA of two identical space telescopes by the United States National Reconnaissance Office. The donation has been described by scientists as a substantial improvement over NASA's current Hubble Space Telescope. Although the telescopes themselves were given to NASA at no cost, the space agency must still pay for the cost of instruments and electronics for the telescopes, as well as the satellites to house them and the launch of the telescopes. On February 17, 2016, the Nancy Grace Roman Space Telescope (then known as the Wide Field Infrared Survey Telescope or WFIRST) was formally designated as a mission by NASA, predicated on using one of the space telescopes.
Background
While the Hubble Space Telescope of the National Aeronautics and Space Administration (NASA) has collected a large amount of astronomical data, has outlived all expectations, and has been described as one of the space agency's most successful missions, the facility will eventually succumb to the extreme environment of space. In addition, with the James Webb Space Telescope costing at least US$9 billion, the agency's astrophysics budget is strained. As a result, NASA's other astrophysics missions have been delayed until funding becomes available.
In January 2011 the National Reconnaissance Office (NRO) revealed to NASA the existence of two unneeded telescope optical systems, originally built to be used in reconnaissance satellites, and available to the civilian agency. NASA accepted the offer in August 2011 and announced the donation on 4 June 2012. The instruments were constructed between the late 1990s and early 2000s, reportedly for NRO's unsuccessful Future Imagery Architecture program; in addition to the two completed telescopes, a primary mirror and other parts for a third also exist.
While NRO considers them to be obsolete, the telescopes are nevertheless new and unused. All charge-coupled devices (CCDs) and electronics have been removed, however, and NASA must add them at its own expense. When the telescopes' specifications were presented to scientists, large portions were censored due to national security. An unnamed space analyst stated that the instruments may be a part of the KH-11 Kennen line of satellites which have been launched since 1976, but which have now been largely superseded by newer telescopes with wider fields of view than the KH-11. The analyst stated, however, that the telescopes have "state-of-the-art optics" despite their obsolescence for reconnaissance purposes.
Potential uses
The early consensus for the usage of the telescopes was to follow the NASA Astronomy and Astrophysics Decadal Survey of 2010, which lists the Wide Field Infrared Survey Telescope (now renamed the Nancy Grace Roman Space Telescope) as its highest priority. Observing in the infrared section of the electromagnetic spectrum, the Nancy Grace Roman Space Telescope will be used to study the role of dark energy in the Universe, as well as to directly image Jupiter-sized extrasolar planets.
The NRO telescope design has several features which make it useful for Roman/WFIRST and superior to the Hubble. The NRO instrument's primary mirror is the same size and quality as the Hubble's. With double the mirror diameter of the original WFIRST design, it allows for up to twice the image resolution and gathers four times the light. Unlike civilian telescopes, the NRO instrument also has a steerable secondary mirror for additional precision. The telescope has a much wider field of view than Hubble due to its shorter focal length, allowing it to observe about 100 times the area at any given time as Hubble can. This has led to the donated telescopes' characterization as "Stubby Hubbles". Their obstructed design, however, may make imaging extrasolar planets more challenging, and would be unsuitable for imaging the most distant galaxies at its longest infrared wavelengths, which requires cooling beyond the original NRO design temperature range.
Whether using the NRO telescopes would save NASA money is unclear. While each is worth at least $250 million, their larger size compared to the proposed WFIRST design would require a larger rocket and camera. According to one NASA estimate using an NRO telescope would raise the cost of WFIRST by $250 million above its $1.5 billion budget. Another estimate states that NASA would save up to $250 million. The agency's deputy acting director for astrophysics Michael Moore states that using both telescopes may ultimately save NASA $1.5 billion. David Spergel estimated that using an NRO telescope would add about $100 million to WFIRST's cost, but would prefer to spend another $200 million for a coronagraph to improve its direct-imaging capability.
Due to the budgetary constraints arising from the continued construction of the James Webb Space Telescope, NASA has stated that Roman/WFIRST may not be launched until 2024 at the earliest, despite early speculation that by using an NRO telescope the mission might launch by roughly 2020, at about the same time as the European Space Agency's Euclid. In addition, the availability of a telescope is believed to increase the probability that the mission will be launched at all.
While the first telescope is now in use as the basis for Roman/WFIRST, NASA currently does not have plans or funding for the usage of the second. Astronomers had studied possible additional uses, and NASA considered dozens of proposals; the only prohibition is Earth observation, a condition of the NRO donation. Possibilities include observing Earth's aurora and ionosphere, or asteroids and other faint objects within the Solar System. NASA has also suggested that the telescope could be sent to Mars, photographing the surface with a resolution about four times finer than the current Mars Reconnaissance Orbiter's HiRISE instrument. From Martian orbit the telescope could also view the outer Solar System and the asteroid belt.
See also
Multiple Mirror Telescope - an astronomical telescope in Arizona, built with donated NRO mirrors
References
Space telescopes
National Reconnaissance Office
NASA
2012 in the United States | 2012 National Reconnaissance Office space telescope donation to NASA | [
"Astronomy"
] | 1,210 | [
"Space telescopes"
] |
36,053,570 | https://en.wikipedia.org/wiki/Phase-space%20formulation | The phase-space formulation is a formulation of quantum mechanics that places the position and momentum variables on equal footing in phase space. The two key features of the phase-space formulation are that the quantum state is described by a quasiprobability distribution (instead of a wave function, state vector, or density matrix) and operator multiplication is replaced by a star product.
The theory was fully developed by Hilbrand Groenewold in 1946 in his PhD thesis, and independently by Joe Moyal, each building on earlier ideas by Hermann Weyl and Eugene Wigner.
In contrast to the phase-space formulation, the Schrödinger picture uses the position or momentum representations (see also position and momentum space).
The chief advantage of the phase-space formulation is that it makes quantum mechanics appear as similar to Hamiltonian mechanics as possible by avoiding the operator formalism, thereby "'freeing' the quantization of the 'burden' of the Hilbert space". This formulation is statistical in nature and offers logical connections between quantum mechanics and classical statistical mechanics, enabling a natural comparison between the two (see classical limit). Quantum mechanics in phase space is often favored in certain quantum optics applications (see optical phase space), or in the study of decoherence and a range of specialized technical problems, though otherwise the formalism is less commonly employed in practical situations.
The conceptual ideas underlying the development of quantum mechanics in phase space have branched into mathematical offshoots such as Kontsevich's deformation-quantization (see Kontsevich quantization formula) and noncommutative geometry.
Phase-space distribution
The phase-space distribution of a quantum state is a quasiprobability distribution. In the phase-space formulation, the phase-space distribution may be treated as the fundamental, primitive description of the quantum system, without any reference to wave functions or density matrices.
There are several different ways to represent the distribution, all interrelated. The most noteworthy is the Wigner representation, W, discovered first. Other representations (in approximately descending order of prevalence in the literature) include the Glauber–Sudarshan P, Husimi Q, Kirkwood–Rihaczek, Mehta, Rivier, and Born–Jordan representations. These alternatives are most useful when the Hamiltonian takes a particular form, such as normal order for the Glauber–Sudarshan P-representation. Since the Wigner representation is the most common, this article will usually stick to it, unless otherwise specified.
The phase-space distribution possesses properties akin to the probability density in a 2n-dimensional phase space. For example, it is real-valued, unlike the generally complex-valued wave function. The probability of lying within a given position interval can be understood by integrating the Wigner function over all momenta and over that position interval.
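In the most common conventions (a hedged reconstruction, since the original formulas are not preserved in this text), the Wigner function of a pure state ψ and its position marginal read

\[
W(x,p) = \frac{1}{\pi\hbar}\int_{-\infty}^{\infty}\psi^{*}(x+y)\,\psi(x-y)\,e^{2ipy/\hbar}\,dy,
\qquad
\int_{-\infty}^{\infty} W(x,p)\,dp = |\psi(x)|^{2},
\]

so that integrating W over all momenta and over a position interval [a, b] gives the probability of finding the system in that interval.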
An operator representing an observable may be mapped to phase space through the Wigner transform. Conversely, this operator may be recovered by the Weyl transform.
The expectation value of the observable with respect to the phase-space distribution is obtained as the phase-space average of the corresponding phase-space function weighted by the distribution.
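Assuming the standard Wigner–Weyl conventions (again a hedged reconstruction of the omitted formula), with A(x, p) the Wigner transform of the operator \(\hat{A}\), this average is

\[
\langle \hat{A} \rangle = \int dx\, dp \; W(x,p)\, A(x,p).
\]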
A point of caution, however: despite the similarity in appearance, the Wigner function is not a genuine joint probability distribution, because regions under it do not represent mutually exclusive states, as required in the third axiom of probability theory. Moreover, it can, in general, take negative values even for pure states, with the unique exception of (optionally squeezed) coherent states, in violation of the first axiom.
Regions of such negative value are provable to be "small": they cannot extend to compact regions larger than a few ħ, and hence disappear in the classical limit. They are shielded by the uncertainty principle, which does not allow precise localization within phase-space regions smaller than ħ, and thus renders such "negative probabilities" less paradoxical. If the left side of the equation is to be interpreted as an expectation value in the Hilbert space with respect to an operator, then in the context of quantum optics this equation is known as the optical equivalence theorem. (For details on the properties and interpretation of the Wigner function, see its main article.)
An alternative phase-space approach to quantum mechanics seeks to define a wave function (not just a quasiprobability density) on phase space, typically by means of the Segal–Bargmann transform. To be compatible with the uncertainty principle, the phase-space wave function cannot be an arbitrary function, or else it could be localized into an arbitrarily small region of phase space. Rather, the Segal–Bargmann transform is a holomorphic function of a complex combination of position and momentum. There is a quasiprobability density associated to the phase-space wave function; it is the Husimi Q representation of the position wave function.
Star product
The fundamental noncommutative binary operator in the phase-space formulation that replaces the standard operator multiplication is the star product, represented by the symbol ★. Each representation of the phase-space distribution has a different characteristic star product. For concreteness, we restrict this discussion to the star product relevant to the Wigner–Weyl representation.
For notational convenience, we introduce the notion of left and right derivatives. For a pair of functions f and g, the left and right derivatives are defined as
The differential definition of the star product is
where the argument of the exponential function can be interpreted as a power series.
Additional differential relations allow this to be written in terms of a change in the arguments of f and g:
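For orientation, the Wigner–Weyl (Moyal) star product can be written, in widely used conventions that may differ from the ones originally intended here, with left/right-acting derivatives and equivalently as a Bopp shift of the arguments:

\[
f \star g = f \, \exp\!\left[\frac{i\hbar}{2}\left(\overleftarrow{\partial}_x \overrightarrow{\partial}_p - \overleftarrow{\partial}_p \overrightarrow{\partial}_x\right)\right] g ,
\qquad
(f \star g)(x,p) = f\!\left(x + \tfrac{i\hbar}{2}\partial_p,\; p - \tfrac{i\hbar}{2}\partial_x\right) g(x,p),
\]

where the arrows indicate whether a derivative acts on f (to the left) or on g (to the right), and the shifted derivatives in the second form act on g.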
It is also possible to define the ★-product in a convolution integral form, essentially through the Fourier transform.
(Thus, for example, Gaussians compose hyperbolically under the star product.)
The energy eigenstate distributions are known as stargenstates, ★-genstates, stargenfunctions, or ★-genfunctions, and the associated energies are known as stargenvalues or ★-genvalues. These are solved, analogously to the time-independent Schrödinger equation, by the ★-genvalue equation,
where the Hamiltonian is a plain phase-space function, most often identical to the classical Hamiltonian.
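In standard notation (a hedged reconstruction of the omitted formula), with H(x, p) the phase-space Hamiltonian and W a static Wigner function, the ★-genvalue equation reads

\[
H(x,p) \star W(x,p) = E \, W(x,p).
\]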
Time evolution
The time evolution of the phase space distribution is given by a quantum modification of Liouville flow. This formula results from applying the Wigner transformation to the density matrix version of the quantum Liouville equation,
the von Neumann equation.
In any representation of the phase-space distribution with its associated star product, the evolution is generated by the corresponding bracket; for the Wigner function in particular, it is generated by the Moyal bracket {{ , }}, the Wigner transform of the quantum commutator, while { , } denotes the classical Poisson bracket.
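Explicitly, for the Wigner function the evolution equation takes the Moyal form (standard conventions assumed; a hedged reconstruction of the omitted formula):

\[
\frac{\partial W}{\partial t} = \{\{H, W\}\} = \frac{H \star W - W \star H}{i\hbar}.
\]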
This yields a concise illustration of the correspondence principle: this equation manifestly reduces to the classical Liouville equation in the limit ħ → 0. In the quantum extension of the flow, however, the density of points in phase space is not conserved; the probability fluid appears "diffusive" and compressible.
The concept of quantum trajectory is therefore a delicate issue here. See the movie for the Morse potential, below, to appreciate the nonlocality of quantum phase flow.
N.B. Given the restrictions placed by the uncertainty principle on localization, Niels Bohr vigorously denied the physical existence of such trajectories on the microscopic scale. By means of formal phase-space trajectories, the time evolution problem of the Wigner function can be rigorously solved using the path-integral method and the method of quantum characteristics, although there are severe practical obstacles in both cases.
Examples
Simple harmonic oscillator
The Hamiltonian for the simple harmonic oscillator in one spatial dimension in the Wigner–Weyl representation is
The ★-genvalue equation for the static Wigner function then reads
Consider, first, the imaginary part of the ★-genvalue equation,
This implies that one may write the ★-genstates as functions of a single argument:
With this change of variables, it is possible to write the real part of the ★-genvalue equation in the form of a modified Laguerre equation (not Hermite's equation!), the solution of which involves the Laguerre polynomials as
introduced by Groenewold, with associated ★-genvalues
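For reference, Groenewold's result for the oscillator Hamiltonian H = p²/2m + mω²x²/2 is usually quoted (up to normalization conventions, which are an assumption here) as

\[
W_n(x,p) = \frac{(-1)^n}{\pi\hbar}\, e^{-2H/\hbar\omega}\, L_n\!\left(\frac{4H}{\hbar\omega}\right),
\qquad
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \quad n = 0, 1, 2, \ldots,
\]

where L_n is the nth Laguerre polynomial.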
For the harmonic oscillator, the time evolution of an arbitrary Wigner distribution is simple. An initial distribution evolves under the above evolution equation, driven by the oscillator Hamiltonian, by simply rotating rigidly in phase space.
Typically, a "bump" (or coherent state) of energy can represent a macroscopic quantity and appear like a classical object rotating uniformly in phase space, a plain mechanical oscillator (see the animated figures). Integrating over all phases (starting positions at t = 0) of such objects, a continuous "palisade", yields a time-independent configuration similar to the above static ★-genstates , an intuitive visualization of the classical limit for large-action systems.
The eigenfunctions can also be characterized as the rotationally symmetric (and thus time-invariant) pure states in phase space.
Free particle angular momentum
Suppose a particle is initially in a minimally uncertain Gaussian state, with the expectation values of position and momentum both centered at the origin in phase space. The Wigner function for such a state propagating freely is
where α is a parameter describing the initial width of the Gaussian.
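Whatever the precise initial Gaussian, free evolution in the Wigner picture is purely classical, because the Moyal and Liouville flows coincide for Hamiltonians at most quadratic in x and p; a sketch of the general statement (not the article's specific parameterization) is

\[
W(x, p, t) = W_0\!\left(x - \frac{p\,t}{m},\; p\right),
\]

so each phase-space point simply shears along x at a rate proportional to its momentum, which is the origin of the growing position-momentum correlation described next.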
Initially, the position and momenta are uncorrelated. Thus, in 3 dimensions, we expect the position and momentum vectors to be twice as likely to be perpendicular to each other as parallel.
However, the position and momentum become increasingly correlated as the state evolves, because portions of the distribution farther from the origin in position require a larger momentum to be reached: asymptotically, the distribution becomes concentrated near x = pt/m, so the position and momentum vectors become nearly parallel.
(This relative "squeezing" reflects the spreading of the free wave packet in coordinate space.)
Indeed, it is possible to show that the kinetic energy of the particle becomes asymptotically radial only, in agreement with the standard quantum-mechanical notion of the ground state's nonzero angular momentum specifying orientation independence.
Morse potential
The Morse potential is used to approximate the vibrational structure of a diatomic molecule.
Quantum tunneling
Tunneling is a hallmark quantum effect in which a quantum particle, not having sufficient energy to pass over a potential barrier, still goes through it. This effect does not exist in classical mechanics.
Quartic potential
Schrödinger cat state
References
Hamiltonian mechanics
Symplectic geometry
Mathematical quantization
Foundational quantum physics
Articles containing video clips | Phase-space formulation | [
"Physics",
"Mathematics"
] | 2,199 | [
"Theoretical physics",
"Foundational quantum physics",
"Quantum mechanics",
"Classical mechanics",
"Hamiltonian mechanics",
"Mathematical quantization",
"Dynamical systems"
] |
36,054,202 | https://en.wikipedia.org/wiki/Two-body%20Dirac%20equations | In quantum field theory, and in the significant subfields of quantum electrodynamics (QED) and quantum chromodynamics (QCD), the two-body Dirac equations (TBDE) of constraint dynamics provide a three-dimensional yet manifestly covariant reformulation of the Bethe–Salpeter equation for two spin-1/2 particles. Such a reformulation is necessary since without it, as shown by Nakanishi, the Bethe–Salpeter equation possesses negative-norm solutions arising from the presence of an essentially relativistic degree of freedom, the relative time. These "ghost" states have spoiled the naive interpretation of the Bethe–Salpeter equation as a quantum mechanical wave equation. The two-body Dirac equations of constraint dynamics rectify this flaw. The forms of these equations can not only be derived from quantum field theory, but can also be derived purely in the context of Dirac's constraint dynamics, relativistic mechanics, and quantum mechanics. Their structure, unlike the more familiar two-body Dirac equation of Breit, which is a single equation, is that of two simultaneous quantum relativistic wave equations. A single two-body Dirac equation similar to the Breit equation can be derived from the TBDE. Unlike the Breit equation, it is manifestly covariant and free from the types of singularities that prevent a strictly nonperturbative treatment of the Breit equation.
In applications of the TBDE to QED, the two particles interact by way of four-vector potentials derived from the field theoretic electromagnetic interactions between the two particles. In applications to QCD, the two particles interact by way of four-vector potentials and Lorentz invariant scalar interactions, derived in part from the field theoretic chromomagnetic interactions between the quarks and in part by phenomenological considerations. As with the Breit equation a sixteen-component spinor Ψ is used.
Equations
For QED, each equation has the same structure as the ordinary one-body Dirac equation in the presence of an external electromagnetic field, given by the 4-potential . For QCD, each equation has the same structure as the ordinary one-body Dirac equation in the presence of an external field similar to the electromagnetic field and an additional external field given in terms of a Lorentz invariant scalar . In natural units, the two-body equations have the following form.
where, in coordinate space, p is the 4-momentum, related to the 4-gradient by (the metric used here is )
and γμ are the gamma matrices. The two-body Dirac equations (TBDE) have the property that if one of the masses becomes very large, then the 16-component Dirac equation reduces to the 4-component one-body Dirac equation for particle one in an external potential.
In SI units:
where c is the speed of light and
Natural units will be used below. A tilde symbol is used over the two sets of potentials to indicate that they may have additional gamma matrix dependencies not present in the one-body Dirac equation. Any coupling constants such as the electron charge are embodied in the vector potentials.
Constraint dynamics and the TBDE
Constraint dynamics applied to the TBDE requires a particular form of mathematical consistency: the two Dirac operators must commute with each other. This is plausible if one views the two equations as two compatible constraints on the wave function. (See the discussion below on constraint dynamics.) If the two operators did not commute, (as, e.g., with the coordinate and momentum operators ) then the constraints would not be compatible (one could not e.g., have a wave function that satisfied both and ). This mathematical consistency or compatibility leads to three important properties of the TBDE. The first is a condition that eliminates the dependence on the relative time in the center of momentum (c.m.) frame defined by . (The variable is the total energy in the c.m. frame.) Stated another way, the relative time is eliminated in a covariant way. In particular, for the two operators to commute, the scalar and four-vector potentials can depend on the relative coordinate only through its component orthogonal to in which
This implies that in the c.m. frame , which has zero time component.
Secondly, the mathematical consistency condition also eliminates the relative energy in the c.m. frame. It does this by imposing on each Dirac operator a structure such that in a particular combination they lead to this interaction independent form, eliminating in a covariant way the relative energy.
In this expression, the relative momentum has the form for equal masses. In the c.m. frame, the time component of the relative momentum, that is, the relative energy, is thus eliminated.
A third consequence of the mathematical consistency is that each of the world scalar and four vector potentials has a term with a fixed dependence on and in addition to the gamma matrix independent forms of and which appear in the ordinary one-body Dirac equation for scalar and vector potentials.
These extra terms correspond to additional recoil spin-dependence not present in the one-body Dirac equation and vanish when one of the particles becomes very heavy (the so-called static limit).
More on constraint dynamics: generalized mass shell constraints
Constraint dynamics arose from the work of Dirac and Bergmann. This section shows how the elimination of relative time and energy takes place in the c.m. system for the simple system of two relativistic spinless particles. Constraint dynamics was first applied to the classical relativistic two particle system by Todorov, Kalb and Van Alstine, Komar, and Droz–Vincent. With constraint dynamics, these authors found a consistent and covariant approach to relativistic canonical Hamiltonian mechanics that also evades the Currie–Jordan–Sudarshan "No Interaction" theorem. That theorem states that without fields, one cannot have a relativistic Hamiltonian dynamics. Thus, the same covariant three-dimensional approach which allows the quantized version of constraint dynamics to remove quantum ghosts simultaneously circumvents at the classical level the C.J.S. theorem. Consider a constraint on the otherwise independent coordinate and momentum four vectors, written in the form . The symbol is called a weak equality and implies that the constraint is to be imposed only after any needed Poisson brackets are performed. In the presence of such constraints, the total
Hamiltonian is obtained from the Lagrangian by adding to the Legendre Hamiltonian the sum of the constraints times an appropriate set of Lagrange multipliers .
This total Hamiltonian is traditionally called the Dirac Hamiltonian. Constraints arise naturally from parameter invariant actions of the form
In the case of four vector and Lorentz scalar interactions for a single particle the Lagrangian is
The canonical momentum is
and by squaring leads to the generalized mass shell condition or generalized mass shell constraint
Since, in this case, the Legendre Hamiltonian vanishes
the Dirac Hamiltonian is simply the generalized mass constraint (with no interactions it would simply be the ordinary mass shell constraint)
One then postulates that for two bodies the Dirac Hamiltonian is the sum of two such mass shell constraints,
that is
and that each constraint be constant in the proper time associated with
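Schematically, and in one common sign convention for a metric of signature (+, −, −, −) (both the signs and the symbol Φ_i for the two-body interaction are assumptions here, since the original formulas are not preserved in this text), the two generalized mass-shell constraints and the resulting Dirac Hamiltonian take the form

\[
\mathcal{H}_i = p_i^2 - m_i^2 - \Phi_i(x_\perp) \approx 0, \quad i = 1, 2,
\qquad
\mathcal{H} = \lambda_1 \mathcal{H}_1 + \lambda_2 \mathcal{H}_2 ,
\]

with λ₁ and λ₂ the Lagrange multipliers mentioned above.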
Here the weak equality means that the Poisson bracket could result in terms proportional to one of the constraints, the classical Poisson brackets for the relativistic two-body system being defined by
To see the consequences of having each constraint be a constant of the motion, take, for example
Since and and one has
The simplest solution to this is
which leads to (note the equality in this case is not a weak one in that no constraint need be imposed after the Poisson bracket is worked out)
(see Todorov, and Wong and Crater ) with the same defined above.
Quantization
In addition to replacing classical dynamical variables by their quantum counterparts, quantization of the constraint mechanics takes place by replacing the constraint on the dynamical variables with a restriction on the wave function
The first set of equations for i = 1, 2 play the role for spinless particles that the two Dirac equations play for spin-one-half particles. The classical Poisson brackets are replaced by commutators
Thus
and we see in this case that the constraint formalism leads to the vanishing commutator of the wave operators for the two particles. This is the analogue of the claim stated earlier that the two Dirac operators commute with one another.
Covariant elimination of the relative energy
The vanishing of the above commutator ensures that the dynamics is independent of the relative time in the c.m. frame. In order to covariantly eliminate the relative energy, introduce the relative momentum defined by
The above definition of the relative momentum forces the orthogonality of the total momentum and the relative momentum,
which follows from taking the scalar product of either equation with .
From Eqs.() and (), this relative momentum can be written in terms of and as
where
are the projections of the momenta and along the direction of the total momentum . Subtracting the two constraints and , gives
Thus on these states
The equation describes both the c.m. motion and the internal relative motion. To characterize the former motion, observe that since the potential depends only on the difference of the two coordinates
(This does not require that since the .) Thus, the total momentum is a constant of motion and is an eigenstate state characterized by a total momentum . In the c.m. system with the invariant center of momentum (c.m.) energy. Thus
and so is also an eigenstate of c.m. energy operators for each of the two particles,
The relative momentum then satisfies
so that
The above set of equations follow from the constraints and the definition of the relative momenta given in Eqs.() and (). If instead one chooses to define (for a more general choice see Horwitz),
independent of the wave function, then
and it is straightforward to show that the constraint Eq.() leads directly to:
in place of . This conforms with the earlier claim on the vanishing of the relative energy in the c.m. frame made in conjunction with the TBDE. In the second choice the c.m. value of the relative energy is not defined as zero but comes from the original generalized mass shell constraints. The above equations for the relative and constituent four-momentum are the relativistic analogues of the non-relativistic equations
Covariant eigenvalue equation for internal motion
Using Eqs.(),(),(), one can write in terms of and
where
Eq.() contains both the total momentum [through the ] and the relative momentum . Using Eq. (), one obtains the eigenvalue equation
so that becomes the standard triangle function displaying exact relativistic two-body kinematics:
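The triangle function referred to here is, in its standard form (a hedged reconstruction of the omitted expression),

\[
b^2(w) = \frac{1}{4w^2}\left[w^4 - 2w^2\,(m_1^2 + m_2^2) + (m_1^2 - m_2^2)^2\right],
\]

the square of the on-shell relative momentum of two free particles of masses m₁ and m₂ with invariant total c.m. energy w.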
With the above constraint Eqs.() on then where . This allows writing Eq. () in the form of an eigenvalue equation
having a structure very similar to that of the ordinary three-dimensional nonrelativistic Schrödinger equation. It is a manifestly covariant equation, but at the same time its three-dimensional structure is evident. The four-vectors and have only three independent components since
The similarity to the three-dimensional structure of the nonrelativistic Schrödinger equation can be made more explicit by writing the equation in the c.m. frame in which
Comparison of the resultant form
with the time independent Schrödinger equation
makes this similarity explicit.
The two-body relativistic Klein–Gordon equations
A plausible structure for the quasipotential can be found by observing that the one-body Klein–Gordon equation takes the form when one introduces a scalar interaction and timelike vector interaction via and . In the two-body case, separate classical and quantum field theory
arguments show that when one includes world scalar and vector interactions then depends on two underlying invariant functions and through the two-body Klein–Gordon-like potential form with the same general structure, that is
Those field theories further yield the c.m. energy dependent forms
and
ones that Todorov introduced as the relativistic reduced mass and effective particle energy for a two-body system. Similar to what happens in the nonrelativistic two-body problem, in the relativistic case we have the motion of this effective particle taking place as if it were in an external field (here generated by and ). The two kinematical variables and are related to one another by the Einstein condition
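Todorov's relativistic reduced mass and effective particle energy, together with the Einstein condition relating them, are usually written as (a hedged reconstruction of the omitted formulas)

\[
m_w = \frac{m_1 m_2}{w}, \qquad
\varepsilon_w = \frac{w^2 - m_1^2 - m_2^2}{2w}, \qquad
\varepsilon_w^2 - m_w^2 = b^2(w),
\]

which reduce to the nonrelativistic reduced mass and c.m. energy in the appropriate limit.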
If one introduces the four-vectors, including a vector interaction
and scalar interaction , then the following classical minimal constraint form
reproduces
Notice, that the interaction in this "reduced particle" constraint depends on two invariant scalars, and , one guiding the time-like vector interaction and one the scalar interaction.
Is there a set of two-body Klein–Gordon equations analogous to the two-body Dirac equations? The classical relativistic constraints analogous to the quantum two-body Dirac equations (discussed in the introduction) and that have the same structure as the above Klein–Gordon one-body form are
Defining structures that display time-like vector and scalar interactions
gives
Imposing
and using the constraint , reproduces Eqs.() provided
The corresponding Klein–Gordon equations are
and each, due to the constraint is equivalent to
Hyperbolic versus external field form of the two-body Dirac equations
For the two body system there are numerous covariant forms of interaction. The simplest way of looking at these is from the point of view of the gamma matrix structures of the corresponding interaction vertices of the single particle exchange diagrams. For scalar, pseudoscalar, vector, pseudovector, and tensor exchanges those matrix structures are respectively
in which
The form of the two-body Dirac equations which most readily incorporates each or any number of these interactions in concert is the so-called hyperbolic form of the TBDE. For combined scalar and vector interactions those forms ultimately reduce to the ones given in the first set of equations of this article. Those equations are called the external field-like forms because their appearance is individually the same as that of the usual one-body Dirac equation in the presence of external vector and scalar fields.
The most general hyperbolic form for compatible TBDE is
where represents any invariant interaction singly or in combination. It has a matrix structure in addition to coordinate dependence. Depending on what that matrix structure is one has either scalar, pseudoscalar, vector, pseudovector, or tensor interactions. The operators and are auxiliary constraints satisfying
in which the are the free Dirac operators
This, in turn leads to the two compatibility conditions
and
provided that . These compatibility conditions do not restrict the gamma matrix structure of . That matrix structure is determined by the type of vertex-vertex structure incorporated in the interaction. For the two types of invariant interactions emphasized in this article, they are
For general independent scalar and vector interactions
The vector interaction specified by the above matrix structure for an electromagnetic-like interaction would correspond to the Feynman gauge.
If one inserts Eq.() into () and brings the free Dirac operator () to the right of the matrix hyperbolic functions, using standard gamma matrix commutators and anticommutators, one arrives at
in which
The (covariant) structure of these equations is analogous to that of a Dirac equation for each of the two particles, with and playing the roles that and do in the single particle Dirac equation
Over and above the usual kinetic part and time-like vector and scalar potential portions, the spin-dependent modifications involving and the last set of derivative terms are two-body recoil effects absent for the one-body Dirac equation but essential for the compatibility (consistency) of the two-body equations. The connections between what are designated as the vertex invariants and the mass and energy potentials are
Comparing Eq.() with the first equation of this article one finds that the spin-dependent vector interactions are
Note that the first portion of the vector potentials is timelike (parallel to while the next portion is spacelike (perpendicular to . The spin-dependent scalar potentials are
The parametrization for and takes advantage of the Todorov effective external potential forms (as seen in the above section on the two-body Klein Gordon equations) and at the same time displays the correct static limit form for the Pauli reduction to Schrödinger-like form. The choice for these parameterizations (as with the two-body Klein Gordon equations) is closely tied to classical or quantum field theories for separate scalar and vector interactions. This amounts to working in the Feynman gauge with the simplest relation between space- and timelike parts of the vector interaction. The mass and energy potentials are respectively
so that
Applications and limitations
The TBDE can be readily applied to two body systems such as positronium, muonium, hydrogen-like atoms, quarkonium, and the two-nucleon system. These applications involve two particles only and do not involve creation or annihilation of particles beyond the two. They involve only elastic processes. Because of the connection between the potentials used in the TBDE and the corresponding quantum field theory, any radiative correction to the lowest order interaction can be incorporated into those potentials. To see how this comes about, consider by contrast how one computes scattering amplitudes without quantum field theory. With no quantum field theory one must come upon potentials by classical arguments or phenomenological considerations. Once one has the potential between two particles, then one can compute the scattering amplitude from the Lippmann–Schwinger equation
in which is a Green function determined from the Schrödinger equation. Because of the similarity between the Schrödinger equation Eq. () and the relativistic constraint equation (), one can derive the same type of equation as the above
called the quasipotential equation with a very similar to that given in the Lippmann–Schwinger equation. The difference is that with the quasipotential equation, one starts with the scattering amplitudes of quantum field theory, as determined from Feynman diagrams and deduces the quasipotential Φ perturbatively. Then one can use that Φ in (), to compute energy levels of two particle systems that are implied by the field theory. Constraint dynamics provides one of many, in fact an infinite number of, different types of quasipotential equations (three-dimensional truncations of the Bethe–Salpeter equation) differing from one another by the choice of .
The relatively simple solution to the problem of relative time and energy from the generalized mass shell constraint for two particles has no simple extension, such as presented here with the variable, to either two particles in an external field or to 3 or more particles. Sazdjian has presented a recipe for this extension when the particles are confined and cannot split into clusters of a smaller number of particles with no inter-cluster interactions. Lusanna has developed an approach, one that does not involve generalized mass shell constraints and has no such restrictions, which extends to N bodies with or without fields. It is formulated on spacelike hypersurfaces and, when restricted to the family of hyperplanes orthogonal to the total timelike momentum, gives rise to a covariant intrinsic 1-time formulation (with no relative time variables) called the "rest-frame instant form" of dynamics.
See also
Breit equation
4-vector
Dirac equation
Dirac equation in the algebra of physical space
Dirac operator
Electromagnetism
Kinetic momentum
Many body problem
Invariant mass
Particle physics
Positronium
Ricci calculus
Special relativity
Spin
Quantum entanglement
Relativistic quantum mechanics
References
Various forms of radial equations for the Dirac two-body problem W. Królikowski (1991), Institute of theoretical physics (Warsaw, Poland)
Quantum field theory
Mathematical physics
Equations of physics
Dirac equation | Two-body Dirac equations | [
"Physics",
"Mathematics"
] | 4,183 | [
"Quantum field theory",
"Equations of physics",
"Applied mathematics",
"Theoretical physics",
"Eponymous equations of physics",
"Quantum mechanics",
"Mathematical objects",
"Equations",
"Mathematical physics",
"Dirac equation"
] |
36,057,380 | https://en.wikipedia.org/wiki/ANAEM | The Ankara Nuclear Research and Training Center (), known as ANAEM, is a nuclear research and training center of Turkey. The organization was established on August 18, 2010 as a subunit of Turkish Atomic Energy Administration (, TAEK) in its campus at Ankara University's Faculty of Science situated in Beşevler neighborhood in central Ankara. The organization consists of three divisions, which are engaged in education, training and public relations on nuclear matters.
See also
ÇNAEM Çekmece Nuclear Research and Training Center in Istanbul
SANAEM Sarayköy Nuclear Research and Training Center in Ankara
References
Nuclear research institutes
Research institutes in Turkey
Nuclear technology in Turkey
Science and technology in Turkey
Buildings and structures in Ankara
Organizations based in Ankara
Organizations established in 2010
Ankara University
Yenimahalle, Ankara
2010 establishments in Turkey | ANAEM | [
"Physics",
"Engineering"
] | 162 | [
"Nuclear research institutes",
"Nuclear and atomic physics stubs",
"Nuclear organizations",
"Nuclear physics"
] |
36,059,636 | https://en.wikipedia.org/wiki/Neutron%20Trail | The Neutron Trail is an open cultural dialogue about society's shared nuclear legacy, intended to raise awareness and stimulate strategic thinking around nuclear power and nuclear disarmament. Neutron Trail deals with paradoxical human dilemmas, such as the world's need for large outputs of energy amid ongoing and often charged discussions regarding sustainability, and pervasive public fears surrounding nuclear energy. Through visits to the people and places most impacted by society's nuclear legacy, transmedia projects, public lectures and workshops, the Neutron Trail works to engage people from all walks of life in an ongoing exploration and evaluation of existing perceptions, true and untrue, about nuclear energy and weapons.
Travels
Enrico Fermi built the first nuclear reactor (Chicago Pile-1) in an abandoned squash court at the University of Chicago. He ran the historic experiment for 28 minutes. In April 2009 artist Matthew Day Jackson invited Olivia Fermi, Enrico's granddaughter, to participate in a ceremonial game of squash. Jackson and the younger Fermi's game tied “the physics of a squash ball to the physics of the first nuclear reactor”; it was filmed in Los Alamos, New Mexico, and displayed in a 28-minute video loop at the M.I.T. List Visual Arts Center and the Contemporary Arts Museum Houston.
Jackson also filmed Olivia's visit to Trinity in Southern New Mexico, location of the first atomic bomb test on July 16, 1945.
On September 29, 2011, for the 110th anniversary of Enrico Fermi's birth, she visited her grandfather's former lab at Via Panisperna in Rome, where Enrico conducted his Nobel Prize–winning research. It is now the site of Centro Fermi: Enrico Fermi Historical Museum of Physics.
In October 2011 Olivia visited CERN in Geneva, Switzerland. She toured various key experimental areas including the CERN Control Centre, ATLAS, the SM18 magnet test facility, and ISOLDE; met with Nobel Prize in Physics-winning physicist and former director of CERN Carlo Rubbia; and joined in a discussion with CERN's ConCERNed for Humanity Club founders and guests.
Her visit to Richland, Washington in 2012 drew the interest of scientists, students and environmental activists who came to her talk and workshops. She travelled to Hiroshima and Nagasaki in November 2014, extending her dialogue project to speaking with atomic bomb survivors, including Toshiko Tanaka, an artist who promotes peace.
Public events
Fermi has given Neutron Trail talks in the US, Canada and Europe. On January 19, 2012, she gave a talk sponsored by the United Nations Association in Canada titled “Positioning Change and Global Nuclear Disarmament.” She presented personal stories of activists, along with images and timelines of individuals championing the nuclear disarmament movement.
A TEDx Transmedia talk in Rome on September 30, 2011, entitled “Becoming the Inspiration We Seek – The Alchemy of Opposites”, is about the power of embracing contradictions, especially those inherent in our nuclear legacy, such as the thrill of discovery commingled with the pride and shame of the Hiroshima and Nagasaki bombings. It includes a photograph of her grandfather Enrico Fermi, with physicist Edward Teller, holding an image of the Hiroshima bomb cloud.
On March 21, 2011, BBC World Radio contacted Fermi to give an interview regarding the crisis — then taking place — at the Fukushima 1 Nuclear Power Plant in Japan. During the five-minute interview, she spoke about her position on nuclear energy and the Neutron Trail.
On April 15, 2011, Olivia was the keynote speaker for the Society of Italian Researchers and Professionals of Western Canada's inaugural meeting. This Neutron Trail talk was about her grandparents’ lives with personal anecdotes and archival images. During the question and answer period, she offered her opinions on the Fukushima Daiichi disaster: “Today we need nuclear power, and the challenge is to make it safe.”
External links
Official Website
Fermi Effect Website
References
Nuclear power | Neutron Trail | [
"Physics"
] | 783 | [
"Power (physics)",
"Physical quantities",
"Nuclear power"
] |
23,296,099 | https://en.wikipedia.org/wiki/Sacher%20hexachord | The Sacher hexachord (6-Z11, a musical cryptogram on the name of Swiss conductor Paul Sacher) is a hexachord notable for its use in a set of twelve compositions (12 Hommages à Paul Sacher) created at the invitation of Mstislav Rostropovich for Sacher's seventieth birthday in 1976.
The twelve compositions include Pierre Boulez's Messagesquisse, Benjamin Britten's Tema "Sacher", Hans Werner Henze's Capriccio, Witold Lutosławski's Sacher Variation, and Henri Dutilleux's Trois strophes sur le nom de Sacher. Further, Boulez's Répons, Dérive 1, Incises and Sur Incises all use tone rows with the same pitches.
The hexachord's complement is its Z-relation, 6-Z40.
See also
Schoenberg hexachord
References
External links
"eSACHERe World Premiere of unique set of 12 works, solo cello: František Brikcius", mfiles.co.uk
Cryptography
Hexachords
Musical set theory | Sacher hexachord | [
"Mathematics",
"Engineering"
] | 245 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
23,296,477 | https://en.wikipedia.org/wiki/Aerotoxic%20Association | The Aerotoxic Association was founded on 18 June 2007, at the British Houses of Parliament, by former BAe 146 Training Captain John Hoyte, to raise public awareness of the ill health allegedly caused by exposure to airliner cabin air that he claimed had been contaminated to toxic levels by engine oil leaking into the bleed air system, which pressurizes all jet aircraft with the exception of the Boeing 787.
In addition to providing help and support to aircrew and passengers, the Aerotoxic Association promotes known technical solutions, such as toxic air detectors, and campaigns for changes in regulations to improve the quality of cabin air on airliners.
The phrase "aerotoxic syndrome" was first coined by Chris Winder and Jean-Christophe Balouet in 2000 to describe the ill health allegedly caused by exposure to air which they claimed had been contaminated by jet engine oil.
This syndrome is not recognized in medicine.
Criticism
A review by Prof Bagshaw examined all exposures dating back to 1943 and showed that all documented exposures were to high concentrations, greatly in excess of the amount present in jet engine oil. He also noted that studies in Canada and the USA were unable to detect TCP in the cabin during flight. Bagshaw further notes that the symptoms are "largely the same as those reported by participants in all phase I drug trials", and are similar to the symptoms experienced by patients with chronic fatigue syndrome, gulf war syndrome, Lyme disease, chronic stress and chronic hyperventilation.
"A syndrome is a symptom complex, consistent and common to a given condition. Individuals who have 'aerotoxic syndrome' describe a wide range of inconsistent symptoms and signs with much individual variability. The evidence was independently reviewed by the Aerospace Medical Association, the US National Academy of Sciences and the Australian Civil Aviation Safety Authority Expert Panel. All concluded there is insufficient consistency to establish a medical syndrome and the 'aerotoxic syndrome' is not recognised in aviation medicine."
References
External links
Aerotoxic Association website
Aerospace engineering organizations
Air pollution
Aviation organisations based in the United Kingdom
Environmental toxicology
Lobbying organisations in the United Kingdom
2007 establishments in the United Kingdom
Organizations established in 2007
Toxicology in the United Kingdom
Medical controversies
Medical controversies in the United Kingdom
Medical controversies in the United States | Aerotoxic Association | [
"Engineering",
"Environmental_science"
] | 456 | [
"Toxicology",
"Aerospace engineering organizations",
"Aeronautics organizations",
"Toxicology in the United Kingdom",
"Environmental toxicology",
"Aerospace engineering"
] |
23,307,981 | https://en.wikipedia.org/wiki/Exoskeletal%20engine | The exoskeletal engine (ESE) is a concept in turbomachinery design. Current gas turbine engines have central rotating shafts and fan-discs and are constructed mostly from heavy metals. They require lubricated bearings and need extensive cooling for hot components. They are also subject to severe imbalance (or vibrations) that could wipe out the whole rotor stage, are prone to high- and low-cycle fatigue, and subject to catastrophic failure due to disc bursts from high tensile loads, consequently requiring heavy containment devices. To address these limitations, the ESE concept turns the conventional configuration inside-out and utilizes a drum-type rotor design for the turbomachinery in which the rotor blades are attached to the inside of a rotating drum instead of radially outwards from a shaft and discs. Multiple drum rotors could be used in a multi-spool design.
Design
Fundamentally, the ESE drum-rotor configuration typically consists of four concentric open-ended drums or shells:
an outer shell (engine casing) that both supports the bearings for the drum-rotor shell and constrains it,
the drum-rotor shell that rotates within the bearings and carries the compressor- and turbine blades,
a static stator shell that supports the guide vanes,
a hollow static inner shell that provides a flow path through the centre of the engine.
In the ESE design, the rotating blades are primarily in radial compression as opposed to radial tension, which means that materials that do not possess high-tensile strength, such as ceramic materials, can be used for their construction. Ceramics behave well in compressive loading situations where brittle fracture is minimized, and would provide greater operating efficiency through higher operating temperatures and lighter engine weight when compared to the metal alloys that typically are used in turbomachinery components. The ESE design and the use of composite materials could also reduce the part count, reduce or eliminate cooling, and result in increased component life. The use of ceramics would also be a beneficial feature for hypersonic propulsion systems, where high stagnation temperatures can exceed the limits of traditional turbomachinery materials.
The cavity within the inner shell could be exploited in several different ways. In subsonic applications, venting the centre cavity with a free-stream flow could potentially contribute to a large noise reduction; while in supersonic-hypersonic applications it might be used to house a ramjet or scramjet (or other devices such as a pulse detonation engine) as part of a turbine-based combined-cycle engine. Such an arrangement could reduce the overall length of the propulsion system and thereby reduce weight and drag significantly.
Summarized potential advantages
From Chamis and Blankson:
Eliminate disk and bore stresses
Utilize low-stress bearings
Increase rotor speed
Reduce airfoil thickness
Increase flutter boundaries
Minimize/eliminate containment requirements
Increase high mass flow rate
Reduce weight by 50 percent
Decrease turbine temperature for same thrust
Decrease emissions
Provide higher thrust-to-weight ratio
Improve specific fuel consumption
Increase blade low-cycle and high-cycle fatigue lives
Reduce engine diameter
Reduce parts count
Decrease maintenance cost
Minimize/eliminate sealing and cooling requirements
Minimize/eliminate blade-flow losses, blade and case wear
Free core for combined turboram jet cycles
Reduce noise
Expedite aircraft/engine integration
Minimize/eliminate notch-sensitive material issues
Challenges
One of the major challenges is in bearing design, as there are no known lubricated systems that can handle the magnitude of velocity encountered in the ESE; foil and magnetic bearings have been suggested as possible solutions to this problem.
Foil bearings are noncontacting and ride on a thin film of air, which is generated hydrodynamically by the rotational speed, to suspend and centre the shaft. Drawbacks for the foil system include the high start-up torque, the need for set-down/lift-off mechanical bearings and associated positioning hardware, and the high temperatures generated by this system.
For the large-diameter magnetic bearing system required in the ESE, stiffness and radial growth after spin-up are problems that would be encountered. Radial growth of sufficient magnitude would result in stability problems, and a magnet pole positioning system would be required to maintain the appropriate clearances for the operation of the system. This positioning system would require high-speed sensing and positioning. A passive magnetic laminate and its mounting hardware would require high structural integrity to resist the extremely high inertial forces and would most likely drive an increase in weight.
Although both bearing systems theoretically meet the requirements of the exoskeletal application, neither technology is currently ready for operation at practical sizes. Developments in foil bearing technology indicate it may take 20 years to achieve foil bearings for this diameter, and magnetic bearings appear to be too heavy for this application and would also face a lengthy technology development programme.
See also
Index of aviation articles
Components of jet engines
List of aircraft engines
References
Engines
Aircraft engines
Jet engines
Hypothetical technology | Exoskeletal engine | [
"Physics",
"Technology"
] | 988 | [
"Machines",
"Engines",
"Physical systems",
"Jet engines",
"Aircraft engines"
] |
26,198,205 | https://en.wikipedia.org/wiki/C22H30O6 | The molecular formula C22H30O6 (molar mass: 390.47 g/mol, exact mass: 390.2042 u) may refer to:
Megaphone (molecule)
Pregomisin
Prostratin
Molecular formulas | C22H30O6 | [
"Physics",
"Chemistry"
] | 65 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
26,201,453 | https://en.wikipedia.org/wiki/Prepolymer | In polymer chemistry, the term prepolymer or pre-polymer, refers to a monomer or system of monomers that have been reacted to an intermediate-molecular mass state. This material is capable of further polymerization by reactive groups to a fully cured, high-molecular-mass state. As such, mixtures of reactive polymers with un-reacted monomers may also be referred to as pre-polymers. The term "pre-polymer" and "polymer precursor" may be interchanged.
Polyurethane and polyurea prepolymers
In polyurethane chemistry, prepolymers and oligomers are frequently produced and then further formulated into CASE applications: coatings, adhesives, sealants, and elastomers. An isocyanate (usually a diisocyanate) is reacted with a polyol; in theory, any type of polyol may be used to produce polyurethane prepolymers for these applications. When polyurethane dispersions are synthesized, a prepolymer is first produced, usually modified with DMPA. In polyurea prepolymer production, a polyamine is used instead of a polyol.
Lactic acid as a polymer precursor
Two molecules of lactic acid can be dehydrated to the cyclic molecule lactide, a lactone. A variety of catalysts can polymerise lactide to either heterotactic or syndiotactic polylactide, which as biodegradable polyesters with valuable (inter alia) medical properties are currently attracting much attention.
Nowadays, lactic acid is used as a monomer for producing polylactic acid (PLA), which finds application as a biodegradable plastic. This kind of plastic is a good option for replacing conventional plastics produced from petrochemicals because of its lower carbon dioxide emissions. Lactic acid is commonly produced via fermentation; polymerization then follows to obtain the polylactic acid.
See also
Synthetic resin
Resin
References
Plastics
Polyamides
Polymers
Polyurethanes
Synthetic resins | Prepolymer | [
"Physics",
"Chemistry",
"Materials_science"
] | 442 | [
"Synthetic resins",
"Synthetic materials",
"Unsolved problems in physics",
"Polymer chemistry",
"Polymers",
"Amorphous solids",
"Plastics"
] |
26,201,825 | https://en.wikipedia.org/wiki/Population%20equivalent | Population equivalent (PE) or unit per capita loading, or equivalent person (EP), is a parameter for characterizing industrial wastewaters. It essentially compares the polluting potential of an industry (in terms of biodegradable organic matter) with a population (or certain number of people), which would produce the same polluting load. In other words, it is the number expressing the ratio of the sum of the pollution load produced during 24 hours by industrial facilities and services to the individual pollution load in household sewage produced by one person in the same time. This refers to the amount of oxygen-demanding substances in wastewater which will consume oxygen as it bio-degrades, usually as a result of bacterial activity.
Equation and base value
A value frequently used in the international literature for PE, based on a German publication, is 54 grams of BOD (biochemical oxygen demand) per person (or per capita, or per inhabitant) per day. This has been adopted by many countries for design purposes, but other values are also in use. For example, a definition commonly used in Europe is: 1 PE equates to 60 grams of BOD per person per day, and it also equals 200 liters of sewage per day. In the United States, a figure of 80 grams of BOD per day is normally used.
If the base value is taken as 60 grams of BOD per person per day, then the equation to calculate PE from an industrial wastewater is:
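A minimal statement of the relation (the symbols here are chosen for illustration, not taken from the source):

$$\mathrm{PE} = \frac{\text{BOD load of the industrial wastewater}\ [\mathrm{g\ BOD/d}]}{60\ [\mathrm{g\ BOD/(person \cdot d)}]} = \frac{Q \cdot C_{\mathrm{BOD}}}{60},$$

where $Q$ is the wastewater flow in m³/d and $C_{\mathrm{BOD}}$ is its BOD concentration in g/m³.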
Population equivalents for industrial wastewaters
{| class="wikitable sortable" border="1"
|+ BOD population equivalents of wastewater from some industries
|-
! Type
! Activity
!Unit of production
! BOD PE
[inhab/(unit/d)]
|-
| Food
| Canning (fruit/vegetables)
|1 ton processed
| 500
|-
||
| Pea processing
|1 ton processed
| 85-400
|-
||
| Tomato
|1 ton processed
| 50-185
|-
||
| Carrot
|1 ton processed
| 160-390
|-
||
| Potato
|1 ton processed
| 215-545
|-
||
| Citrus fruit
|1 ton processed
| 55
|-
||
| Chicken meat
|1 ton processed
| 70-1600
|-
||
| Beef
|1 ton processed
| 20-600
|-
||
| Fish
|1 ton processed
| 300-2300
|-
||
| Sweets/candies
|1 ton produced
| 40-150
|-
||
| Sugar cane
|1 ton produced
| 50
|-
||
| Dairy (without cheese)
|1000 L milk
| 20-100
|-
||
| Dairy (with cheese)
|1000 L milk
| 100-800
|-
||
| Margarine
|1 ton produced
| 500
|-
||
| Slaughter house
|1 cow / 2.5 pigs
| 10-100
|-
||
| Yeast production
|1 ton produced
| 21000
|-
| Confined animals breeding
| Pigs
|live t.d
| 35-100
|-
||
| Dairy cattle (milking room)
|live t.d
| 1-2
|-
||
| Cattle
|live t.d
| 65-150
|-
||
| Horses
|live t.d
| 65-150
|-
||
| Poultry
|live t.d
| 15-20
|-
| Sugar-alcohol
| Alcohol distillation
|1 ton cane processed
| 4000
|-
| Drinks
| Brewery
|1 m3 produced
| 150-350
|-
||
| Soft drinks
|1 m3 produced
| 50-100
|-
||
| Wine
|1 m3 produced
| 5
|-
| Textiles
| Cotton
|1 ton produced
| 2800
|-
||
| Wool
|1 ton produced
| 5600
|-
||
| Rayon
|1 ton produced
| 550
|-
||
| Nylon
|1 ton produced
| 800
|-
||
| Polyester
|1 ton produced
| 3700
|-
||
| Wool washing
|1 ton produced
| 2000-4500
|-
||
| Dyeing
|1 ton produced
| 2000-3500
|-
||
| Textile bleaching
|1 ton produced
| 250-350
|-
| Leather and tanneries
| Tanning
|1 ton hide processed
| 1000-3500
|-
||
| Shoes
|1000 pairs produced
| 300
|-
| Pulp and paper
| Pulp
|1 ton produced
| 600
|-
||
| Paper
|1 ton produced
| 100-300
|-
||
| Pulp and paper integrated
|1 ton produced
| 1000-10000
|-
| Chemical industrial
| Paint
|1 employee
| 20
|-
||
| Soap
|1 ton produced
| 1000
|-
||
| Petroleum refinery
|1 barrel (117 L)
| 1
|-
||
| PVC
|1 ton produced
| 200
|-
| Steelworks
| Foundry
|1 ton pig iron produced
| 12-30
|-
||
| Lamination
|1 ton produced
| 8-50
|}
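As a rough worked example using the indicative values above: a brewery producing 10 m³ of beer per day, at 150–350 PE per m³ produced, corresponds to a BOD load equivalent to about 10 × (150–350) ≈ 1,500–3,500 inhabitants.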
See also
Sewage treatment
References
Environmental science
Waste treatment technology
Sewerage
Equivalent units | Population equivalent | [
"Chemistry",
"Mathematics",
"Engineering",
"Environmental_science"
] | 1,039 | [
"Equivalent quantities",
"Water treatment",
"Quantity",
"Water pollution",
"Sewerage",
"Equivalent units",
"nan",
"Environmental engineering",
"Waste treatment technology",
"Units of measurement"
] |
26,205,878 | https://en.wikipedia.org/wiki/Dicke%20effect | In spectroscopy, the Dicke effect, also known as Dicke narrowing or sometimes collisional narrowing, named after Robert H. Dicke, refers to narrowing of the Doppler broadening of a spectral line due to collisions the emitting species (usually an atom or a molecule) experiences with other particles.
Mechanism
When the mean free path of an atom is much smaller than the wavelength of the radiative transition, the atom changes velocity and direction many times during the emission or absorption of a photon. This causes an averaging over different Doppler states and results in an atomic linewidth that is narrower than the Doppler width.
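For orientation, the standard textbook expressions (conventions and symbols here are assumptions, not drawn from this article) contrast the two regimes. In the ordinary Doppler-broadened regime the full width at half maximum is

$$\Delta\nu_D = \nu_0 \sqrt{\frac{8 k_B T \ln 2}{m c^2}},$$

where $\nu_0$ is the transition frequency, $T$ the temperature and $m$ the emitter mass. When the mean free path $\bar{\ell}$ satisfies $\bar{\ell} \ll \lambda$, the profile collapses toward a Lorentzian whose width is governed by diffusion, of order $D k^2$ in angular frequency, with $D$ the diffusion coefficient and $k = 2\pi/\lambda$.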
See also
Mössbauer effect
Stark broadening
Motional narrowing
References
Spectroscopy | Dicke effect | [
"Physics",
"Chemistry",
"Astronomy"
] | 146 | [
"Spectroscopy stubs",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Astronomy stubs",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
26,207,504 | https://en.wikipedia.org/wiki/Single%20point%20of%20failure | A single point of failure (SPOF) is a part of a system that, if it fails, will stop the entire system from working. SPOFs are undesirable in any system with a goal of high availability or reliability, be it a business practice, software application, or other industrial system. If there is a SPOF present in a system, it produces a potential interruption that is substantially more disruptive than an error elsewhere in the system would be.
Overview
Systems can be made robust by adding redundancy in all potential SPOFs. Redundancy can be achieved at various levels.
The assessment of a potential SPOF involves identifying the critical components of a complex system that would provoke a total systems failure in case of malfunction. Highly reliable systems should not rely on any such individual component.
For instance, the owner of a small tree care company may only own one woodchipper. If the chipper breaks, they may be unable to complete their current job and may have to cancel future jobs until they can obtain a replacement. The owner could prepare for this in multiple ways. The owner of the tree care company may have spare parts ready for the repair of the wood chipper, in case it fails. At a higher level, they may have a second wood chipper that they can bring to the job site. Finally, at the highest level, they may have enough equipment available to completely replace everything at the work site in the case of multiple failures.
Computing
A fault-tolerant computer system can be achieved at the internal component level, at the system level (multiple machines), or site level (replication).
One would normally deploy a load balancer to ensure high availability for a server cluster at the system level. In a high-availability server cluster, each individual server may attain internal component redundancy by having multiple power supplies, hard drives, and other components. System-level redundancy could be obtained by having spare servers waiting to take on the work of another server if it fails.
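A minimal sketch of system-level redundancy in Python, assuming hypothetical host names and a generic TCP health check (not a specific load balancer's API):

```python
import socket

# Hypothetical pool of redundant servers; any single entry on its own would be a SPOF.
SERVERS = ["app1.example.internal", "app2.example.internal", "app3.example.internal"]

def first_healthy(servers, port=8080, timeout=1.0):
    """Return the first server that accepts a TCP connection, or None if all fail."""
    for host in servers:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host
        except OSError:
            continue  # try the next replica instead of failing outright
    return None

if __name__ == "__main__":
    target = first_healthy(SERVERS)
    print("route traffic to:", target or "no healthy server - total outage")
```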
Since a data center is often a support center for other operations such as business logic, it represents a potential SPOF in itself. Thus, at the site level, the entire cluster may be replicated at another location, where it can be accessed in case the primary location becomes unavailable. This is typically addressed as part of an IT disaster recovery program. While previously the solution to this SPOF was physical duplication of clusters, the high demand for this duplication led multiple businesses to outsource duplication to 3rd parties using cloud computing. It has been argued by scholars, however, that doing so simply moves the SPOF and may even increase the likelihood of a failure or cyberattack.
Paul Baran and Donald Davies developed packet switching, a key part of "survivable communications networks". Such networks including ARPANET and the Internet are designed to have no single point of failure. Multiple paths between any two points on the network allow those points to continue communicating with each other, the packets "routing around" damage, even after any single failure of any one particular path or any one intermediate node.
Software engineering
In software engineering, a bottleneck occurs when the capacity of an application or a computer system is limited by a single component. The bottleneck has the lowest throughput of all parts of the transaction path. A common example is when the programming language in use is capable of parallel processing, but a given snippet of code runs several independent tasks sequentially rather than simultaneously, as in the sketch below.
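A minimal illustration of this kind of bottleneck, assuming independent I/O-bound tasks that could overlap in time (illustrative only):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(task_id: int) -> int:
    """Stand-in for an independent, I/O-bound task (e.g. a network call)."""
    time.sleep(0.2)
    return task_id

def run_sequential(n: int) -> float:
    start = time.perf_counter()
    _ = [fetch(i) for i in range(n)]            # tasks run one after another
    return time.perf_counter() - start

def run_parallel(n: int) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n) as pool:
        _ = list(pool.map(fetch, range(n)))     # tasks overlap in time
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"sequential: {run_sequential(5):.2f}s, parallel: {run_parallel(5):.2f}s")
```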
Performance engineering
Tracking down bottlenecks (sometimes known as hot spots – sections of the code that execute most frequently – i.e., have the highest execution count) is called performance analysis. Reduction is usually achieved with the help of specialized tools, known as performance analyzers or profilers. The objective is to make those particular sections of code perform as fast as possible to improve overall algorithmic efficiency.
Computer security
A vulnerability or security exploit in just one component can compromise an entire system. One of the largest concerns in computer security is attempting to eliminate SPOFs without sacrificing too much convenience for the user. With the invention and popularization of the Internet, many systems became connected to the broader world through numerous difficult-to-secure connections. While companies have developed a number of solutions to this, the most consistent source of SPOFs in complex systems remains user error, whether accidental mishandling by an operator or outside interference through phishing attacks.
Other fields
The concept of a single point of failure has also been applied to fields outside of engineering, computers, and networking, such as corporate supply chain management and transportation management.
Design structures that create single points of failure include bottlenecks and series circuits (in contrast to parallel circuits).
In transportation, some noted recent examples of the concept's application have included the Nipigon River Bridge in Canada, where a partial bridge failure in January 2016 entirely severed road traffic between Eastern Canada and Western Canada for several days because it is located along a portion of the Trans-Canada Highway where there is no alternate detour route for vehicles to take; and the Norwalk River Railroad Bridge in Norwalk, Connecticut, an aging swing bridge that sometimes gets stuck when opening or closing, disrupting rail traffic on the Northeast Corridor line.
The concept of a single point of failure has also been applied to the fields of intelligence. Edward Snowden talked of the dangers of being what he described as "the single point of failure" – the sole repository of information.
Life-support systems
A component of a life-support system that would constitute a single point of failure would be required to be extremely reliable.
See also
Concepts
Applications
In literature
References
Engineering failures
Systems engineering
Reliability engineering
Fault-tolerant computer systems
Network architecture | Single point of failure | [
"Technology",
"Engineering"
] | 1,165 | [
"Systems engineering",
"Reliability engineering",
"Network architecture",
"Technological failures",
"Computer networks engineering",
"Computer systems",
"Engineering failures",
"Fault-tolerant computer systems",
"Civil engineering"
] |
28,064,235 | https://en.wikipedia.org/wiki/Mechanical%20network | A mechanical network is an abstract interconnection of mechanical elements along the lines of an electrical circuit diagram. Elements include rigid bodies, springs, dampers, transmissions, and actuators.
Network symbols
The symbols from left to right are: stiffness element (e.g. spring), mass (rigid body), mechanical resistance (e.g. damper), force generator, velocity generator. The symbols for generators depend on which mechanical–electrical analogy is being used. The symbols shown relate to the impedance analogy. In the mobility analogy the symbols are reversed, being respectively velocity and force generators.
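For orientation, the usual textbook mappings (a summary of standard conventions, not drawn from this article): in the impedance analogy, force corresponds to voltage and velocity to current, so a mass $m$ has mechanical impedance $Z = j\omega m$ (like an inductor $j\omega L$), a spring of stiffness $k$ has $Z = k/j\omega$ (like a capacitor $1/j\omega C$, with compliance $1/k$ playing the role of capacitance), and a damper of resistance $c$ has $Z = c$ (like a resistor $R$). The mobility analogy swaps the pairings, mapping velocity to voltage and force to current, which interchanges the roles of the mass and compliance elements.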
See also
Multibody system
Machines | Mechanical network | [
"Physics",
"Technology",
"Engineering"
] | 132 | [
"Physical systems",
"Machines",
"Mechanical engineering stubs",
"Mechanical engineering"
] |
28,066,105 | https://en.wikipedia.org/wiki/Atypical%20bacteria | Atypical bacteria are bacteria that do not get colored by gram-staining but rather remain colorless: they are neither Gram-positive nor Gram-negative. These include the Chlamydiaceae, Legionella and the Mycoplasmataceae (including mycoplasma and ureaplasma); the Spirochetes and Rickettsiaceae are also often considered atypical.
Gram-positive bacteria have a thick peptidoglycan layer in their cell wall, which retains the crystal violet during Gram staining, resulting in a purple color. Gram-negative bacteria have a thin peptidoglycan layer which does not retain the crystal violet, so when safranin is added during the process, they stain red.
The Mycoplasmataceae lack a peptidoglycan layer so do not retain crystal violet or safranin, resulting in no color. The Chlamydiaceae contain an extremely thin peptidoglycan layer, preventing visible staining. Rickettsiaceae are technically Gram-negative, but are too small to stain well, so are often considered atypical.
Peptidoglycans are the site of action of beta-lactam antibiotics such as penicillins and cephalosporins, so mycoplasma are naturally resistant to these drugs, which in this sense also makes them “atypical” in the treatment of their infections. Macrolides such as erythromycin however, are usually effective in treating atypical bacterial infections.
Finally, some of these bacteria can cause a specific type of pneumonia referred to as atypical pneumonia. That is not to say that atypical pneumonia is strictly caused by atypical bacteria, for this disease can also have a fungal, protozoan or viral cause.
A recent study analyzing synergistic interactions between influenza viruses and atypical bacteria reported interactions between the two most prominent species, C. pneumoniae and M. pneumoniae, and the influenza virus; these were discussed as coinfections with influenza.
See also
Gram-negative bacteria
Gram-positive bacteria
References
Bacteriology
Bacteria organized by reaction to stain | Atypical bacteria | [
"Biology"
] | 460 | [
"Bacteria organized by reaction to stain",
"Bacteria"
] |
28,068,084 | https://en.wikipedia.org/wiki/Achievement%20%28video%20games%29 | In video gaming, an achievement (or a trophy) is a meta-goal defined outside a game's parameters, a digital reward that signifies a player's mastery of a specific task or challenge within a video game. Unlike the in-game systems of quests, tasks, and/or levels that usually define the goals of a video game and have a direct effect on further gameplay, the management of achievements usually takes place outside the confines of the game environment and architecture. Meeting the fulfillment conditions, and receiving recognition of fulfillment by the game, is sometimes referred to as unlocking the achievement.
Purpose and motivation
Achievements are included within games to extend the title's longevity and provide players with the impetus to do more than simply complete the game but to also find all of its secrets and complete all of its challenges. They are effectively arbitrary challenges laid out by the developer to be met by the player. These achievements may coincide with the inherent goals of the game itself, when completing a standard milestone in the game (such as achievements for beating each level of a game), with secondary goals such as finding secret power-ups or hidden levels, or may also be independent of the game's primary or secondary goals and earned via completing a game in an especially difficult or non-standard fashion (such as speedrunning a game [e.g., Braid] or playing without killing any enemies [e.g., Deus Ex: Human Revolution and Dishonored]), playing a certain number of times, viewing an in-game video, and/or beating a certain number of online opponents. Certain achievements may refer to other achievements—many games have one achievement that requires the player to have gained every other achievement.
Unlike secrets, which traditionally provided some kind of direct benefit to the player in the form of easier gameplay (such as the warp pipe in Super Mario Bros.) or additional gameplay features (such as hidden weapons or levels in first-person shooters like Doom) even though they might have criteria similar to achievements in order to unlock, the narrative-independent nature of achievements allows them to be fulfilled without needing to provide the player with any direct, in-game benefit or additional feature. In addition, the achievements used in modern gaming are usually visible outside the game environment (on the Internet) and form part of the online profile for the player. These profiles include: Gamertag for Microsoft's Live Anywhere network, combining Xbox 360/Xbox One/Xbox Series X|S titles, PC games using Games for Windows – Live and Xbox Live on Windows 8 and Windows 10, and Xbox Live-enabled games on other platforms; PSN ID for PlayStation Network (PSN); User Profile Achievement Showcases for Steam; Armory Profiles for World of Warcraft; and Lodestone Profiles for Final Fantasy XIV.
The motivation for the player to gain achievements lies in maximizing their own general cross-title score (known as Gamerscore on Live, Trophy Level on PSN, and the Achievement Showcase for Steam User Profiles) and obtaining recognition for their performance due to the publication of their achievement/trophy profiles. Some players pursue the unlocking of achievements as a goal in itself, without especially seeking to enjoy the game that awards them—this community of players typically refer to themselves as "achievement hunters".
Some implementations use a system of achievements that provide direct, in-game benefits to the gameplay, although the award is usually not congruent with the achievement itself. One example of such an implementation are "challenges" found in the multiplayer portions of the later Call of Duty titles. Challenges here may include a certain number of headshots or kills and are rewarded not only with the completion of the achievement but also a bonus item that can be equipped. Team Fortress 2 features 3 milestones for each of the nine classes. When a milestone is reached by obtaining a specific number of achievements for each class, the player will be awarded a non-tradable weapon unique to that class.
Origins and implementations
Single-game achievements
The idea for game achievements can be traced back to 1982, with Activision's patches for high scores. This was a system by which game manuals instructed players to achieve a particular high score, take a photo of score display on the television, and send in the photo to receive a physical, iron-on style patch in a fashion somewhat similar to the earning of a Scout badge. This system was set up across many Activision titles regardless of platform, and though most of their games were on the popular Atari 2600, games on the Intellivision, ColecoVision, Atari 5200, and at least one title on the Commodore 64 also included similar instructions with patches as a reward. Patches would be sent with a letter from the company, often written as if from a fictional character, like Pitfall Harry, congratulating the player on the achievement. By the end of 1983, Activision's new games no longer included these achievements, but the company would still honor the process for their older games.
The game E-Motion on the Amiga from 1990 was one of the earliest games that had some form of achievements programmed into the game itself. The game called these "secret bonuses". The game had five such bonuses, for achievements such as completing a level without rotating to the right, or completely failing certain levels.
A number of individual games have included their own in-game achievements system, separate from any overall platform. Most modern massively multiplayer online role-playing games have implemented their own in-game system of achievements; in some cases such as World of Warcraft and Final Fantasy XIV, these achievements are accessible outside the game when viewing user profiles on the game websites and the game may offer an API for achievement data to be pulled and used on other sites.
Platform (multi-game) achievement systems
Although many other individual games would develop their own "secret bonuses" and internal achievements, the first implementation of an easily accessible, multi-game achievement system was Microsoft's Xbox 360 Gamerscore system, introduced at E3 in 2005 and implemented on the 360's launch date (November 22, 2005). Microsoft extended Gamerscore support to the Games for Windows – Live scheme in 2007 by including support for Achievements in Halo 2.
In 2007, Valve became the second large publisher to release a platform-based, multi-game achievement system for their Steam platform, eventually capturing a wide number of Windows, Mac OS X, Linux, and SteamOS based games.
In 2008, Sony followed suit by offering Trophies for the PlayStation 3. There was no Trophy support for the PlayStation Portable, even though the device does have PSN connection capability. By 2011, the successor to the PlayStation Portable, the PlayStation Vita, and all PlayStation Vita games had universal support for the Trophy system, as well as the later PlayStation 4 and PlayStation 5 and their games.
Apple added achievements to Game Center on October 12, 2011, with the release of iOS 4, on its mobile platform for the iPhone, iPad, and iPod Touch. Achievements are available on Android via Google Play Games.
Microsoft's mobile OSes, Windows Phone 7 and Windows Phone 8, included Xbox Live support, including Achievements when first launched worldwide on October 21, 2010.
Amazon Kindle provided the GameCircle service starting July 11, 2012, which tracks achievements and leaderboards for some games adapted to the Kindle platform.
Kongregate, a browser games hosting site, features Badges, which earn the user points, similar to Xbox Live's Gamerscore and PlayStation Network's Trophy system. Much like PSN's Trophies, points work towards increasing a player's level. The site FAQ explains, "Your level will automatically rise as you earn points. We're still working out the details of what kind of privileges and potential prizes that points and levels could be used to unlock."
In 2012, RetroAchievements started to retroactively add achievements to old game systems for use in emulation software such as RetroArch. Users add indicators that trigger when a certain value changes in memory while the ROM is being emulated, as sketched below.
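A schematic illustration of such an indicator, with an invented memory address and threshold; the real services define their triggers declaratively rather than in Python:

```python
# Hypothetical example: unlock an achievement when a watched value in emulated RAM crosses a threshold.
WATCH_ADDRESS = 0x00FF  # invented address, e.g. where a game stores the coin count
TARGET_VALUE = 100      # invented condition: collect 100 coins

def check_achievement(ram: bytes, already_unlocked: bool) -> bool:
    """Return True once the watched value reaches the target (and stay unlocked)."""
    return already_unlocked or ram[WATCH_ADDRESS] >= TARGET_VALUE

# Simulated frames of emulated memory (only the watched byte shown changing).
unlocked = False
for coins in (10, 50, 100):
    frame = bytearray(0x0100)
    frame[WATCH_ADDRESS] = coins
    unlocked = check_achievement(bytes(frame), unlocked)
print("achievement unlocked:", unlocked)
```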
Game achievements as satire
The advent of achievement-driven gaming was satirized in the Flash game Achievement Unlocked. The game is a simple platformer; it takes place on a single non-scrolling screen, and has only simple walking and jumping controls. It has no clearly defined victory condition aside from earning all 100 achievements, from the trivial ("move left", "click the play field") to the complex ("touch every square", "find and travel to three particular locations in order"). The game spawned two sequels.
Achievements as part of gamification
NSA information-gathering program XKeyscore uses achievements awarding "skilz" points to assist in training new analysts as a form of gamification of learning.
See also
Unlockable (video games)
New Game Plus
References
Video game terminology | Achievement (video games) | [
"Technology"
] | 1,803 | [
"Computing terminology",
"Video game terminology"
] |
28,068,786 | https://en.wikipedia.org/wiki/Neutrino%20theory%20of%20light | The neutrino theory of light is the proposal that the photon is a composite particle formed of a neutrino–antineutrino pair. It is based on the idea that emission and absorption of a photon corresponds to the creation and annihilation of a particle–antiparticle pair. The neutrino theory of light is not currently accepted as part of mainstream physics, as according to the Standard Model the photon is an elementary particle, a gauge boson.
History
In the past, many particles that were once thought to be elementary such as protons, neutrons, pions, and kaons have turned out to be composite particles. In 1932, Louis de Broglie suggested that the photon might be the combination of a neutrino and an antineutrino. During the 1930s there was great interest in the neutrino theory of light and Pascual Jordan, Ralph Kronig, Max Born, and others worked on the theory.
In 1938, Maurice Pryce brought work on the composite photon theory to a halt. He showed that the conditions imposed by Bose–Einstein commutation relations for the composite photon and the connection between its spin and polarization were incompatible. Pryce also pointed out other possible problems,
“In so far as the failure of the theory can be traced to any one cause it is fair to say that it lies in the fact that light waves are polarized transversely while neutrino ‘waves’ are polarized longitudinally,” and lack of rotational invariance. In 1966, V. S. Berezinskii reanalyzed Pryce's paper, giving a clearer picture of the problem that Pryce uncovered.
Starting in the 1960s, work on the neutrino theory of light resumed, and there continues to be some interest in recent years. Attempts have been made to solve the problem pointed out by Pryce, known as Pryce's theorem, and other problems with the composite photon theory. The incentive is seeing the natural way that many photon properties are generated from the theory and the knowledge that some problems exist with the current photon model. However, there is no experimental evidence that the photon has a composite structure.
Some of the problems for the neutrino theory of light are the non-existence of massless neutrinos with both spin parallel and antiparallel to their momentum (outside the Einstein-Cartan torsion) and the fact that composite photons are not bosons. Attempts to solve some of these problems will be discussed, but the lack of massless neutrinos makes it impossible, outside the Einstein-Cartan torsion, to form a massless photon with this theory. The neutrino theory of light is not considered to be part of mainstream physics.
Forming photon from neutrinos
It is possible to obtain transversely polarized photons from neutrinos.
The neutrino field
The neutrino field satisfies the Dirac equation with the mass set to zero,

$$i \gamma^\mu \partial_\mu \psi = 0 .$$

The gamma matrices in the Weyl basis are:

$$\gamma^0 = \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix}, \qquad \gamma^k = \begin{pmatrix} 0 & \sigma^k \\ -\sigma^k & 0 \end{pmatrix}, \quad k = 1, 2, 3,$$

where the $\sigma^k$ are the Pauli matrices. The matrix $\gamma^0$ is Hermitian while $\gamma^k$ is antihermitian. They satisfy the anticommutation relation,

$$\{ \gamma^\mu, \gamma^\nu \} = 2 \eta^{\mu\nu} I,$$

where $\eta^{\mu\nu}$ is the Minkowski metric with signature $(+,-,-,-)$ and $I$ is the unit matrix.
The neutrino field is given by,
where stands for .
and are the fermion annihilation operators for and respectively, while and are the annihilation operators for and .
is a right-handed neutrino and is a left-handed neutrino. The 's are spinors with the superscripts and subscripts referring to the energy and helicity states respectively. Spinor solutions for the Dirac equation are,
The neutrino spinors for negative momenta are related to those of positive momenta by,
The composite photon field
De Broglie and Kronig suggested the use of a local interaction to bind the neutrino–antineutrino pair. (Rosen and Singer have used a delta-potential interaction in forming a composite photon.) Fermi and Yang used a local interaction to bind a fermion–antifermion pair in attempting to form a pion. A four-vector field can be created from a fermion–antifermion pair,
Forming the photon field can be done simply by,
where .
The annihilation operators for right-handed and left-handed photons formed of fermion–antifermion pairs are defined as,
is a spectral function, normalized by
Photon polarization vectors
The polarization vectors corresponding to the combinations used in Eq. (1) are,
Carrying out the matrix multiplications results in,
where and have been placed on the right.
For massless fermions the polarization vectors depend only upon the direction of . Let .
These polarization vectors satisfy the normalization relation,
The Lorentz-invariant dot products of the internal four-momentum with the polarization vectors are,
In three dimensions,
Composite photon satisfies Maxwell’s equations
In terms of the polarization vectors, becomes,
The electric field and magnetic field are given by,
Applying Eq. (6) to Eq. (5), results in,
Maxwell's equations for free space are obtained as follows:
Thus, contains terms of the form
which equate to zero by the first of Eq. (4). This gives,
as contains similar terms.
The expression contains terms of the form
while
contains terms of form . Thus, the last two equations of (4) can be used to show that,
Although the neutrino field violates parity and charge conjugation, and transform in the usual way,
satisfies the Lorenz condition,
which follows from Eq. (3).
Although many choices for gamma matrices can satisfy the Dirac equation, it is essential that one use the Weyl representation in order to get the correct photon polarization vectors and and that satisfy Maxwell's equations. Kronig first realized this. In the Weyl representation, the four-component spinors are describing two sets of two-component neutrinos. The connection between the photon antisymmetric tensor and the two-component Weyl equation was also noted by Sen. One can also produce the above results using a two-component neutrino theory.
To compute the commutation relations for the photon field, one needs the equation,
To obtain this equation, Kronig wrote a relation between the neutrino spinors that was not rotationally invariant as pointed out by Pryce. However, as Perkins showed, this equation follows directly from summing over the polarization vectors, Eq. (2), that were obtained by explicitly solving for the neutrino spinors.
If the momentum is along the third axis, and reduce to the usual polarization vectors for right and left circularly polarized photons respectively.
Inconsistencies
Although composite photons satisfy many properties of real photons, there are major problems with this theory.
Bose–Einstein commutation relations
It is known that a photon is a boson. Does the composite photon satisfy Bose–Einstein commutation relations? Fermions are defined as the particles whose creation and annihilation operators adhere to the anticommutation relations

$$\{ a_{\mathbf{k}}, a_{\mathbf{k}'} \} = 0, \qquad \{ a_{\mathbf{k}}, a^\dagger_{\mathbf{k}'} \} = \delta_{\mathbf{k}\mathbf{k}'},$$

while bosons are defined as the particles that adhere to the commutation relations

$$[ b_{\mathbf{k}}, b_{\mathbf{k}'} ] = 0, \qquad [ b_{\mathbf{k}}, b^\dagger_{\mathbf{k}'} ] = \delta_{\mathbf{k}\mathbf{k}'}.$$
The creation and annihilation operators of composite particles formed of fermion pairs adhere to the commutation relations of the form
with
For Cooper electron pairs, "a" and "c" represent different spin directions. For nucleon pairs (the deuteron), "a" and "c" represent proton and neutron. For neutrino–antineutrino pairs, "a" and "c" represent neutrino and antineutrino. The size of the deviations from pure Bose behavior,
depends on the degree of overlap of the fermion wave functions and the constraints of the Pauli exclusion principle.
If the state has the form
then the expectation value of Eq. (9) vanishes for , and the expression for can be approximated by
Using the fermion number operators and , this can be written,
showing that it is the average number of fermions in a particular state averaged over all states with weighting factors and .
Jordan’s attempt to solve problem
De Broglie did not address the problem of statistics for the composite photon. However, "Jordan considered the essential part of the problem was to construct Bose–Einstein amplitudes from Fermi–Dirac amplitudes", as Pryce noted. Jordan "suggested that it is not the interaction between neutrinos and antineutrinos that binds them together into photons, but rather the manner in which they interact with charged particles that leads to the simplified description of light in terms of photons."
Jordan's hypothesis eliminated the need for theorizing an unknown interaction, but his hypothesis that the neutrino and antineutrino are emitted in exactly the same direction seems rather artificial as noted by Fock. His strong desire to obtain exact Bose–Einstein commutation relations for the composite photon led him to work with a scalar or longitudinally polarized photon. Greenberg and Wightman have pointed out why the one-dimensional case works, but the three-dimensional case does not.
In 1928, Jordan noticed that commutation relations for pairs of fermions were similar to those for bosons. Compare Eq. (7) with Eq. (8). From 1935 until 1937, Jordan, Kronig, and others tried to obtain exact Bose–Einstein commutation relations for the composite photon. Terms were added to the commutation relations to cancel out the delta term in Eq. (8). These terms corresponded to "simulated photons". For example, the absorption of a photon of a given momentum could be simulated by a Raman effect in which a neutrino is absorbed while another with opposite spin and momentum is emitted. (It is now known that single neutrinos or antineutrinos interact so weakly that they cannot simulate photons.)
Pryce’s theorem
In 1938, Pryce showed that one cannot obtain both Bose–Einstein statistics and transversely polarized photons from neutrino–antineutrino pairs. Construction of transversely polarized photons is not the problem. As Berezinski noted, "The only actual difficulty is that the construction of a transverse four-vector is incompatible with the requirement of statistics." In some ways Berezinski gives a clearer picture of the problem. A simple version of the proof is as follows:
The expectation values of the commutation relations for composite right and left-handed photons are:
where
The deviation from Bose–Einstein statistics is caused by and , which are functions of the neutrino numbers operators.
Linear polarization photon operators are defined by
A particularly interesting commutation relation is,
which follows from (10) and (12).
For the composite photon to obey Bose–Einstein commutation relations, at the very least,
Pryce noted. From Eq. (11) and Eq. (13) the requirement is that
gives zero when applied to any state vector. Thus, all the coefficients of
and , etc. must vanish separately. This means , and the composite photon does not exist, completing the proof.
Perkins’ attempt to solve problem
Perkins reasoned that the photon does not have to obey Bose–Einstein commutation relations, because the non-Bose terms are small and they may not cause any detectable effects. Perkins noted, "As presented in many quantum mechanics texts it may appear that Bose statistics follow from basic principles, but it is really from the classical canonical formalism. This is not a reliable procedure as evidenced by the fact that it gives the completely wrong result for spin-1/2 particles." Furthermore, "most integral spin particles (light mesons, strange mesons, etc.) are composite particles formed of quarks. Because of their underlying fermion structure, these integral spin particles are not fundamental bosons, but composite quasibosons. However, in the asymptotic limit, which generally applies, they are essentially bosons. For these particles, Bose commutation relations are just an approximation, albeit a very good one. There are some differences; bringing two of these composite particles close together will force their identical fermions to jump to excited states because of the Pauli exclusion principle."
Berezinskii, in reaffirming Pryce's theorem, argues that commutation relation (14) is necessary for the photon to be truly neutral. However, Perkins has shown that a neutral photon in the usual sense can be obtained without Bose–Einstein commutation relations.
The number operator for a composite photon is defined as
Lipkin suggested, for a rough estimate, to assume that the deviation term is of order $1/\Omega$, where $\Omega$ is a constant equal to the number of states used to construct the wave packet.
Perkins showed that the effect of the composite photon's number operator acting on a state of composite photons is,
using . This result differs from the usual one because of the second term which is small for large . Normalizing in the usual manner,
where is the state of composite photons having momentum which is created by applying on the vacuum times. Note that,
which is the same result as obtained with boson operators. The formulas in Eq. (15) are similar to the usual ones with correction factors that approach zero for large .
Blackbody radiation
The main evidence indicating that photons are bosons comes from the Blackbody radiation experiments which are in agreement with Planck's distribution. Perkins calculated the photon distribution for Blackbody radiation using the second quantization method, but with a composite photon.
The atoms in the walls of the cavity are taken to be a two-level system with photons emitted from the upper level β and absorbed at the lower level α. The transition probability for emission of a photon is enhanced when $n_p$ photons are present,
where the first of (15) has been used. The absorption is enhanced less since the second of (15) is used,
Using the equality,
of the transition rates, Eqs. (16) and (17) are combined to give,
The probability of finding the system with energy E is proportional to $e^{-E/kT}$ according to Boltzmann's distribution law. Thus, the equilibrium between emission and absorption requires that,
with the photon energy . Combining the last two equations results in,
with . For , this reduces to
This equation differs from Planck's law because of the term. For the conditions used in the blackbody radiation experiments of W. W. Coblentz, Perkins estimates that , and the maximum deviation from Planck's law is less than one part in , which is too small to be detected.
Only left-handed neutrinos exist
Experimental results show that only left-handed neutrinos and right-handed antineutrinos exist. Three sets of neutrinos have been observed, one that is connected with electrons, one with muons, and one with tau leptons.
In the standard model the pion and muon decay modes are:
$$\pi^+ \to \mu^+ + \nu_\mu$$
$$\mu^+ \to e^+ + \nu_e + \bar{\nu}_\mu$$
To form a photon which satisfies parity and charge conjugation, two sets of two-component neutrinos (i.e., right-handed and left-handed neutrinos) are needed. Perkins (see Sec. VI of Ref.) attempted to solve this problem by noting that the needed two sets of two-component neutrinos would exist if the positive muon is identified as the particle and the negative muon as the antiparticle. The reasoning is as follows: let $\nu_1$ be the right-handed neutrino and $\nu_2$ the left-handed neutrino, with their corresponding antineutrinos (with opposite helicity). The neutrinos involved in beta decay are $\nu_2$ and $\bar{\nu}_2$, while those for π–μ decay are $\nu_1$ and $\bar{\nu}_1$. With this scheme the pion and muon decay modes are:
$$\pi^+ \to \mu^+ + \bar{\nu}_1$$
$$\mu^+ \to e^+ + \nu_2 + \nu_1$$
Absence of massless neutrinos
There is convincing evidence that neutrinos have mass. In experiments at Super-Kamiokande, researchers have discovered neutrino oscillations in which one flavor of neutrino changed into another. This means that neutrinos have non-zero mass outside the Einstein-Cartan torsion. Since massless neutrinos are needed to form a massless photon, a composite photon is not possible outside the Einstein-Cartan torsion.
References
External links
L. de Broglie "A new conception of light" (English translation)
Light
Obsolete theories in physics | Neutrino theory of light | [
"Physics"
] | 3,511 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Theoretical physics",
"Electromagnetic spectrum",
"Waves",
"Light",
"Obsolete theories in physics"
] |