21,653,550
https://en.wikipedia.org/wiki/Multiplicatively%20closed%20set
In abstract algebra, a multiplicatively closed set (or multiplicative set) is a subset S of a ring R such that the following two conditions hold: 1 ∈ S, and xy ∈ S for all x, y ∈ S. In other words, S is closed under taking finite products, including the empty product 1. Equivalently, a multiplicative set is a submonoid of the multiplicative monoid of a ring. Multiplicative sets are important especially in commutative algebra, where they are used to build localizations of commutative rings. A subset S of a ring R is called saturated if it is closed under taking divisors: i.e., whenever a product xy is in S, the elements x and y are in S too. Examples Examples of multiplicative sets include: the set-theoretic complement of a prime ideal in a commutative ring; the set {1, x, x², x³, ...}, where x is an element of a ring; the set of units of a ring; the set of non-zero-divisors in a ring; the set 1 + I for an ideal I; the Jordan–Pólya numbers, the multiplicative closure of the factorials. Properties An ideal P of a commutative ring R is prime if and only if its complement is multiplicatively closed. A subset S is both saturated and multiplicatively closed if and only if S is the complement of a union of prime ideals. In particular, the complement of a prime ideal is both saturated and multiplicatively closed. The intersection of a family of multiplicative sets is a multiplicative set. The intersection of a family of saturated sets is saturated. See also Localization of a ring Right denominator set Notes References M. F. Atiyah and I. G. Macdonald, Introduction to commutative algebra, Addison-Wesley, 1969. David Eisenbud, Commutative algebra with a view toward algebraic geometry, Springer, 1995. Serge Lang, Algebra 3rd ed., Springer, 2002. Commutative algebra
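The two defining conditions can be spot-checked on finite samples of integers; a minimal sketch (function names are illustrative, not from the article). The complement of the prime ideal (5) in Z passes, while the complement of the non-prime ideal (6) fails, since 2 and 3 lie outside (6) but their product 6 does not:

```python
# Illustrative sketch: checking the two conditions of a multiplicative set,
# 1 in S and xy in S for all x, y in S, on a finite sample of integers.

def closed_under_products(sample, in_set):
    """Spot-check 1 ∈ S and xy ∈ S for the sampled elements of S."""
    if not in_set(1):
        return False
    elems = [x for x in sample if in_set(x)]
    return all(in_set(x * y) for x in elems for y in elems)

# in_set predicate for the complement of the principal ideal (n) in Z
complement_of = lambda n: (lambda x: x % n != 0)

print(closed_under_products(range(1, 50), complement_of(5)))  # True
print(closed_under_products(range(1, 50), complement_of(6)))  # False
```

This mirrors the first property listed above: the complement of an ideal is multiplicatively closed exactly when the ideal is prime.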
Multiplicatively closed set
Mathematics
414
40,294,935
https://en.wikipedia.org/wiki/Wallemia%20ichthyophaga
Wallemia ichthyophaga is one of the three species of fungi in the genus Wallemia, which in turn is the only genus of the class Wallemiomycetes. The phylogenetic origin of the lineage has been placed in various parts of Basidiomycota, but according to the analysis of larger datasets it is a roughly 495-million-year-old sister group of Agaricomycotina. Although initially believed to be asexual, population genomics found evidence of recombination between strains, and a mating-type locus was identified in all sequenced genomes of the species. Only a limited number of strains of W. ichthyophaga have been isolated so far (from hypersaline water of solar salterns, bitterns (magnesium-rich residual solutions in salt production from sea water) and salted meat). W. ichthyophaga requires at least 1.5 M NaCl for in vitro growth (or some other osmolyte for an equivalent water activity), and it thrives even in saturated NaCl solution. This makes it the most halophilic fungus known and distinguishes it from halotolerant (e.g. Aureobasidium pullulans) and extremely halotolerant fungi (e.g. Hortaea werneckii), which are able to grow well even in the absence of salt in the medium. Inability to grow without salt is an exception in the fungal kingdom, but is common in halophilic Archaea. The fungus grows in the form of sarcina-like structures, or compact multicellular clumps. These increase in size almost four-fold when exposed to high salinity, and the cell wall experiences a three-fold thickening. This results in a substantially decreased functional cell volume and is thought to be one of the halotolerance mechanisms of this species. The whole-genome sequencing of W. ichthyophaga revealed that it has one of the smallest of all sequenced basidiomycetous genomes (9.6 Mbp, only 4884 predicted proteins). Contrary to what was observed for the extremely halotolerant H. werneckii, in W. ichthyophaga there are almost no expansions in metal cation transporter genes and their expression is not salt-responsive. On the other hand, there is a vast enrichment of hydrophobins (cell-wall proteins with diverse functions and many biotechnological uses), which contain an unusually high proportion of acidic amino acids. A high proportion of acidic amino acids is thought to be an adaptation of proteins to high concentrations of salt. After sequencing the genomes of nearly all known strains of W. ichthyophaga, population genomic analysis showed that the species forms a single recombining population. References Wallemiales Fungi described in 1887 Fungus species
Wallemia ichthyophaga
Biology
598
8,190,694
https://en.wikipedia.org/wiki/Los%20Angeles%20Conservancy
The Los Angeles Conservancy is a historic preservation organization in Los Angeles, California. It works to document, rescue and revitalize historic buildings, places and neighborhoods in the city. The Conservancy is the largest membership-based historic preservation organization in the country. The group was formed in 1978 to preserve the Los Angeles Central Library, which was threatened with demolition. The organization has over 7000 members and 400 volunteers. It formerly had a volunteer Modern Committee dedicated to the preservation of postwar architecture, as well as a Historic Theaters Committee that produced the annual "Last Remaining Seats" series of classic films shown in the historic movie palaces of downtown Los Angeles. The executive director since 1992 has been Linda Dishman. The Conservancy hosts an annual preservation awards ceremony at the Millennium Biltmore Hotel and works closely with the business, political and development communities to find preservation solutions for historic buildings. Some of the Conservancy's biggest success stories have included Bullocks Wilshire, the Cathedral of Saint Vibiana, the Wiltern Theatre and the oldest operating McDonald's in Downey, California. In 2006, the L.A. Conservancy won the American Planning Association's Daniel Burnham Award, its most prestigious national planning award. References External links Modern Committee Non-profit organizations based in Los Angeles Historic preservation organizations in the United States Heritage organizations Architectural history Urban planning in California Buildings and structures in Los Angeles 1978 establishments in California Organizations established in 1978
Los Angeles Conservancy
Engineering
298
453,755
https://en.wikipedia.org/wiki/Mitchell%27s%20embedding%20theorem
Mitchell's embedding theorem, also known as the Freyd–Mitchell theorem or the full embedding theorem, is a result about abelian categories; it essentially states that these categories, while rather abstractly defined, are in fact concrete categories of modules. This allows one to use element-wise diagram chasing proofs in these categories. The theorem is named after Barry Mitchell and Peter Freyd. Details The precise statement is as follows: if A is a small abelian category, then there exists a ring R (with 1, not necessarily commutative) and a full, faithful and exact functor F: A → R-Mod (where the latter denotes the category of all left R-modules). The functor F yields an equivalence between A and a full subcategory of R-Mod in such a way that kernels and cokernels computed in A correspond to the ordinary kernels and cokernels computed in R-Mod. Such an equivalence is necessarily additive. The theorem thus essentially says that the objects of A can be thought of as R-modules, and the morphisms as R-linear maps, with kernels, cokernels, exact sequences and sums of morphisms being determined as in the case of modules. However, projective and injective objects in A do not necessarily correspond to projective and injective R-modules. Sketch of the proof Let L(A, Ab) be the category of left exact functors from the abelian category A to the category of abelian groups Ab. First we construct a contravariant embedding H: A → L(A, Ab) by H(A) = hA for all A in A, where hA is the covariant hom-functor hA(X) = HomA(A, X). The Yoneda Lemma states that H is fully faithful, and we also get the left exactness of H very easily because hA is already left exact. The proof of the right exactness of H is harder and can be read in Swan, Lecture Notes in Mathematics 76. After that we prove that L(A, Ab) is an abelian category by using localization theory (also Swan). This is the hard part of the proof. It is easy to check that the abelian category L(A, Ab) is an AB5 category with a generator. In other words it is a Grothendieck category and therefore has an injective cogenerator I. The endomorphism ring R = End(I) is the ring we need for the category of R-modules. By G(B) = Hom(B, I) we get another contravariant, exact and fully faithful embedding G: L(A, Ab) → R-Mod. The composition GH: A → R-Mod is the desired covariant, exact and fully faithful embedding. Note that the proof of the Gabriel–Quillen embedding theorem for exact categories is almost identical. References Module theory Additive categories Theorems in algebra
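The two steps of the proof sketch can be summarised in display form (the symbols H, G, I, and R = End(I) are conventional names for the functors and objects described above, not notation fixed by the text):

```latex
% Step 1: contravariant Yoneda-type embedding into the category of
% left exact functors from A to abelian groups.
\[
  H \colon \mathcal{A} \longrightarrow \mathcal{L}(\mathcal{A}, \mathbf{Ab}),
  \qquad
  H(A) = \operatorname{Hom}_{\mathcal{A}}(A, -).
\]
% Step 2: Hom into an injective cogenerator I of the Grothendieck
% category L(A, Ab), with R its endomorphism ring.
\[
  G \colon \mathcal{L}(\mathcal{A}, \mathbf{Ab}) \longrightarrow R\text{-}\mathbf{Mod},
  \qquad
  G(B) = \operatorname{Hom}(B, I), \quad R = \operatorname{End}(I).
\]
% The composite of two contravariant functors is covariant:
\[
  F = G \circ H \colon \mathcal{A} \longrightarrow R\text{-}\mathbf{Mod}
  \quad \text{is covariant, exact and fully faithful.}
\]
```

Since each of H and G is contravariant and exact, their composite F is covariant and exact, which is the form the theorem requires.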
Mitchell's embedding theorem
Mathematics
549
57,594,587
https://en.wikipedia.org/wiki/ESSA-5
ESSA-5 (or TOS-C) was a spin-stabilized operational meteorological satellite. Its name was derived from that of its oversight agency, the Environmental Science Services Administration (ESSA). Launch ESSA-5 was launched on April 20, 1967, at 11:17 UTC, atop a Delta rocket from Vandenberg Air Force Base, California, U.S. The spacecraft had a mass of at the time of launch. ESSA-5 had an inclination of 101.9° and orbited the Earth once every 113.6 minutes. Its perigee was and its apogee was . References Spacecraft launched in 1967 Weather satellites of the United States Television Infrared Observation Satellites
ESSA-5
Astronomy
149
76,101,893
https://en.wikipedia.org/wiki/Altermagnetism
In condensed matter physics, altermagnetism is a type of persistent magnetic state in ideal crystals. Altermagnetic structures are collinear and crystal-symmetry compensated, resulting in zero net magnetisation. Unlike in an ordinary collinear antiferromagnet, another magnetic state with zero net magnetization, the electronic bands in an altermagnet are not Kramers degenerate, but instead depend on the wavevector in a spin-dependent way. Related to this feature, key experimental observations were published in 2024. It has been speculated that altermagnetism may have applications in the field of spintronics. Crystal structure and symmetry In altermagnetic materials, atoms form a regular pattern with alternating spin and spatial orientation at adjacent magnetic sites in the crystal. In altermagnets, atoms with opposite magnetic moments are coupled by a crystal rotation or mirror symmetry. The spatial orientation of magnetic atoms may originate from the surrounding cages of non-magnetic atoms. The opposite spin sublattices in altermagnetic manganese telluride (MnTe) are related by spin rotation combined with six-fold crystal rotation and half-unit-cell translation. In altermagnetic ruthenium dioxide (RuO2), the opposite spin sublattices are related by four-fold crystal rotation. Electronic structure One of the distinctive features of altermagnets is a specifically spin-split band structure, which was first experimentally observed in work published in 2024. The altermagnetic band structure breaks the time-reversal symmetry E(k, s) = E(−k, s) (where E is energy, k the wavevector and s the spin), as in ferromagnets; however, unlike in ferromagnets, it does not generate net magnetization. The altermagnetic spin polarisation alternates in wavevector space and forms characteristic 2, 4, or 6 spin-degenerate nodes, which correspond to d-, g-, or i-wave order parameters, respectively. A d-wave altermagnet can be regarded as the magnetic counterpart of a d-wave superconductor.
The altermagnetic spin polarization in band structure (energy–wavevector diagram) is collinear and does not break inversion symmetry. The altermagnetic spin splitting is even in wavevector, e.g. of the form (kx² − ky²)sz. It is thus also distinct from noncollinear Rashba or Dresselhaus spin textures, which break inversion symmetry in noncentrosymmetric nonmagnetic or antiferromagnetic materials due to spin–orbit coupling. Unconventional time-reversal symmetry breaking, a giant ~1 eV spin splitting, and the anomalous Hall effect were first theoretically predicted and experimentally confirmed in RuO2. Materials Direct experimental evidence of altermagnetic band structure in semiconducting MnTe and metallic RuO2 was first published in 2024. Many more materials are predicted to be altermagnets – ranging from insulators, semiconductors, and metals to superconductors. Altermagnetism was predicted in 3D and 2D materials with both light as well as heavy elements, and can be found in nonrelativistic as well as relativistic band structures. Properties Altermagnets exhibit an unusual combination of ferromagnetic and antiferromagnetic properties, which, remarkably, more closely resemble those of ferromagnets. Hallmarks of altermagnetic materials such as the anomalous Hall effect have been observed before (but this effect also occurs in other magnetically compensated systems such as non-collinear antiferromagnets). Altermagnets also exhibit unique properties such as anomalous Hall responses and spin currents that can change sign as the crystal rotates. Experimental observations In December 2024, researchers from the University of Nottingham provided the first experimental imaging of altermagnetism, confirming its unique spin-symmetry properties. Using nitrogen-vacancy centre microscopy and X-ray magnetic linear dichroism (XMLD), they visualized spin-polarized currents arising from the crystal-symmetry-protected altermagnetic order.
This order featured antiparallel spin alignment within distinct crystal sublattices, creating a compensating spin polarization without macroscopic magnetization. These findings validated theoretical predictions and demonstrated the potential of altermagnetic materials in high-speed, low-energy spintronic devices. References Magnetic ordering 2024 in science
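The d-wave splitting form mentioned above can be illustrated numerically. In this sketch the function name and the unit coupling constant are assumptions for illustration, not values from the article; the point is that a splitting of the form (kx² − ky²)sz is even in the wavevector, changes sign under a four-fold rotation, and vanishes on the nodal lines kx = ±ky:

```python
# Illustrative sketch of a d-wave altermagnetic spin splitting,
# Δ(k) = λ (kx² − ky²) sz, with an assumed coupling λ = 1 (arbitrary units).

LAMBDA = 1.0  # assumed coupling strength, not from the article

def spin_splitting(kx, ky, sz=1):
    """d-wave form factor: even in k, sign-alternating under 90° rotation."""
    return LAMBDA * (kx**2 - ky**2) * sz

k = (0.3, 0.1)
rot90 = (-k[1], k[0])  # wavevector rotated by 90 degrees

print(spin_splitting(*k))            # positive on this side of the node
print(spin_splitting(*rot90))        # sign flipped by the four-fold rotation
print(spin_splitting(-k[0], -k[1]))  # unchanged: even in k, unlike Rashba
print(spin_splitting(0.2, 0.2))      # zero on the node kx = ky
```

The four sign-alternating lobes separated by two nodal lines are what the text calls the four spin-degenerate nodes of a d-wave order parameter.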
Altermagnetism
Physics,Chemistry,Materials_science,Engineering
916
186,497
https://en.wikipedia.org/wiki/Mefloquine
Mefloquine, sold under the brand name Lariam among others, is a medication used to prevent or treat malaria. When used for prevention it is typically started before potential exposure and continued for several weeks after potential exposure. It can be used to treat mild or moderate malaria but is not recommended for severe malaria. It is taken by mouth. Common side effects include vomiting, diarrhea, headaches, sleep disorders, and a rash. Serious side effects include potentially long-term mental health problems such as depression, hallucinations, and anxiety and neurological side effects such as poor balance, seizures, and ringing in the ears. It is therefore not recommended in people with a history of mental health problems or epilepsy. It appears to be safe during pregnancy and breastfeeding. Mefloquine was developed by the United States Army in the 1970s and came into use in the mid-1980s. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. Medical uses Mefloquine is used to both prevent and treat certain forms of malaria. Malaria prevention Mefloquine is useful for the prevention of malaria in all areas except for those where parasites may have resistance to multiple medications, and is one of several anti-malarial medications recommended by the United States Centers for Disease Control and Prevention for this purpose. It is also recommended by the Infectious Disease Society of America for malaria prophylaxis as a first or second-line agent, depending on resistance patterns in the malaria found in the geographic region visited. It is typically taken for one to two weeks before entering an area with malaria. Doxycycline and atovaquone/proguanil provide protection within one to two days and may be better tolerated. If a person becomes ill with malaria despite prophylaxis with mefloquine, the use of halofantrine and quinine for treatment may be ineffective. 
Malaria treatment Mefloquine is used as a treatment for chloroquine-sensitive or resistant Plasmodium falciparum malaria, and is deemed a reasonable alternative for uncomplicated chloroquine-resistant Plasmodium vivax malaria. It is one of several drugs recommended by the United States' Centers for Disease Control and Prevention. It is not recommended for severe malaria infections, particularly infections from P. falciparum, which should be treated with intravenous antimalarials. Mefloquine does not eliminate parasites in the liver phase of the disease, and people with P. vivax malaria should be treated with a second drug that is effective for the liver phase, such as primaquine. Resistance to mefloquine Resistance to mefloquine is common along the western border of Cambodia and in other parts of Southeast Asia. The mechanism of resistance is an increase in Pfmdr1 copy number. Adverse effects Common side effects include vomiting, diarrhea, headaches, and a rash. Severe side effects requiring hospitalization are rare, but include mental health problems such as depression, hallucinations, anxiety and neurological side effects such as poor balance, seizures, and ringing in the ears. Mefloquine is therefore not recommended in people with a history of psychiatric disorders or epilepsy. Neurologic and psychiatric In 2013, the U.S. Food and Drug Administration (FDA) added a boxed warning to the prescription label of mefloquine regarding the potential for neuropsychiatric side effects that may persist even after discontinuing administration of the medication. In 2013 the FDA stated "Neurologic side effects can occur at any time during drug use, and can last for months to years after the drug is stopped or can be permanent." Neurologic effects include dizziness, loss of balance, seizures, and tinnitus. Psychiatric effects include nightmares, visual hallucinations, auditory hallucinations, anxiety, depression, unusual behavior, and suicidal ideations.
Central nervous system events requiring hospitalization occur in about one in 10,000 people taking mefloquine for malaria prevention, with milder events (e.g., dizziness, headache, insomnia, and vivid dreams) in up to 25%. When some measure of subjective severity is applied to the rating of adverse events, about 11–17% of travelers are incapacitated to some degree. Cardiac Mefloquine may cause abnormalities with heart rhythms that are visible on electrocardiograms. Combining mefloquine with other drugs that cause similar effects, such as quinine or quinidine, can increase these effects. Combining mefloquine with halofantrine can cause significant increases in QTc intervals. Contraindications Mefloquine is contraindicated in those with a previous history of seizures or a recent history of psychiatric disorders. Pregnancy and breastfeeding Available data suggest that mefloquine is safe and effective for use by pregnant women during all trimesters of pregnancy, and it is widely used for this indication. In pregnant women, mefloquine appears to pose minimal risk to the fetus, and is not associated with increased risk of birth defects or miscarriages. Compared to other malaria chemoprophylaxis regimens, however, mefloquine may produce more side effects in non-pregnant travelers. Mefloquine is also safe and effective for use during breastfeeding, though it appears in breast milk in low concentrations. The World Health Organization (WHO) gives approval for the use of mefloquine in the second and third trimesters of pregnancy, and states that use in the first trimester does not mandate termination of pregnancy. Pharmacology Elimination Mefloquine is metabolized primarily through the liver. Its elimination in persons with impaired liver function may be prolonged, resulting in higher plasma levels and an increased risk of adverse reactions. The mean elimination plasma half-life of mefloquine is between two and four weeks.
Total clearance is through the liver, and the primary means of excretion is through the bile and feces, as opposed to only 4% to 9% excreted through the urine. During long-term use, the plasma half-life remains unchanged. Liver function tests should be performed during long-term administration of mefloquine. Alcohol use should be avoided during treatment with mefloquine. Chemistry It is used specifically as mefloquine hydrochloride. Mefloquine is a chiral molecule with two asymmetric carbon centres, which means it has four different stereoisomers. The drug is currently manufactured and sold as a racemate of the (R,S)- and (S,R)-enantiomers by Hoffmann-La Roche, a Swiss pharmaceutical company. Essentially, it is two drugs in one. Plasma concentrations of the (–)-enantiomer are significantly higher than those for the (+)-enantiomer, and the pharmacokinetics between the two enantiomers are significantly different. The (+)-enantiomer has a shorter half-life than the (–)-enantiomer. History Mefloquine was formulated at Walter Reed Army Institute of Research (WRAIR) in the 1970s, shortly after the end of the Vietnam War. Mefloquine was number 142,490 of a total of 250,000 antimalarial compounds screened during the study. Mefloquine was the first Public-Private Venture (PPV) between the US Department of Defense and a pharmaceutical company. WRAIR transferred all its phase I and phase II clinical trial data to Hoffmann-La Roche and Smith Kline. FDA approval as a treatment for malaria was swift. Most notably, phase III safety and tolerability trials were skipped. The drug was first approved in Switzerland in 1984 by Hoffmann-La Roche, which brought it to market under the name Lariam. However, mefloquine was not approved by the FDA for prophylactic use until 1989. This approval was based primarily on compliance, while safety and tolerability were overlooked.
Because of the drug's very long half-life, the Centers for Disease Control originally recommended a mefloquine dosage of 250 mg every two weeks; however, this caused an unacceptably high malaria rate in the Peace Corps volunteers who participated in the approval study, so the drug regimen was switched to once a week. By 1991, Hoffmann-La Roche was marketing the drug on a worldwide basis. By the 1992 UNITAF deployment, Canadian soldiers were being prescribed the drug en masse. By 1994, medical professionals were noting "severe psychiatric side effects observed during prophylaxis and treatment with mefloquine", and recommending that "the absence of contraindications and minor side effects during an initial course of mefloquine should be confirmed before another course is prescribed." Other doctors at the University Hospital of Zurich described the case of "a 47-year-old, previously healthy Japanese tourist" who had severe neuropsychiatric side-effects from the drug. The first randomized, controlled trial on a mixed population was performed in 2001. Prophylaxis with mefloquine was compared to prophylaxis with atovaquone-proguanil. Roughly 67% of participants in the mefloquine arm reported greater than or equal to one adverse event, versus 71% in the atovaquone-proguanil arm. In the mefloquine arm, 5% of the users reported severe events requiring medical attention, versus 1.2% in the atovaquone-proguanil arm. In August 2009, Roche stopped marketing Lariam in the United States. Retired soldier Johnny Mercer, who was later appointed Minister for Veterans Affairs by Boris Johnson, said in 2015 that he had received "a letter about once or twice a week" about ill-effects from the drug. In July 2016, Roche took this brand off the market in Ireland. Military In 2006, the Australian military deemed mefloquine "a third-line drug" alternative, and over the five years from 2011 only 25 soldiers had been prescribed the drug, and only in cases of their intolerance for other alternatives.
Between 2001 and 2012, 16,000 Canadian soldiers sent to Afghanistan were given the drug as a preventative measure. In 2013, the US Army banned mefloquine from use by its special forces such as the Green Berets. In autumn 2016, the UK military followed suit with their Australian peers after a parliamentary inquiry into the matter revealed that it can cause permanent side effects and brain damage. In early December 2016, the German defence ministry removed mefloquine from the list of medications it would provide to its soldiers. In autumn 2016, Canadian Surgeon General Brigadier General Hugh Colin MacKay told a parliamentary committee that faulty science supported the assertion that the drug has indelible noxious side effects. Barbara Raymond, an expert from Health Canada, told the same committee that the evidence she had read failed to support the conclusion of indelible side effects. Canadian soldiers who took mefloquine when deployed overseas have claimed they have been left with ongoing mental health problems. In 2020 the UK Ministry of Defence (MoD) admitted to a breach of duty regarding the use of mefloquine, acknowledging numerous instances of failure to assess the risks and warn of potential side effects of the drug. Research In June 2010, the first case report appeared of a progressive multifocal leukoencephalopathy being successfully treated with mefloquine. Mefloquine can also act against the JC virus. Administration of mefloquine seemed to eliminate the virus from the patient's body and prevented further neurological deterioration. Mefloquine alters cholinergic synaptic transmission through both postsynaptic and presynaptic actions. The postsynaptic action to inhibit acetylcholinesterase changes transmission across synapses in the brain.
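The unusually long elimination half-life quoted in the Pharmacology section can be sketched with simple first-order kinetics. The 21-day half-life used here is an assumed value within the two-to-four-week range stated above, not a figure from the article:

```python
# Illustrative sketch: first-order elimination of a single mefloquine dose,
# assuming a half-life of 21 days (within the 2-4 week range quoted above).

def fraction_remaining(days, half_life_days=21.0):
    """Fraction of the dose still in plasma: C(t)/C0 = 2^(-t / t_half)."""
    return 2.0 ** (-days / half_life_days)

print(fraction_remaining(21))   # 0.5 after one half-life
print(fraction_remaining(63))   # 0.125 after three half-lives
```

A half-life of weeks is why prophylaxis is continued for several weeks after potential exposure, and why adverse effects can outlast the final dose.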
References Further reading External links American inventions Antimalarial agents Chirality Drug safety Drugs developed by Hoffmann-La Roche Medical controversies Piperidines Quinolines Racemic mixtures Trifluoromethyl compounds World Health Organization essential medicines Wikipedia medicine articles ready to translate
Mefloquine
Physics,Chemistry,Biology
2,562
48,128,982
https://en.wikipedia.org/wiki/Kepler-453b
Kepler-453b is a transiting circumbinary exoplanet in the binary-star system Kepler-453. It orbits the binary system in the habitable zone every 240.5 days. The orbit of the planet is inclined relative to the binary orbit; precession of the orbit therefore leads to it spending most of its time in a non-transiting configuration. By the time the TESS and PLATO spacecraft are available for follow-up observations it will no longer be transiting. References Exoplanets discovered in 2015 Transiting exoplanets Giant planets in the habitable zone 453b Circumbinary planets Lyra
Kepler-453b
Astronomy
136
156,310
https://en.wikipedia.org/wiki/Radicle
In botany, the radicle is the first part of a seedling (a growing plant embryo) to emerge from the seed during the process of germination. The radicle is the embryonic root of the plant, and grows downward in the soil (the shoot emerges from the plumule). Above the radicle is the embryonic stem or hypocotyl, supporting the cotyledon(s). As the first structure to emerge from the seed, the radicle grows down into the ground, allowing the seedling to take up water before it sends out leaves and begins photosynthesis. The radicle emerges from a seed through the micropyle. Radicles in seedlings are classified into two main types: those pointing away from the seed coat scar or hilum are classified as antitropous, and those pointing towards the hilum are syntropous. If the radicle begins to decay, the seedling undergoes pre-emergence damping off. This disease appears on the radicle as darkened spots and eventually causes death of the seedling. The plumule is the embryonic shoot, which grows after the radicle. In 1880, Charles Darwin published a book about plants he had studied, The Power of Movement in Plants, in which he mentions the radicle. See also Plant perception (physiology) References Plant anatomy Plant morphology Plant intelligence
Radicle
Biology
299
2,068,153
https://en.wikipedia.org/wiki/Tent%20map
In mathematics, the tent map with parameter μ is the real-valued function fμ defined by fμ(x) = μ min{x, 1 − x}, the name being due to the tent-like shape of the graph of fμ. For values of the parameter μ between 0 and 2, fμ maps the unit interval [0, 1] into itself, thus defining a discrete-time dynamical system on it (equivalently, a recurrence relation). In particular, iterating a point x0 in [0, 1] gives rise to a sequence xn with xn+1 = fμ(xn), where μ is a positive real constant. Choosing for instance the parameter μ = 2, the effect of the function fμ may be viewed as the result of the operation of folding the unit interval in two, then stretching the resulting interval [0, 1/2] to get again the interval [0, 1]. Iterating the procedure, any point x0 of the interval assumes new subsequent positions as described above, generating a sequence xn in [0, 1]. The μ = 2 case of the tent map is a non-linear transformation of both the bit shift map and the r = 4 case of the logistic map. Behaviour The tent map with parameter μ = 2 and the logistic map with parameter r = 4 are topologically conjugate, and thus the behaviours of the two maps are in this sense identical under iteration. Depending on the value of μ, the tent map demonstrates a range of dynamical behaviour ranging from predictable to chaotic. If μ is less than 1 the point x = 0 is an attractive fixed point of the system for all initial values of x, i.e. the system will converge towards x = 0 from any initial value of x. If μ is 1 all values of x less than or equal to 1/2 are fixed points of the system. If μ is greater than 1 the system has two fixed points, one at 0, and the other at μ/(μ + 1). Both fixed points are unstable, i.e. a value of x close to either fixed point will move away from it, rather than towards it. For example, when μ is 1.5 there is a fixed point at x = 0.6 (since 1.5(1 − 0.6) = 0.6), but starting at x = 0.61 we get 0.585, 0.6225, 0.56625, ..., moving progressively further from 0.6. If μ is between 1 and the square root of 2 the system maps a set of intervals between μ − μ²/2 and μ/2 to themselves.
This set of intervals is the Julia set of the map – that is, it is the smallest invariant subset of the real line under this map. If μ is greater than the square root of 2, these intervals merge, and the Julia set is the whole interval from μ − μ²/2 to μ/2 (see bifurcation diagram). If μ is between 1 and 2 the interval [μ − μ²/2, μ/2] contains both periodic and non-periodic points, although all of the orbits are unstable (i.e. nearby points move away from the orbits rather than towards them). Orbits with longer lengths appear as μ increases. If μ equals 2 the system maps the interval [0, 1] onto itself. There are now periodic points with every orbit length within this interval, as well as non-periodic points. The periodic points are dense in [0, 1], so the map has become chaotic. In fact, the dynamics will be non-periodic if and only if x0 is irrational. This can be seen by noting what the map does when x0 is expressed in binary notation: it shifts the binary point one place to the right; then, if what appears to the left of the binary point is a "one" it changes all ones to zeroes and vice versa (with the exception of the final bit "one" in the case of a finite binary expansion); starting from an irrational number, this process goes on forever without repeating itself. The invariant measure for x is the uniform density over the unit interval. The autocorrelation function for a sufficiently long sequence {xn} will show zero autocorrelation at all non-zero lags. Thus {xn} cannot be distinguished from white noise using the autocorrelation function. Note that the r = 4 case of the logistic map and the μ = 2 case of the tent map are topologically conjugate to each other: denoting the logistically evolving variable as yn, the conjugating homeomorphism is yn = sin²(πxn/2). If μ is greater than 2 the map's Julia set becomes disconnected, and breaks up into a Cantor set within the interval [0, 1].
The Julia set still contains an infinite number of both non-periodic and periodic points (including orbits for any orbit length) but almost every point within [0, 1] will now eventually diverge towards infinity. The canonical Cantor set (obtained by successively deleting middle thirds from subsets of the unit line) is the Julia set of the tent map for μ = 3. Magnifying the orbit diagram A closer look at the orbit diagram shows that there are 4 separated regions at μ ≈ 1. For further magnification, 2 reference lines (red) are drawn from the tip to suitable x at certain μ (e.g., 1.10). With distance measured from the corresponding reference lines, further detail appears in the upper and lower part of the map (a total of 8 separated regions at some μ). Asymmetric tent map The asymmetric tent map is essentially a distorted, but still piecewise linear, version of the μ = 2 case of the tent map. It is defined by fa(x) = x/a for x in [0, a) and fa(x) = (1 − x)/(1 − a) for x in [a, 1], for a parameter a in (0, 1). The μ = 2 case of the tent map is the present case of a = 1/2. A sequence {xn} generated by this map will have the same autocorrelation function as data from a first-order autoregressive process with independently and identically distributed innovations. Thus data from an asymmetric tent map cannot be distinguished, using the autocorrelation function, from data generated by a first-order autoregressive process. Applications The tent map has found applications in social cognitive optimization, chaos in economics, image encryption, risk and market-sentiment models for pricing, etc. See also Shift space Gray code References External links ChaosBook.org Chaotic maps
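The behaviour described above can be checked numerically. The sketch below assumes the standard form fμ(x) = μ min{x, 1 − x} and the sin² conjugacy with the logistic map; it verifies the unstable fixed point at μ = 1.5 and the μ = 2 / r = 4 conjugacy:

```python
# Illustrative sketch: tent map fixed point and tent-logistic conjugacy.
import math

def tent(x, mu):
    return mu * min(x, 1.0 - x)

def logistic(y, r=4.0):
    return r * y * (1.0 - y)

# x = 0.6 is fixed for mu = 1.5, but a nearby orbit drifts away (instability).
print(tent(0.6, 1.5))          # stays at (approximately) 0.6
x = 0.61
for _ in range(5):
    x = tent(x, 1.5)
print(abs(x - 0.6))            # distance has grown well past the initial 0.01

# Conjugacy: h(x) = sin^2(pi x / 2) carries mu = 2 tent orbits onto
# r = 4 logistic orbits, i.e. h(tent(x)) = logistic(h(x)).
h = lambda x: math.sin(math.pi * x / 2.0) ** 2
x0 = 0.3
print(abs(h(tent(x0, 2.0)) - logistic(h(x0))))  # ~0, rounding error only
```

The conjugacy check is exactly the "identical behaviour under iteration" claim from the Behaviour section, tested at a single point.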
Tent map
Mathematics
1,274
7,346
https://en.wikipedia.org/wiki/Centimetre%E2%80%93gram%E2%80%93second%20system%20of%20units
The centimetre–gram–second system of units (CGS or cgs) is a variant of the metric system based on the centimetre as the unit of length, the gram as the unit of mass, and the second as the unit of time. All CGS mechanical units are unambiguously derived from these three base units, but there are several different ways in which the CGS system was extended to cover electromagnetism. The CGS system has been largely supplanted by the MKS system based on the metre, kilogram, and second, which was in turn extended and replaced by the International System of Units (SI). In many fields of science and engineering, SI is the only system of units in use, but CGS is still prevalent in certain subfields. In measurements of purely mechanical systems (involving units of length, mass, force, energy, pressure, and so on), the differences between CGS and SI are straightforward: the unit-conversion factors are all powers of 10, since 100 cm = 1 m and 1000 g = 1 kg. For example, the CGS unit of force is the dyne, which is defined as 1 g⋅cm/s², so the SI unit of force, the newton (1 kg⋅m/s²), is equal to 100000 dynes. On the other hand, in measurements of electromagnetic phenomena (involving units of charge, electric and magnetic fields, voltage, and so on), converting between CGS and SI is less straightforward. Formulas for physical laws of electromagnetism (such as Maxwell's equations) take a form that depends on which system of units is being used, because the electromagnetic quantities are defined differently in SI and in CGS. Furthermore, within CGS, there are several plausible ways to define electromagnetic quantities, leading to different "sub-systems", including Gaussian units, "ESU", "EMU", and Heaviside–Lorentz units. Among these choices, Gaussian units are the most common today, and "CGS units" is often intended to refer to CGS-Gaussian units. 
History The CGS system goes back to a proposal in 1832 by the German mathematician Carl Friedrich Gauss to base a system of absolute units on the three fundamental units of length, mass and time. Gauss chose the units of millimetre, milligram and second. In 1873, a committee of the British Association for the Advancement of Science, including physicists James Clerk Maxwell and William Thomson, 1st Baron Kelvin recommended the general adoption of centimetre, gram and second as fundamental units, and to express all derived electromagnetic units in these fundamental units, using the prefix "C.G.S. unit of ...". The sizes of many CGS units turned out to be inconvenient for practical purposes. For example, many everyday objects are hundreds or thousands of centimetres long, such as humans, rooms and buildings. Thus the CGS system never gained wide use outside the field of science. Starting in the 1880s, and more significantly by the mid-20th century, CGS was gradually superseded internationally for scientific purposes by the MKS (metre–kilogram–second) system, which in turn developed into the modern SI standard. Since the international adoption of the MKS standard in the 1940s and the SI standard in the 1960s, the technical use of CGS units has gradually declined worldwide. CGS units have been deprecated in favor of SI units by NIST, as well as organizations such as the American Physical Society and the International Astronomical Union. SI units are predominantly used in engineering applications and physics education, while Gaussian CGS units are still commonly used in theoretical physics, describing microscopic systems, relativistic electrodynamics, and astrophysics. The units gram and centimetre remain useful as noncoherent units within the SI system, as with any other prefixed SI units. Definition of CGS units in mechanics In mechanics, the quantities in the CGS and SI systems are defined identically. 
The two systems differ only in the scale of the three base units (centimetre versus metre and gram versus kilogram, respectively), with the third unit (second) being the same in both systems. There is a direct correspondence between the base units of mechanics in CGS and SI. Since the formulae expressing the laws of mechanics are the same in both systems and since both systems are coherent, the definitions of all coherent derived units in terms of the base units are the same in both systems, and there is an unambiguous relationship between derived units: v = x/t (definition of velocity), F = m⋅a (Newton's second law of motion), E = F⋅x (energy defined in terms of work), p = F/A (pressure defined as force per unit area), η = τ/(dv/dx) (dynamic viscosity defined as shear stress per unit velocity gradient). Thus, for example, the CGS unit of pressure, barye, is related to the CGS base units of length, mass, and time in the same way as the SI unit of pressure, pascal, is related to the SI base units of length, mass, and time: 1 unit of pressure = 1 unit of force / (1 unit of length)² = 1 unit of mass / (1 unit of length × (1 unit of time)²) 1 Ba = 1 g/(cm⋅s²) 1 Pa = 1 kg/(m⋅s²). Expressing a CGS derived unit in terms of the SI base units, or vice versa, requires combining the scale factors that relate the two systems: 1 Ba = 1 g/(cm⋅s²) = 10⁻³ kg / (10⁻² m⋅s²) = 10⁻¹ kg/(m⋅s²) = 10⁻¹ Pa. Definitions and conversion factors of CGS units in mechanics Derivation of CGS units in electromagnetism CGS approach to electromagnetic units The conversion factors relating electromagnetic units in the CGS and SI systems are made more complex by the differences in the formulas expressing physical laws of electromagnetism as assumed by each system of units, specifically in the nature of the constants that appear in these formulas. 
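Because CGS and SI differ only by powers of ten in the base units, such derived-unit conversions are pure bookkeeping. A small Python sketch using exact rational scale factors (the constant names are mine, for illustration):

```python
from fractions import Fraction

# Exact CGS -> SI scale factors for the base units
CM_TO_M = Fraction(1, 100)    # 1 cm = 10^-2 m
G_TO_KG = Fraction(1, 1000)   # 1 g  = 10^-3 kg

# 1 dyn = 1 g*cm/s^2, expressed in newtons (kg*m/s^2)
dyne_in_newton = G_TO_KG * CM_TO_M
assert dyne_in_newton == Fraction(1, 100000)   # 1 dyn = 10^-5 N

# 1 Ba = 1 g/(cm*s^2), expressed in pascals (kg/(m*s^2))
barye_in_pascal = G_TO_KG / CM_TO_M
assert barye_in_pascal == Fraction(1, 10)      # 1 Ba = 10^-1 Pa
```

Using Fraction keeps the powers of ten exact, so the assertions mirror the 1 Ba = 10⁻¹ Pa derivation above without floating-point noise.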
This illustrates the fundamental difference in the ways the two systems are built: In SI, the unit of electric current, the ampere (A), was historically defined such that the magnetic force exerted by two infinitely long, thin, parallel wires 1 metre apart and carrying a current of 1 ampere is exactly 2 × 10⁻⁷ newtons per metre of length. This definition results in all SI electromagnetic units being numerically consistent (subject to factors of some integer powers of 10) with those of the CGS-EMU system described in further sections. The ampere is a base unit of the SI system, with the same status as the metre, kilogram, and second. Thus the relationship in the definition of the ampere with the metre and newton is disregarded, and the ampere is not treated as dimensionally equivalent to any combination of other base units. As a result, electromagnetic laws in SI require an additional constant of proportionality (see Vacuum permeability) to relate electromagnetic units to kinematic units. (This constant of proportionality is derivable directly from the above definition of the ampere.) All other electric and magnetic units are derived from these four base units using the most basic common definitions: for example, electric charge q is defined as current I multiplied by time t, resulting in the unit of electric charge, the coulomb (C), being defined as 1 C = 1 A⋅s. The CGS system variant avoids introducing new base quantities and units, and instead defines all electromagnetic quantities by expressing the physical laws that relate electromagnetic phenomena to mechanics with only dimensionless constants, and hence all units for these quantities are directly derived from the centimetre, gram, and second. In each of these systems the quantities called "charge" etc. may be a different quantity; they are distinguished here by a superscript. The corresponding quantities of each system are related through a proportionality constant. 
Maxwell's equations can be written in each of these systems as: Electrostatic units (ESU) In the electrostatic units variant of the CGS system (CGS-ESU), charge is defined as the quantity that obeys a form of Coulomb's law without a multiplying constant (and current is then defined as charge per unit time): F = q₁q₂/r². The ESU unit of charge, the franklin (Fr), also known as the statcoulomb or esu charge, is therefore defined as follows: two equal point charges spaced 1 centimetre apart are said to be of 1 franklin each if the electric force between them is 1 dyne. Therefore, in CGS-ESU, a franklin is equal to a centimetre times the square root of a dyne: 1 Fr = 1 cm⋅dyn1/2 = 1 g1/2⋅cm3/2⋅s−1. The unit of current is defined as 1 Fr/s. In the CGS-ESU system, charge q therefore has the dimension M1/2L3/2T−1. Other units in the CGS-ESU system include the statampere (1 statC/s) and statvolt (1 erg/statC). In CGS-ESU, all electric and magnetic quantities are dimensionally expressible in terms of length, mass, and time, and none has an independent dimension. Such a system of units of electromagnetism, in which the dimensions of all electric and magnetic quantities are expressible in terms of the mechanical dimensions of mass, length, and time, is traditionally called an 'absolute system'. Unit symbols All electromagnetic units in the CGS-ESU system that have not been given names of their own are named as the corresponding SI name with an attached prefix "stat" or with a separate abbreviation "esu", and similarly with the corresponding symbols. Electromagnetic units (EMU) In another variant of the CGS system, electromagnetic units (EMU), current is defined via the force existing between two thin, parallel, infinitely long wires carrying it, and charge is then defined as current multiplied by time. (This approach was eventually used to define the SI unit of ampere as well). 
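With no constant in Coulomb's law, the ESU force computation is just charge times charge over distance squared. A minimal sketch (the helper name is hypothetical, chosen for illustration):

```python
def coulomb_force_esu(q1_fr, q2_fr, r_cm):
    """Coulomb's law in CGS-ESU: F (in dynes) = q1 * q2 / r^2,
    with charges in franklins and distance in centimetres;
    no multiplying constant appears."""
    return q1_fr * q2_fr / r_cm ** 2

# By the definition of the franklin, two 1 Fr charges 1 cm apart
# repel with a force of exactly 1 dyne.
assert coulomb_force_esu(1.0, 1.0, 1.0) == 1.0
```

The absence of a 4πε₀-style factor here is exactly what makes charge dimensionally expressible in mechanical units in this system.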
The EMU unit of current, the biot (Bi), also known as the abampere or emu current, is therefore defined as follows: the biot is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one centimetre apart in vacuum, would produce between these conductors a force equal to two dynes per centimetre of length. Therefore, in electromagnetic CGS units, a biot is equal to a square root of a dyne: 1 Bi = 1 dyn1/2 = 1 g1/2⋅cm1/2⋅s−1. The unit of charge in CGS-EMU is 1 abC = 1 Bi⋅s. Dimensionally in the CGS-EMU system, charge q is therefore equivalent to M1/2L1/2. Hence, neither charge nor current is an independent physical quantity in the CGS-EMU system. EMU notation All electromagnetic units in the CGS-EMU system that do not have proper names are denoted by a corresponding SI name with an attached prefix "ab" or with a separate abbreviation "emu". Practical CGS units The practical CGS system is a hybrid system that uses the volt and the ampere as the units of voltage and current respectively. Doing this avoids the inconveniently large and small electrical units that arise in the esu and emu systems. This system was at one time widely used by electrical engineers because the volt and ampere had been adopted as international standard units by the International Electrical Congress of 1881. As well as the volt and ampere, the farad (capacitance), ohm (resistance), coulomb (electric charge), and henry (inductance) are consequently also used in the practical system and are the same as the SI units. The magnetic units are those of the emu system. The electrical units, other than the volt and ampere, are determined by the requirement that any equation involving only electrical and kinematical quantities that is valid in SI should also be valid in the system. For example, since electric field strength is voltage per unit length, its unit is the volt per centimetre, which is one hundred times the SI unit. The system is electrically rationalized and magnetically unrationalized; i.e., and , but the above formula for is invalid. A closely related system is the International System of Electric and Magnetic Units, which has a different unit of mass so that the formula for ′ is invalid. 
The unit of mass was chosen to remove powers of ten from contexts in which they were considered to be objectionable (e.g., and ). Inevitably, the powers of ten reappeared in other contexts, but the effect was to make the familiar joule and watt the units of work and power respectively. The ampere-turn system is constructed in a similar way by considering magnetomotive force and magnetic field strength to be electrical quantities and rationalizing the system by dividing the units of magnetic pole strength and magnetization by 4π. The units of the first two quantities are the ampere and the ampere per centimetre respectively. The unit of magnetic permeability is that of the emu system, and the magnetic constitutive equations are and . Magnetic reluctance is given a hybrid unit to ensure the validity of Ohm's law for magnetic circuits. In all the practical systems ε0 = 8.8542 × 10−14 A⋅s/(V⋅cm), μ0 = 1 V⋅s/(A⋅cm), and c2 = 1/(4π × 10−9 ε0μ0). Other variants There were at various points in time about half a dozen systems of electromagnetic units in use, most based on the CGS system. These include the Gaussian units and the Heaviside–Lorentz units. Electromagnetic units in various CGS systems In this table, c = 29979245800 is the numeric value of the speed of light in vacuum when expressed in units of centimetres per second. The symbol "≘" is used instead of "=" as a reminder that the units are corresponding but not equal. For example, according to the capacitance row of the table, if a capacitor has a capacitance of 1 F in SI, then it has a capacitance of (10−9 c2) cm in ESU; but it is incorrect to replace "1 F" with "(10−9 c2) cm" within an equation or formula. (This warning is a special aspect of electromagnetism units. By contrast it is always correct to replace, e.g., "1 m" with "100 cm" within an equation or formula.) 
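The correspondences in such a table can all be reproduced from the single numeric constant c. A sketch of two representative rows (variable names are mine; these are correspondences, not equalities, per the "≘" caveat above):

```python
c = 2.99792458e10  # numeric value of the speed of light in vacuum, in cm/s

# Charge: 1 C in SI corresponds to (10^-1 * c) Fr in ESU
one_coulomb_in_fr = 1e-1 * c       # about 2.998e9 statcoulombs

# Capacitance: 1 F in SI corresponds to (10^-9 * c^2) cm in ESU
one_farad_in_cm = 1e-9 * c ** 2    # about 8.988e11 cm
```

The recurring factors of c are the signature of the different ways ESU and SI fold the speed of light into their electromagnetic definitions.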
Physical constants in CGS units Advantages and disadvantages Lack of unique unit names leads to potential confusion: "15 emu" may mean either 15 abvolts, or 15 emu units of electric dipole moment, or 15 emu units of magnetic susceptibility, sometimes (but not always) per gram, or per mole. With its system of uniquely named units, the SI removes any confusion in usage: 1 ampere is a fixed value of a specified quantity, and so are 1 henry, 1 ohm, and 1 volt. In the CGS-Gaussian system, electric and magnetic fields have the same units, 4πε0 is replaced by 1, and the only dimensional constant appearing in the Maxwell equations is c, the speed of light. The Heaviside–Lorentz system has these properties as well (with ε0 equaling 1). In SI, and other rationalized systems (for example, Heaviside–Lorentz), the unit of current was chosen such that electromagnetic equations concerning charged spheres contain 4π, those concerning coils of current and straight wires contain 2π and those dealing with charged surfaces lack π entirely, which was the most convenient choice for applications in electrical engineering and relates directly to the geometric symmetry of the system being described by the equation. Specialized unit systems are used to simplify formulas further than either SI or CGS do, by eliminating constants through a convention of normalizing quantities with respect to some system of natural units. For example, in particle physics a system is in use where every quantity is expressed by only one unit of energy, the electronvolt, with lengths, times, and so on all converted into units of energy by inserting factors of speed of light c and the reduced Planck constant ħ. This unit system is convenient for calculations in particle physics, but is impractical in other contexts. 
See also Outline of metrology and measurement International System of Units International System of Electrical and Magnetic Units List of metric units List of scientific units named after people Metre–tonne–second system of units United States customary units Foot–pound–second system of units References and notes General literature Metrology Systems of units Metric system British Science Association
Centimetre–gram–second system of units
Mathematics
3,376
29,051,398
https://en.wikipedia.org/wiki/Ilomastat
Ilomastat (INN; codenamed GM6001, proprietary name Galardin®) is a broad-spectrum matrix metalloproteinase inhibitor. This chemotherapy agent is considered to have application in skincare products for its antiaging properties. Ilomastat is a member of the hydroxamic acid class of reversible metallopeptidase inhibitors. The anionic state of the hydroxamic acid group forms a bidentate complex with the active-site zinc. Examples of enzymes that ilomastat inhibits include rabbit MMP9, thermolysin, peptide deformylase, and anthrax lethal factor endopeptidase (LF) produced by the bacterium Bacillus anthracis. References Hydroxamic acids Matrix metalloproteinase inhibitors Isobutyl compounds
Ilomastat
Chemistry
176
12,224,388
https://en.wikipedia.org/wiki/BHIE
BHIE (Bidirectional Health Information Exchange) is a series of communications protocols developed by the US Department of Veterans Affairs (VA). It is used to exchange healthcare information between VA healthcare facilities nationwide and between VA facilities and Department of Defense healthcare facilities. BHIE is one of the most widely used healthcare data exchange systems in routine healthcare use, and is used to facilitate healthcare data exchange associated with a patient's medical record. Types of data managed Outpatient pharmacy data, allergy data, patient identification correlation, laboratory result data (including surgical pathology reports, cytology and microbiology data, chemistry and hematology data), lab orders data, radiology reports, problem lists, encounters, procedures, and clinical notes are examples of the types of healthcare data that are exchanged using BHIE. Integration with Electronic Health Record systems BHIE is currently integrated into the VistA EMR (electronic medical record) system used nationwide in VA hospitals. VA Hospitals have regional specialized capabilities, and veterans often travel to receive specialized care. Their VistA medical records are able to be transmitted in their entirety using this protocol. History In response to 1998 Presidential Review Directive 5, the Department of Defense (DoD), the VA, and the US Indian Health Service (IHS) collaborated to create the first developmental instances of a secure data-sharing system for electronic patient record data. This was initially called the Government Computer-based Patient Record system, or GCPR. The development of GCPR used UML modeling tools to define the various expected use cases where medical Care Providers in any Medical Treatment Facility (MTF) would need to have access to patient records or other data from within another participating agency. 
The UML modeling design was selected for its ability to clearly define the business logic that would be required for the GCPR Framework and for its ability to provide detailed tracking of the iterative development of the Framework software. The UML model for the Framework is still used for the ongoing maintenance and support of the BHIE system. Early development of the GCPR system proved that it could meet the requirements of a robust interagency data sharing system, but details of implementation, policy, and security management issues caused delays in full implementation of the GCPR system as it was originally designed. As the project progressed, the IHS withdrew from GCPR participation, and agreements between the DoD and the VA led to the GCPR Near-Term Solution (GCPR-NTS) being managed principally by the VA, with support from the DoD. The VA installed the preliminary systems for GCPR-NTS in the VA Silver Spring, MD OIFO, where extensive testing took place between the DoD EI/DS and the VA CPRS developers. These teams worked together to finalize the needed infrastructure and security systems for one-way data transport of DoD Separatee data to the VA. The GCPR-NTS was structurally designed to house a static repository of this DoD Separatee data for use by VA Care providers. This one-way transfer of data from DoD to the VA repository continues to be one of the principal functions of the BHIE system. Upon completion of initial testing, the VA deployed another GCPR-NTS system into the Austin Automation Center, in Austin Texas. This system became the "production" environment, which came to be known as the GCPR-Mid-Term Solution (GCPR-MTS). As the use of the system grew within the VA, it was later renamed to become the Federal Healthcare Information Exchange, or FHIE. 
The previously constructed system in the Silver Spring OIFO was re-tasked to become an iterative testing environment for proofing planned changes prior to deployment in the FHIE production system in Austin. In 2004, interest in the system for use within the DoD was renewed, and further development was done to add a true bi-directional connection component to the FHIE system. Initially called the Data Sharing Initiative (DSI), adapters were added to the FHIE system using the Web Services XML-based protocol standard. A similar Web Services adapter was developed for the DoD to connect to their CHCS-I legacy patient record systems. In this way, both systems hosted a peer Web Services client that is accessible to the other with proper authentication, allowing bi-directional, query-based data exchanges between the disparate systems. Direct cross-Domain write capability and fully computable data storage and transfers are not supported at this time. With the addition of the DSI components to the FHIE system, the entire project was renamed the Federal Bi-Directional Healthcare Information Exchange, or BHIE. All references to FHIE (other than historical) are generally being phased out. BHIE represents the previous Framework System as was deployed for the VA, and all additional capabilities added to support near-real-time data exchange between the Framework and participating DoD Medical Treatment Facilities (MTFs). In short: FHIE + DSI = BHIE. The current BHIE Project participants are exclusively the DoD and the VA, though any number of additional domains will probably be added over time with proper development of adapters and policies. This project has the support of the VA Under-Secretary for Health, and the Acting Assistant Secretary of Defense/Health Affairs of DoD. There is also congressional interest in a successful outcome to this work. 
Since 2Q-FY05, the DoD is supporting the development of a separate DoD BHIE Domain, including dedicated hardware and infrastructure to support this new system within the DISA network. The details of the DoD system are still in development, as are the details of the expected interoperability with the existing VA BHIE system. Additional data types were added to the system during the 2005-2006 operational periods, including the provision of Discharge Summaries from selected DoD MTFs, and the inclusion of Pre-Post deployment form data availability. In March 2006, the usage of BHIE across the country was outlined before the House Committee on Veterans Affairs. In 2007 the DoD's AHLTA interface was connected to BHIE to allow AHLTA clinicians to see VA data and VA clinicians to see DoD data stored within the CDR. Additionally in 2007 the Theater Medical Data Store (TMDS) was connected to BHIE to allow VA and DoD clinicians to access medical records from combat theaters. In the 2007–2009 years, a parallel “two-pass” system for exchanging imaging metadata was added to the scope of BHIE. A special-purpose server, the BHIE Imaging Adapter (BIA) was added to the other BHIE systems. This BIA server takes the first pass of an Image Study query, obtains metadata about the images for a specific patient from the BHIE system, then presents a list of available images to the end-user, who can then select the images of interest from the list. The BIA then has variable functions as an intelligent proxy for retrieving and delivering the selected images. As of 2011, other additional functions related to images are being added to both the BIA and BHIE systems. From 2008 through 2011, the central focus of BHIE was to upgrade the system hardware and migrate all of the production functions onto the new hardware. 
The upgrades began in the spring of 2009, when the initial sets of hardware were delivered and development began to create a set of identical-hardware environments on which the BHIE systems' migration could occur. The migration to the new production BHIE location in Philadelphia, Pennsylvania, was accomplished in January 2011, and enhancements to all of the systems continue as an ongoing process. The Austin "Legacy" BHIE system remained in production operation until 2011, when the replacement BHIE hardware installed in Philadelphia, Pennsylvania, assumed all of those functions. The Austin systems went dark and were retired from service in April 2011. References External links 1998 Presidential Review Directive 5, Health care software United States Department of Veterans Affairs Electronic health records
BHIE
Technology
1,650
16,246,246
https://en.wikipedia.org/wiki/Digital%20perm
A digital perm is a perm that uses hot rods with the temperature regulated by a machine with a digital display, hence the name. The process is otherwise similar to that of a traditional perm. The name "digital perm" is trademarked by a Japanese company, Paimore Co. Hairstylists usually call it a "hot perm." A normal perm basically requires only the perm solution. A digital perm requires a (different) solution plus heat. This type of perm is popular in several countries, including South Korea and Japan. Difference between a normal perm and a digital perm The biggest difference between other perms and a digital perm is the shape and the texture of the wave created by the digital process. A normal perm, or "cold perm," makes the wave most prominent when the hair is wet, and loose when it is dry. The hair tends to look moist and fall in locks. A digital perm makes the wave most prominent when the hair is dry, and loose when it is wet. Therefore, the dry and curly look of the curl iron or the hot curler can be created. Digital perms thermally recondition the hair, though the chemicals and processing are similar to a straight perm. The hair often feels softer, smoother, and shinier after a digital perm. Cost and time of a digital perm The price depends on the hair salon, but a digital perm is usually a little more expensive than a cold perm. Also, some hair salons have systems where they can use the machine on only one customer at a time, in which case the price could be a lot higher. The time it takes to perm the hair also depends on the hair salon and the hair type, but it usually takes longer than a cold perm. In some cases, it takes about the same time, but different salons use different solutions and machines, so the time varies. Styling A cold perm makes the hair most wavy when it is wet, so adding styling gel/foam when it is wet and air-drying it makes the wave most prominent. 
A digital perm makes the hair wavy when it is dry, so it can be dried with a blow dryer, and a hand can be used to make the curl. Styling is very easy, and if the curl is set in the morning, at the end of the day when the wave loosens, the curls can be revived by curling around a finger. See also Haircut List of hairstyles References Further reading Liu, Christine, Le Gala Hair Group: Introducing the digital perm, Boston's Weekly Dig, Wednesday, January 31, 2007, Issue 9.5. Pastor, Pam, Hi-tech hair, Philippine Daily Inquirer Hairdressing Hairstyles 2000s neologisms 2000s in technology Temperature control Japanese inventions
Digital perm
Technology
585
34,265,825
https://en.wikipedia.org/wiki/Geroch%27s%20splitting%20theorem
In the theory of causal structure on Lorentzian manifolds, Geroch's theorem or Geroch's splitting theorem (first proved by Robert Geroch) gives a topological characterization of globally hyperbolic spacetimes. The theorem A Cauchy surface can possess corners, and thereby need not be a differentiable submanifold of the spacetime; it is however always continuous (and even Lipschitz continuous). By using the flow of a vector field chosen to be complete, smooth, and timelike, it is elementary to prove that if a Cauchy surface S is smooth then the spacetime is smoothly diffeomorphic to the product ℝ × S, and that any two such Cauchy surfaces are diffeomorphic. Robert Geroch proved in 1970 that every globally hyperbolic spacetime has a Cauchy surface S, and that the homeomorphism (as opposed to diffeomorphism) to ℝ × S can be selected so that every surface of the form {t} × S is a Cauchy surface and each curve of the form ℝ × {p} is a continuous timelike curve. Various foundational textbooks, such as George Ellis and Stephen Hawking's The Large Scale Structure of Space-Time and Robert Wald's General Relativity, asserted that smoothing techniques allow Geroch's result to be strengthened from a topological to a smooth context. However, this was not satisfactorily proved until work of Antonio Bernal and Miguel Sánchez in 2003. As a result of their work, it is known that every globally hyperbolic spacetime has a Cauchy surface which is smoothly embedded and spacelike. As they proved in 2005, the diffeomorphism to ℝ × S can be selected so that each surface of the form {t} × S is a spacelike smooth Cauchy surface and each curve of the form ℝ × {p} is a smooth timelike curve orthogonal to each surface {t} × S. References Sources Theorems in general relativity Lorentzian manifolds Theorems in mathematical physics
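Stated compactly, the smooth splitting established by Bernal and Sánchez takes the following form (the symbols t, β, and ḡ_t are a common convention, not fixed by this article):

```latex
% Smooth orthogonal splitting of a globally hyperbolic spacetime (M, g):
M \cong \mathbb{R} \times S, \qquad
g = -\beta \, \mathrm{d}t^{2} + \bar{g}_{t},
```

where t is a smooth temporal function whose level sets {t} × S are smooth spacelike Cauchy surfaces, β > 0 is a smooth function, and ḡ_t is a Riemannian metric on S varying smoothly with t; the orthogonality of the timelike curves ℝ × {p} to the Cauchy slices is visible in the absence of cross terms between dt and the metric on S.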
Geroch's splitting theorem
Physics,Mathematics
396
5,177,345
https://en.wikipedia.org/wiki/Tsubakimoto%20Chain
() is a Japanese manufacturer of power transmission and roller chain products. It was founded in Osaka in 1917 as a bicycle chain manufacturer. Later it became the first roller chain manufacturer in Japan approved by Japanese Industrial Standards. Tsubakimoto Chain has the world's largest market share for steel chains for general industrial applications and enjoys the world's top market share for timing drive systems for automobiles. The company is headquartered in Osaka, with its main manufacturing base in Kyotanabe, Kyoto. History Tsubakimoto Chain was established in 1917 by Setsuzo Tsubakimoto in Kita-ku, Osaka as a private enterprise known as Tsubakimoto Shoten manufacturing bicycle chains. They soon moved to roller chain and conveyor equipment production, ceasing bicycle chain manufacture in 1928. The following year, they registered as Tsubakimoto Chain Manufacturing Company. With the completion of their Tsurumi Plant in Osaka in 1940, they launched as a joint-stock company with capital of three million yen in 1941. Setsuzo Tsubakimoto was appointed the company's first president. They changed their name to Tsubakimoto Chain Co. in 1970. In 2000, Tsubaki completed work on its new, larger Kyotanabe Plant to meet its increasing production levels. With nearly 100,000 m2 of building floor space, the plant is the world's largest chain manufacturing facility. Products Roller chain and sprockets, toothed belts and pulleys, hose and cable carrier systems, shaft coupling/locking, reducer/variable speed drives, motion control/clutch, overload protectors, linear actuators, automotive timing belt systems, conveyance, sorting, and storage systems, bulk handling systems, metalworking chips handling and coolant processing systems. 
Profile Corporate name: Headquarters: Nakanoshima Mitsui Building, 6F, 3-3-3, Nakanoshima, Kita-ku, Osaka, 530-0005 Japan Kyotanabe Plant: 1-1-3, Kannabidai, Kyotanabe, Kyoto 610-0380 Japan Saitama Plant: 20, Shinko, Hanno, Saitama 357-8510 Japan Kyoto Plant: 1-1, Kotari -Kuresumi, Nagaokakyo, Kyoto 617-0833 Japan Hyogo Plant: 1140, Asazuma-cho, Kasai, Hyogo 679-0181 Japan Principal Group Companies Japan Tsubakimoto Custom Chain Co. Tsubakimoto Sprocket Co. Tsubaki Yamakyu Chain Co. Tsubakimoto Iron Casting Co., Ltd. Tsubakimoto Machinery Co. Tsubakimoto Bulk Systems Corp. Tsubakimoto Mayfran Inc. Tsubaki Support Center Co. Overseas Americas U.S. Tsubaki Holdings, Inc. (headquarters) U.S. Tsubaki Power Transmission, LLC (manufacturing base) U.S. Tsubaki Automotive LLC (manufacturing base) U.S. Tsubaki Industrial LLC (manufacturing base) Tsubaki Kabelschlepp America, Inc. Tsubaki Brasil Equipamentos Industriais Ltda. Central Conveyor Company, LLC Central Process Engineering, LLC Electrical Insights, LLC KCI, Incorporated Tsubaki of Canada Limited (headquarters and manufacturing base) Mayfran International. Incorporated Conergics International LLC Press Room Techniques Co. Tsubakimoto Automotive Mexico S.A. de C.V. (manufacturing base) Europe Tsubakimoto Europe B.V. Tsubakimoto U.K. Ltd. Tsubaki Deutschland GmbH Tsubaki Automotive Czech Republic s.r.o. Tsubaki Ibérica Power Transmission S.L. Tsubaki KabelSchlepp GmbH KabelSchlepp GmbH - Hunsborn KabelSchlepp Italia S.A.R.L. Metool Products Limited KabelSchlepp France S.A.R.L. Kabelschlepp Systemtechnik spol. s r.o. OOO Tsubaki KabelSchlepp Schmidberger GmbH Mayfran U.K. Limited Mayfran GmbH Mayfran Limburg B.V. Mayfran International B.V. Mayfran France S.A.R.L. Mayfran CZ s.r.o. Indian Ocean Rim Tsubakimoto Singapore Pte. Ltd. PT Tsubaki Indonesia Manufacturing PT Tsubaki Indonesia Trading Tsubaki Power Transmission (Malaysia) Sdn. Bhd. Tsubakimoto (Thailand) Co., Ltd. 
Tsubaki India Power Transmission Private Limited Tsubakimoto Vietnam Co., Ltd. Tsubakimoto Philippines Corporation Tsubaki Australia Pty. Limited Tsubakimoto Automotive (Thailand) Co., Ltd. Tsubaki Motion Control (Thailand) Co., Ltd. Kabelschlepp India Private Limited Tsubaki Conveyor Systems India Private Limited China Tsubakimoto Chain (Shanghai) Co., Ltd. Tsubaki Motion Control (Shanghai) Co., Ltd. Tsubakimoto Automotive (Shanghai) Co., Ltd. Tsubaki Everbest Gear (Tianjin) Co., Ltd. Tsubakimoto Chain (Tianjin) Co., Ltd. Tsubakimoto Bulk Systems (Shanghai) Corp. Kabelschlepp China Co., Ltd. Tianjin Tsubakimoto Conveyor Systems Co., Ltd. Tsubakimoto Mayfran Conveyor (Shanghai) Co., Ltd. Tsubaki CAPT Power Transmission (Shijiazhuang) Co., Ltd. Korea and Taiwan Taiwan Tsubakimoto Co. (manufacturing base) Taiwan Tsubakimoto Trading Co., Ltd. Tsubakimoto Automotive Korea Co., Ltd. Tsubakimoto Korea Co., Ltd. News U.S. Tsubaki Power Transmission LLC Company Profile (a subsidiary of Tsubakimoto Chain Co.) 2022 Roller chain upgrade reduces costs for pizza making operation 2019 U.S. Tsubaki Holdings Inc.’s Conveyor Operations Division and U.S. Automotive LLC open a new manufacturing facility in Portland, TN 2018 Extracting the benefits of customised chain solutions Tsubakimoto Chain Installs PV System at Its New Manufacturing Plant New State Capital Sells Central Conveyor to U.S. Tsubaki Holdings, Inc. 2016 Maintenance-Free Chain Helps Provide Long Term Flood Control 2015 GM Announces 2014 Supplier of the Year Winners 2014 For When an Ordinary Chain Just Won’t Do Mahindra Conveyor Systems group firm forms joint venture with Japanese Tsubaki Patent Issued for Conveyor Chain Tsubakimoto Chain Co.: Patent Issued for Silent Chain Having Deformable Guide Plates Toyota Supplier Sees China Sales Doubling on Orders From VW, GM 2013 Toyota supplier considers China capacity boost on VW, GM orders 2012 U.S. 
Tsubaki Launches New Interactive Website and Centralized Product Platform to Better Serve Engineers (Packing Digest) Cam clutches meet safety requirements (Mining Weekly) Cable carriers with multiple band design master high additional loads (Materials Handling World Magazine) Tsubakimoto Expects Record Auto-Parts Sales on Carmakers Rebound (Bloomberg Businessweek) 2011 1st foreign plant completed in BJFEZ (Korea Times) Toyota Announces Supplier of the Year Awards (Reliable Plant) How do we break the cycle that swings from lowest cost to highest quality? (Industrial Technology) Getting more life from roller chain (Motion System Design) FlexLink and Tsubakimoto Chain Co. signed JV (Flexlink) Tsubaki is named IADA supplier of the year in its first term of partnership (Process and Control Today) 2010 Tsubaki buys KabelSchlepp to lead its cable-carrier division (Drives & Controls) Taking Aim at 21st Century "Korean Special Procurement Demand" (Nikkei Business) 2009 Electric Cars Push Japan Engine Parts Makers to Crisis Mode (Bloomberg) Energy-saving conveyor chains from Tsubaki (European Design Engineer Magazine) Japan's Tsubakimoto Chain to build South Korean Plant (Nikkei) Tsubakimoto Chain Co. ranks 24th overall in the latest Patent Scorecard (Wall Street Journal - Market Data Center) Roller chain drives offer longer life, even in harsh environments (engineerlive) 2008 Chains offer better grip for packaging (engineerlive) Tsubakimoto Chain 9-mth group results (Reuters) 2007 Specialty chains meet underground conveying demands (Mining Weekly) Lube-free chain another link in product portfolio (Engineering News) 2006 Motion Industries Recognizes 26 Suppliers with Operational Excellence Supplier Partnership Awards Clarion, NGK, Tsubakimoto, Tokyo Steel: Japanese Equity Preview (Bloomberg article tracking the company's stock) 220 summer jobs lined up for youths (Article about U.S. 
Tsubaki involvement in a local jobs program) Tsubaki Corrosion Resistant DP Series Chain Lasts Twice as Long as Standard Chain and is Still Going Strong at Scottish Water (Spotlight article in Process and Control Today) 2005 Tsubaki Low Noise Chain Gives the 'Silent Treatment' to Industrial Laundry Machines (Spotlight article in Process and Control Today) Lady Godiva Rides In Coventry City Centre Again Thanks to Tsubaki's Weather Proof PC Chain (Spotlight article in Process and Control Today) See also List of companies of Japan List of automobile manufacturers of Japan References External links https://tsubakimoto.com/ - Corporate web site of Tsubakimoto Chain Co. in English https://tsubakimoto.jp/ - Corporate web site of Tsubakimoto Chain Co. in Japanese http://tsubaki.cn/ - Corporate web site of Tsubakimoto Chain Co. in Chinese Companies listed on the Tokyo Stock Exchange Manufacturing companies of Japan Manufacturing companies based in Osaka Engineering companies of Japan Automotive companies of Japan Japanese brands Manufacturing companies established in 1917 Industrial machine manufacturers Mining equipment companies Manufacturers of industrial automation Electrical equipment manufacturers Machine manufacturers Wire and cable manufacturers Japanese companies established in 1917
Tsubakimoto Chain
Engineering
2,066
18,019,595
https://en.wikipedia.org/wiki/Rs1805054
In genetics, Rs1805054, also called C267T, is a name used for a specific genetic variation, a single nucleotide polymorphism (SNP), in the HTR6 gene. It is one of the few investigated polymorphisms of its gene. C267T is a synonymous polymorphism. A 2008 meta-analysis of studies of the polymorphism and Alzheimer's disease indicated that there is probably no association between the two, though individual studies report such an association, e.g., a Chinese study found an association with late-onset Alzheimer's disease. Another reported association in neuropsychiatric disorders is with treatment response in depression. C267T has also been examined in relation to personality traits, with a Korean study finding some evidence for an association with the trait self-transcendence. A Japanese study reported no association with personality traits using the NEO PI-R personality inventory. References SNPs on chromosome 1
Rs1805054
Biology
200
6,049,638
https://en.wikipedia.org/wiki/Mezlocillin
Mezlocillin is a broad-spectrum penicillin antibiotic. It is active against both Gram-negative and some Gram-positive bacteria. Unlike most other extended-spectrum penicillins, it is excreted by the liver; it is therefore useful for biliary tract infections, such as ascending cholangitis. Mechanism of action Like all other beta-lactam antibiotics, mezlocillin inhibits the third and last stage of bacterial cell wall synthesis by binding to penicillin-binding proteins. This ultimately leads to cell lysis. Susceptible organisms Gram-negative Bacteroides spp., including B. fragilis Enterobacter spp. Escherichia coli Haemophilus influenzae Klebsiella species Morganella morganii Neisseria gonorrhoeae Proteus mirabilis Proteus vulgaris Providencia rettgeri Pseudomonas spp., including P. aeruginosa Serratia marcescens Gram-positive Enterococcus faecalis Peptococcus spp. Peptostreptococcus spp. Synthesis Mezlocillin can be made in a variety of ways, including reaction of ampicillin with chlorocarbamate 1 in the presence of triethylamine. Chlorocarbamate 1 itself is made from ethylenediamine by reaction with phosgene to form the cyclic urea, followed by monoamide formation with methanesulfonyl chloride and then reaction of the other nitrogen atom with phosgene and trimethylsilyl chloride. The closely related analogue azlocillin is made in essentially the same manner as mezlocillin, but with omission of the methylation step. References Further reading External links Duke Penicillins Enantiopure drugs Imidazolidinones
Mezlocillin
Chemistry
378
2,773,028
https://en.wikipedia.org/wiki/Very%20Large%20Hadron%20Collider
The Very Large Hadron Collider (VLHC) was a proposed future hadron collider planned to be located at Fermilab. The VLHC was planned to be located in a ring, using the Tevatron as an injector. The VLHC would run in two stages: initially, the Stage-1 VLHC would have a collision energy of 40 TeV and a luminosity of at least 1⋅10³⁴ cm⁻²⋅s⁻¹ (matching or surpassing the LHC design luminosity; however, the LHC has now surpassed this). After running at Stage-1 for a period of time, the VLHC was planned to run at Stage-2, with the dipole magnets used for bending the beam being replaced by magnets that can reach higher peak magnetic fields, allowing a collision energy of up to 175 TeV, and other improvements, including raising the luminosity to at least 2⋅10³⁴ cm⁻²⋅s⁻¹. Given that such a performance increase necessitates a correspondingly large increase in size, cost, and power requirements, a significant amount of international collaboration over a period of decades would be required to construct such a collider. See also Particle physics Superconducting Super Collider - planned ring circumference of . Canceled after of tunnel had been bored and about billion spent. High Luminosity Large Hadron Collider Future Circular Collider References External links VLHC Design Materials Particle physics facilities Proposed particle accelerators Fermilab
Very Large Hadron Collider
Physics
320
733,241
https://en.wikipedia.org/wiki/Polyaniline
Polyaniline (PANI) is a conducting polymer and organic semiconductor of the semi-flexible rod polymer family. The compound has been of interest since the 1980s because of its electrical conductivity and mechanical properties. Polyaniline is one of the most studied conducting polymers. Historical development Polyaniline was discovered in the 19th century by F. Ferdinand Runge (1794–1867), Carl Fritzsche (1808–1871), John Lightfoot (1831–1872), and Henry Letheby (1816–1876). Lightfoot studied the oxidation of aniline, which had been isolated only 20 years previously. He developed the first commercially successful route to the dye called Aniline black. The first definitive report of polyaniline did not occur until 1862, which included an electrochemical method for the determination of small quantities of aniline. From the early 20th century on, occasional reports about the structure of PANI were published. Polymerized from the inexpensive aniline, polyaniline can be found in one of three idealized oxidation states: leucoemeraldine – white/clear & colorless (C6H4NH)n emeraldine – green for the emeraldine salt, blue for the emeraldine base ([C6H4NH]2[C6H4N]2)n (per)nigraniline – blue/violet (C6H4N)n In the figure, x equals half the degree of polymerization (DP). Leucoemeraldine with n = 1, m = 0 is the fully reduced state. Pernigraniline is the fully oxidized state (n = 0, m = 1) with imine links instead of amine links. Studies have shown that most forms of polyaniline are one of the three states or physical mixtures of these components. The emeraldine (n = m = 0.5) form of polyaniline, often referred to as emeraldine base (EB), is neutral, if doped (protonated) it is called emeraldine salt (ES), with the imine nitrogens protonated by an acid. Protonation helps to delocalize the otherwise trapped diiminoquinone-diaminobenzene state. 
Emeraldine base is regarded as the most useful form of polyaniline due to its high stability at room temperature and the fact that, upon doping with acid, the resulting emeraldine salt form of polyaniline is highly electrically conducting. Leucoemeraldine and pernigraniline are poor conductors, even when doped with an acid. The colour change associated with polyaniline in different oxidation states can be used in sensors and electrochromic devices. Polyaniline sensors typically exploit changes in electrical conductivity between the different oxidation states or doping levels. Treatment of emeraldine with acids increases the electrical conductivity by up to ten orders of magnitude. Undoped polyaniline has a conductivity of S/m, whereas conductivities of S/m can be achieved by doping to 4% HBr. The same material can be prepared by oxidation of leucoemeraldine. Synthesis Although the synthetic methods to produce polyaniline are quite simple, the mechanism of polymerization is probably complex. The formation of leucoemeraldine can be described as follows, where [O] is a generic oxidant: n C6H5NH2 + [O] → [C6H4NH]n + H2O A common oxidant is ammonium persulfate in 1 M hydrochloric acid (other acids can be used). The polymer precipitates as an unstable dispersion with micrometer-scale particulates. (Per)nigraniline is prepared by oxidation of the emeraldine base with a peracid: {[C6H4NH]2[C6H4N]2}n + RCO3H → [C6H4N]n + H2O + RCO2H Processing The synthesis of polyaniline nanostructures is facile. Using surfactant dopants, the polyaniline can be made dispersible and hence useful for practical applications. Bulk synthesis of polyaniline nanofibers has been researched extensively. A multi-stage model for the formation of emeraldine base is proposed. In the first stage of the reaction the pernigraniline PS salt oxidation state is formed. In the second stage pernigraniline is reduced to the emeraldine salt as aniline monomer gets oxidized to the radical cation. 
In the third stage this radical cation couples with ES salt. This process can be followed by light scattering analysis, which allows the determination of the absolute molar mass. According to one study, in the first step a DP of 265 is reached, with the DP of the final polymer at 319. Approximately 19% of the final polymer is made up of the aniline radical cation which is formed during the reaction. Polyaniline is typically produced in the form of long-chain polymer aggregates, surfactant (or dopant) stabilized nanoparticle dispersions, or stabilizer-free nanofiber dispersions depending on the supplier and synthetic route. Surfactant or dopant stabilized polyaniline dispersions have been available for commercial sale since the late 1990s. Potential applications The major applications are printed circuit board manufacturing: final finishes, used in millions of m² every year, antistatic and ESD coatings, and corrosion protection. Polyaniline and its derivatives are also used as the precursor for the production of N-doped carbon materials through high-temperature heat treatment. Printed emeraldine polyaniline-based sensors have also gained much attention for widespread applications where devices are typically fabricated via screen, inkjet or aerosol jet printing. References Organic polymers Polyamines Molecular electronics Organic semiconductors Polyelectrolytes Conductive polymers
Polyaniline
Chemistry,Materials_science
1,230
7,690,175
https://en.wikipedia.org/wiki/RNA-dependent%20RNA%20polymerase
RNA-dependent RNA polymerase (RdRp) or RNA replicase is an enzyme that catalyzes the replication of RNA from an RNA template. Specifically, it catalyzes synthesis of the RNA strand complementary to a given RNA template. This is in contrast to typical DNA-dependent RNA polymerases, which all organisms use to catalyze the transcription of RNA from a DNA template. RdRp is an essential protein encoded in the genomes of most RNA-containing viruses that lack a DNA stage, including SARS-CoV-2. Some eukaryotes also contain RdRps, which are involved in RNA interference and differ structurally from viral RdRps. History Viral RdRps were discovered in the early 1960s from studies on mengovirus and polio virus when it was observed that these viruses were not sensitive to actinomycin D, a drug that inhibits cellular DNA-directed RNA synthesis. This lack of sensitivity suggested the action of a virus-specific enzyme that could copy RNA from an RNA template. Distribution RdRps are highly conserved in viruses and are related to telomerase, though the reason for this was an ongoing question as of 2009. The similarity led to speculation that viral RdRps are ancestral to human telomerase. The most famous example of RdRp is in the polio virus. The viral genome is composed of RNA, which enters the cell through receptor-mediated endocytosis. From there, the RNA acts as a template for complementary RNA synthesis. The complementary strand acts as a template for the production of new viral genomes that are packaged and released from the cell ready to infect more host cells. The advantage of this method of replication is that no DNA stage complicates replication. The disadvantage is that no 'back-up' DNA copy is available. Many RdRps associate tightly with membranes making them difficult to study. The best-known RdRps are polioviral 3Dpol, vesicular stomatitis virus L, and hepatitis C virus NS5B protein. 
Many eukaryotes have RdRps that are involved in RNA interference: these amplify microRNAs and small temporal RNAs and produce double-stranded RNA using small interfering RNAs as primers. These RdRps are used in the defense mechanisms and can be appropriated by RNA viruses. Their evolutionary history predates the divergence of major eukaryotic groups. Replication RdRp differs from DNA-dependent RNA polymerase as it catalyzes RNA synthesis of strands complementary to a given RNA template. The RNA replication process is a four-step mechanism: Nucleoside triphosphate (NTP) binding – initially, the RdRp presents with a vacant active site in which an NTP binds, complementary to the corresponding nucleotide on the template strand. Correct NTP binding causes the RdRp to undergo a conformational change. Active site closure – the conformational change, initiated by the correct NTP binding, results in the restriction of active site access and produces a catalytically competent state. Phosphodiester bond formation – two Mg2+ ions are present in the catalytically active state and arrange themselves around the newly synthesized RNA chain such that the substrate NTP undergoes a phosphoryl transfer and forms a phosphodiester bond with the new chain. Without the use of these Mg2+ ions, the active site is no longer catalytically stable and the RdRp complex changes to an open conformation. Translocation – once the active site is open, the RNA template strand moves by one position through the RdRp protein complex and continues chain elongation by binding a new NTP, unless otherwise specified by the template. RNA synthesis can be performed by a primer-independent (de novo) or a primer-dependent mechanism that utilizes a viral protein genome-linked (VPg) primer. The de novo initiation consists in the addition of an NTP to the 3'-OH of the first initiating NTP. 
During the following elongation phase, this nucleotidyl transfer reaction is repeated with subsequent NTPs to generate the complementary RNA product. Termination of the nascent RNA chain produced by RdRp is not completely understood; however, RdRp termination is sequence-independent. One major drawback of RNA-dependent RNA polymerase replication is the transcription error rate: RdRps lack fidelity, with an error rate on the order of one mistake per 10⁴ nucleotides, which is thought to be a direct result of inadequate proofreading. This variation rate is favored in viral genomes as it allows for the pathogen to overcome host defenses trying to avoid infection, allowing for evolutionary growth. Structure Viral/prokaryotic RdRp, along with many single-subunit DdRp, employ a fold whose organization has been linked to the shape of a right hand with three subdomains termed fingers, palm, and thumb. Only the palm subdomain, composed of a four-stranded antiparallel beta sheet with two alpha helices, is well conserved. In RdRp, the palm subdomain comprises three well-conserved motifs (A, B, and C). Motif A (D-x(4,5)-D) and motif C (GDD) are spatially juxtaposed; the aspartic acid residues of these motifs are implicated in the binding of Mg2+ and/or Mn2+. The asparagine residue of motif B is involved in selection of ribonucleoside triphosphates over dNTPs and, thus, determines whether RNA rather than DNA is synthesized. The domain organization and the 3D structure of the catalytic centre of a wide range of RdRps, even those with a low overall sequence homology, are conserved. The catalytic center is formed by several motifs containing conserved amino acid residues. Eukaryotic RNA interference requires a cellular RdRp (c RdRp). Unlike the "hand" polymerases, they resemble simplified multi-subunit DdRPs, specifically in the catalytic β/β' subunits, in that they use two sets of double-psi β-barrels in the active site. 
QDE1 () in Neurospora crassa, which has both barrels in the same chain, is an example of such a c RdRp enzyme. Bacteriophage homologs of c RdRp, including the similarly single-chain DdRp yonO (), appear to be closer to c RdRps than DdRPs are. Viruses Four superfamilies of viruses cover all RNA-containing viruses with no DNA stage: Viruses containing positive-strand RNA or double-strand RNA, except retroviruses and Birnaviridae All positive-strand RNA eukaryotic viruses with no DNA stage, such as Coronaviridae All RNA-containing bacteriophages; the two families of RNA-containing bacteriophages are Fiersviridae (positive ssRNA phages) and Cystoviridae (dsRNA phages) dsRNA virus family Reoviridae, Totiviridae, Hypoviridae, Partitiviridae Mononegavirales (negative-strand RNA viruses with non-segmented genomes; ) Negative-strand RNA viruses with segmented genomes (), such as orthomyxoviruses and bunyaviruses dsRNA virus family Birnaviridae () Flaviviruses produce a polyprotein from the ssRNA genome. The polyprotein is cleaved to a number of products, one of which is NS5, an RdRp. It possesses short regions and motifs homologous to other RdRps. RNA replicase found in positive-strand ssRNA viruses are related to each other, forming three large superfamilies. Birnaviral RNA replicase is unique in that it lacks motif C (GDD) in the palm. Mononegaviral RdRp (PDB 5A22) has been automatically classified as similar to (+)−ssRNA RdRps, specifically one from Pestivirus and one from Leviviridae. Bunyaviral RdRp monomer (PDB 5AMQ) resembles the heterotrimeric complex of Orthomyxoviral (Influenza; PDB 4WSB) RdRp. Since it is a protein universal to RNA-containing viruses, RdRp is a useful marker for understanding their evolution. Recombination When replicating its (+)ssRNA genome, the poliovirus RdRp is able to carry out recombination. Recombination appears to occur by a copy choice mechanism in which the RdRp switches (+)ssRNA templates during negative strand synthesis. 
Recombination frequency is determined in part by the fidelity of RdRp replication. RdRp variants with high replication fidelity show reduced recombination, and low-fidelity RdRps exhibit increased recombination. Recombination by RdRp strand switching occurs frequently during replication in the (+)ssRNA plant carmoviruses and tombusviruses. Intragenic complementation Sendai virus (family Paramyxoviridae) has a linear, single-stranded, negative-sense, nonsegmented RNA genome. The viral RdRp consists of two virus-encoded subunits, a smaller one P and a larger one L. Testing different inactive RdRp mutants with defects throughout the length of the L subunit in pairwise combinations, restoration of viral RNA synthesis was observed in some combinations. This positive L–L interaction is referred to as intragenic complementation and indicates that the L protein is an oligomer in the viral RNA polymerase complex. Drug therapies RdRps can be used as drug targets for viral pathogens as their function is not necessary for eukaryotic survival. By inhibiting RdRp function, new RNAs cannot be replicated from an RNA template strand; however, DNA-dependent RNA polymerase remains functional. Some antiviral drugs against Hepatitis C and COVID-19 specifically target RdRp. These include Sofosbuvir and Ribavirin against Hepatitis C, and remdesivir, an FDA-approved drug against COVID-19. GS-441524 triphosphate is a substrate for RdRp, but not for mammalian polymerases. It results in premature chain termination and inhibition of viral replication. GS-441524 triphosphate is the biologically active form of remdesivir. Remdesivir is classified as a nucleotide analog that inhibits RdRp function by covalently binding to the nascent RNA and interrupting its synthesis through early or delayed chain termination, or by preventing further elongation of the RNA polynucleotide. This early termination leads to nonfunctional RNA that gets degraded through normal cellular processes. 
RNA interference The use of RdRp plays a major role in RNA interference in eukaryotes, a process used to silence gene expression via small interfering RNAs (siRNAs) binding to mRNAs, rendering them inactive. Eukaryotic RdRp becomes active in the presence of dsRNA, and is less widely distributed than other RNAi components, as it has been lost in some animals, though it is still found in C. elegans, P. tetraurelia, and plants. This presence of dsRNA triggers the activation of RdRp and RNAi processes by priming the initiation of RNA transcription through the introduction of siRNAs. In C. elegans, siRNAs are integrated into the RNA-induced silencing complex, RISC, which works alongside mRNAs targeted for interference to recruit more RdRps to synthesize more secondary siRNAs and repress gene expression. See also Spiegelman's Monster NS5B inhibitor Notes References External links Gene expression RNA EC 2.7.7
RNA-dependent RNA polymerase
Chemistry,Biology
2,429
197,129
https://en.wikipedia.org/wiki/Nanocrystalline%20silicon
Nanocrystalline silicon (nc-Si), sometimes also known as microcrystalline silicon (μc-Si), is a form of porous silicon. It is an allotropic form of silicon with a paracrystalline structure; it is similar to amorphous silicon (a-Si) in that it has an amorphous phase. Where they differ, however, is that nc-Si has small grains of crystalline silicon within the amorphous phase. This is in contrast to polycrystalline silicon (poly-Si), which consists solely of crystalline silicon grains separated by grain boundaries. The difference comes solely from the grain size of the crystalline grains. Most materials with grains in the micrometre range are actually fine-grained polysilicon, so nanocrystalline silicon is a better term. The term nanocrystalline silicon refers to a range of materials around the transition region from the amorphous to the microcrystalline phase in the silicon thin film. The crystalline volume fraction (as measured from Raman spectroscopy) is another criterion to describe the materials in this transition zone. nc-Si has many useful advantages over a-Si, one being that if grown properly it can have a higher electron mobility, due to the presence of the silicon crystallites. It also shows increased absorption in the red and infrared wavelengths, which makes it an important material for use in a-Si solar cells. One of the most important advantages of nanocrystalline silicon, however, is that it has increased stability over a-Si, one of the reasons being its lower hydrogen concentration. Although it currently cannot attain the mobility that poly-Si can, it has the advantage over poly-Si that it is easier to fabricate, as it can be deposited using conventional low-temperature a-Si deposition techniques, such as PECVD, as opposed to laser annealing or high-temperature CVD processes in the case of poly-Si. Uses The main application of this novel material is in the field of silicon thin-film solar cells. 
As nc-Si has about the same bandgap as crystalline silicon, which is ~1.12 eV, it can be combined in thin layers with a-Si, creating a layered, multi-junction cell called a tandem cell. The top cell in a-Si absorbs the visible light and leaves the infrared part of the spectrum for the bottom cell in nanocrystalline Si. A few companies are on the verge of commercializing silicon inks based on nanocrystalline silicon or on other silicon compounds. The semiconductor industry is also investigating the potential for nanocrystalline silicon, especially in the memory area. Thin-film silicon Nanocrystalline silicon and small-grained polycrystalline silicon are considered thin-film silicon. See also Amorphous silicon Conductive ink Nanoparticle Printed electronics Protocrystalline Quantum dot References External links Thin-film silicon solar cells. Allotropes of silicon Silicon solar cells Silicon, Nanocrystalline Thin-film cells Nanomaterials
Nanocrystalline silicon
Chemistry,Materials_science,Mathematics
643
55,794,265
https://en.wikipedia.org/wiki/Hexafluoroisobutylene
Hexafluoroisobutylene is an organofluorine compound with the formula (CF3)2C=CH2. This colorless gas is structurally similar to isobutylene. It is used as a comonomer in the production of modified polyvinylidene fluoride. It is produced in a multistep process starting with the reaction of acetic anhydride with hexafluoroacetone. It is oxidized by sodium hypochlorite to hexafluoroisobutylene oxide. As expected, it is a potent dienophile. See also Perfluoroisobutene References Trifluoromethyl compounds Fluoroalkenes Gases Vinylidene compounds Hydrofluoroolefins
Hexafluoroisobutylene
Physics,Chemistry
166
51,423
https://en.wikipedia.org/wiki/P-adic%20number
In number theory, given a prime number p, the p-adic numbers form an extension of the rational numbers which is distinct from the real numbers, though with some similar properties; p-adic numbers can be written in a form similar to (possibly infinite) decimals, but with digits based on a prime number p rather than ten, and extending to the left rather than to the right. For example, comparing the expansion of the rational number 1/5 in base 3 vs. the 3-adic expansion, 1/5 = 0.01210121… (base 3) and 1/5 = …121012102 (3-adic). Formally, given a prime number p, a p-adic number can be defined as a series s = a_k p^k + a_(k+1) p^(k+1) + a_(k+2) p^(k+2) + ⋯, where k is an integer (possibly negative), and each a_i is an integer such that 0 ≤ a_i < p. A p-adic integer is a p-adic number such that k ≥ 0. In general the series that represents a p-adic number is not convergent in the usual sense, but it is convergent for the p-adic absolute value |s|_p = p^(−v), where v is the least integer such that a_v ≠ 0 (if all a_i are zero, one has the zero p-adic number, which has 0 as its p-adic absolute value). Every rational number can be uniquely expressed as the sum of a series as above, with respect to the p-adic absolute value. This allows considering rational numbers as special p-adic numbers, and alternatively defining the p-adic numbers as the completion of the rational numbers for the p-adic absolute value, exactly as the real numbers are the completion of the rational numbers for the usual absolute value. p-adic numbers were first described by Kurt Hensel in 1897, though, with hindsight, some of Ernst Kummer's earlier work can be interpreted as implicitly using p-adic numbers. Motivation Roughly speaking, modular arithmetic modulo a positive integer n consists of "approximating" every integer by the remainder of its division by n, called its residue modulo n. The main property of modular arithmetic is that the residue modulo n of the result of a succession of operations on integers is the same as the result of the same succession of operations on residues modulo n. 
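The p-adic valuation and absolute value described above can be computed mechanically for any rational number. A minimal Python sketch (the function names are illustrative, not from any library):

```python
from fractions import Fraction

def p_adic_valuation(r: Fraction, p: int) -> int:
    """v such that r = p^v * (m/n), with neither m nor n divisible by p."""
    if r == 0:
        raise ValueError("the valuation of 0 is +infinity")
    v, num, den = 0, r.numerator, r.denominator
    while num % p == 0:   # factors of p in the numerator raise the valuation
        num //= p
        v += 1
    while den % p == 0:   # factors of p in the denominator lower it
        den //= p
        v -= 1
    return v

def p_adic_abs(r: Fraction, p: int) -> Fraction:
    """|r|_p = p^(-v_p(r)); the zero p-adic number has absolute value 0."""
    if r == 0:
        return Fraction(0)
    return Fraction(1, p) ** p_adic_valuation(r, p)

# |9/2|_3 = 1/9 (valuation 2), |1/3|_3 = 3 (valuation -1)
```

Note how the absolute value is small exactly when the number is highly divisible by p, the reverse of the usual intuition about size.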
If one knows that the absolute value of the result is less than n/2, this allows a computation of the result which does not involve any integer larger than n. For larger results, an old method, still in common use, consists of using several small moduli that are pairwise coprime, and applying the Chinese remainder theorem for recovering the result modulo the product of the moduli. Another method discovered by Kurt Hensel consists of using a prime modulus p, and applying Hensel's lemma for recovering iteratively the result modulo p, p², p³, … If the process is continued infinitely, this provides eventually a result which is a p-adic number. Basic lemmas The theory of p-adic numbers is fundamentally based on the two following lemmas. Every nonzero rational number can be written p^v (m/n), where v, m, and n are integers and neither m nor n is divisible by p. The exponent v is uniquely determined by the rational number and is called its p-adic valuation (this definition is a particular case of a more general definition, given below). The proof of the lemma results directly from the fundamental theorem of arithmetic. Every nonzero rational number r of valuation v can be uniquely written r = a p^v + s, where s is a rational number of valuation greater than v, and a is an integer such that 0 < a < p. The proof of this lemma results from modular arithmetic: By the above lemma, r = p^v (m/n), where m and n are integers coprime with p. By Bézout's lemma, there exist integers a and b, with 0 < a < p, such that m = a n + p b. Setting s = p^(v+1) b/n (hence the valuation of s is greater than v), we have r = p^v (a n + p b)/n = a p^v + s. To show the uniqueness of this representation, observe that if r = a′ p^v + s′, with 0 < a′ < p and the valuation of s′ greater than v, there holds by difference (a − a′) p^v = s′ − s, with |a − a′| < p and the valuation of s′ − s greater than v. Write s′ − s = p^(v+1) c/d, where d is coprime to p; then a − a′ = p c/d, which is possible only if c = 0 and a = a′. Hence a = a′ and s = s′. The above process can be iterated starting from s instead of r, giving the following. 
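The second lemma is in effect a digit-extraction step, and iterating it yields the p-adic expansion of a rational number one digit at a time. A Python sketch under the simplifying assumption that r already has nonnegative valuation (denominator coprime to p); the function name is illustrative:

```python
from fractions import Fraction

def p_adic_digits(r: Fraction, p: int, k: int) -> list[int]:
    """First k digits a_0, ..., a_(k-1) of the p-adic expansion of a
    rational r whose denominator is coprime to p."""
    assert r.denominator % p != 0, "denominator must be coprime to p"
    digits = []
    for _ in range(k):
        # a ≡ r (mod p): numerator times the modular inverse of the denominator
        a = (r.numerator * pow(r.denominator, -1, p)) % p
        digits.append(a)
        r = (r - a) / p   # the tail again has nonnegative valuation
    return digits

# 3-adic expansion of 1/5: digits 2, 0, 1, 2, 1, 0, ... (written ...012102)
print(p_adic_digits(Fraction(1, 5), 3, 6))  # [2, 0, 1, 2, 1, 0]
```

As a sanity check, the partial sum of the first k digits times powers of p is congruent to r modulo p^k, which is exactly the Hensel-style iterative recovery described above.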
Given a nonzero rational number of valuation and a positive integer , there are a rational number of nonnegative valuation and uniquely defined nonnegative integers less than such that and The -adic numbers are essentially obtained by continuing this infinitely to produce an infinite series. p-adic series The -adic numbers are commonly defined by means of -adic series. A -adic series is a formal power series of the form where is an integer and the are rational numbers that either are zero or have a nonnegative valuation (that is, the denominator of is not divisible by ). Every rational number may be viewed as a -adic series with a single nonzero term, consisting of its factorization of the form with and both coprime with . Two -adic series and are equivalent if there is an integer such that, for every integer the rational number is zero or has a -adic valuation greater than . A -adic series is normalized if either all are integers such that and or all are zero. In the latter case, the series is called the zero series. Every -adic series is equivalent to exactly one normalized series. This normalized series is obtained by a sequence of transformations, which are equivalences of series; see § Normalization of a -adic series, below. In other words, the equivalence of -adic series is an equivalence relation, and each equivalence class contains exactly one normalized -adic series. The usual operations of series (addition, subtraction, multiplication, division) are compatible with equivalence of -adic series. That is, denoting the equivalence with , if , and are nonzero -adic series such that one has The -adic numbers are often defined as the equivalence classes of -adic series, in a similar way as the definition of the real numbers as equivalence classes of Cauchy sequences. The uniqueness property of normalization allows representing any -adic number uniquely by the corresponding normalized -adic series.
The compatibility of the series equivalence leads almost immediately to basic properties of -adic numbers: Addition, multiplication and multiplicative inverse of -adic numbers are defined as for formal power series, followed by the normalization of the result. With these operations, the -adic numbers form a field, which is an extension field of the rational numbers. The valuation of a nonzero -adic number , commonly denoted is the exponent of in the first nonzero term of the corresponding normalized series; the valuation of zero is The -adic absolute value of a nonzero -adic number , is for the zero -adic number, one has Normalization of a p-adic series Starting with the series the first above lemma allows getting an equivalent series such that the -adic valuation of is zero. For that, one considers the first nonzero If its -adic valuation is zero, it suffices to change into , that is to start the summation from . Otherwise, the -adic valuation of is and where the valuation of is zero; so, one gets an equivalent series by changing to and to Iterating this process, one eventually gets, possibly after infinitely many steps, an equivalent series that either is the zero series or is a series such that the valuation of is zero. Then, if the series is not normalized, consider the first nonzero that is not an integer in the interval The second above lemma allows writing it one gets an equivalent series by replacing with and adding to Iterating this process, possibly infinitely many times, eventually provides the desired normalized -adic series.
Since there are other equivalent definitions that are commonly used, one often says that a normalized -adic series represents a -adic number, instead of saying that it is a -adic number. One can also say that any -adic series represents a -adic number, since every -adic series is equivalent to a unique normalized -adic series. This is useful for defining operations (addition, subtraction, multiplication, division) of -adic numbers: the result of such an operation is obtained by normalizing the result of the corresponding operation on series. These operations on -adic numbers are well defined, since the series operations are compatible with equivalence of -adic series. With these operations, -adic numbers form a field called the field of -adic numbers and denoted or There is a unique field homomorphism from the rational numbers into the -adic numbers, which maps a rational number to its -adic expansion. The image of this homomorphism is commonly identified with the field of rational numbers. This allows considering the -adic numbers as an extension field of the rational numbers, and the rational numbers as a subfield of the -adic numbers. The valuation of a nonzero -adic number , commonly denoted is the exponent of in the first nonzero term of every -adic series that represents . By convention, that is, the valuation of zero is This valuation is a discrete valuation. The restriction of this valuation to the rational numbers is the -adic valuation of that is, the exponent in the factorization of a rational number as with both and coprime with . p-adic integers The -adic integers are the -adic numbers with a nonnegative valuation. A -adic integer can be represented as a sequence of residues mod for each integer , satisfying the compatibility relations for . Every integer is a -adic integer (including zero, since ). The rational numbers of the form with coprime with and are also -adic integers (because has an inverse mod for every ).
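The criterion above — a rational number is a -adic integer exactly when its -adic valuation is nonnegative, i.e. when its reduced denominator is coprime with  — is easy to compute. The following sketch (plain Python; the helper names `vp` and `abs_p` are illustrative, not standard) computes the valuation and absolute value of a rational number:

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational: the exponent v in the
    factorization x = p**v * a/b with a and b coprime to p."""
    if x == 0:
        raise ValueError("the valuation of zero is +infinity by convention")
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num, v = num // p, v + 1
    while den % p == 0:
        den, v = den // p, v - 1
    return v

def abs_p(x: Fraction, p: int) -> Fraction:
    """p-adic absolute value |x|_p = p**(-v_p(x)); by convention |0|_p = 0."""
    if x == 0:
        return Fraction(0)
    return Fraction(1, p) ** vp(x, p)

print(vp(Fraction(18), 3))       # 18 = 2 * 3**2, so v_3(18) = 2
print(vp(Fraction(5, 12), 2))    # 12 = 2**2 * 3, so v_2(5/12) = -2
print(abs_p(Fraction(1, 3), 5))  # 1, so 1/3 is a 5-adic integer
```

A rational passed to `vp` is reduced automatically by `Fraction`, so at most one of the two loops runs.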
The -adic integers form a commutative ring, denoted or , that has the following properties. It is an integral domain, since it is a subring of a field, or since the first term of the series representation of the product of two nonzero -adic series is the product of their first terms. The units (invertible elements) of are the -adic numbers of valuation zero. It is a principal ideal domain, in which each ideal is generated by a power of . It is a local ring of Krull dimension one, since its only prime ideals are the zero ideal and the ideal generated by , the unique maximal ideal. It is a discrete valuation ring; this results from the preceding properties. It is the completion of the local ring which is the localization of at the prime ideal The last property provides a definition of the -adic numbers that is equivalent to the above one: the field of the -adic numbers is the field of fractions of the completion of the localization of the integers at the prime ideal generated by . Topological properties The -adic valuation allows defining an absolute value on -adic numbers: the -adic absolute value of a nonzero -adic number is where is the -adic valuation of . The -adic absolute value of is This is an absolute value that satisfies the strong triangle inequality since, for every and one has if and only if Moreover, if one has This makes the -adic numbers a metric space, and even an ultrametric space, with the -adic distance defined by As a metric space, the -adic numbers form the completion of the rational numbers equipped with the -adic absolute value. This provides another way for defining the -adic numbers.
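The strong triangle inequality stated above can be checked exhaustively on a small set of rationals. The following sketch (plain Python; a self-contained `abs_p` helper, not a library function) verifies both the inequality and the fact that it is an equality whenever the two absolute values differ:

```python
from fractions import Fraction
from itertools import product

def abs_p(x: Fraction, p: int) -> Fraction:
    # |0|_p = 0; otherwise p**(-v), v the exponent of p in x.
    if x == 0:
        return Fraction(0)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num, v = num // p, v + 1
    while den % p == 0:
        den, v = den // p, v - 1
    return Fraction(1, p) ** v

p = 5
samples = [Fraction(n, d) for n, d in product(range(-6, 7), range(1, 7)) if n]
for x, y in product(samples, repeat=2):
    # strong (ultrametric) triangle inequality
    assert abs_p(x + y, p) <= max(abs_p(x, p), abs_p(y, p))
    # ... with equality whenever |x|_p != |y|_p
    if abs_p(x, p) != abs_p(y, p):
        assert abs_p(x + y, p) == max(abs_p(x, p), abs_p(y, p))
print("ultrametric inequality verified on", len(samples) ** 2, "pairs")
```

Using exact `Fraction` arithmetic avoids any floating-point comparison issues in the asserts.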
However, the general construction of a completion can be simplified in this case, because the metric is defined by a discrete valuation (in short, one can extract from every Cauchy sequence a subsequence such that the differences between two consecutive terms have strictly decreasing absolute values; such a subsequence is the sequence of the partial sums of a -adic series, and thus a unique normalized -adic series can be associated to every equivalence class of Cauchy sequences; so, for building the completion, it suffices to consider normalized -adic series instead of equivalence classes of Cauchy sequences). As the metric is defined from a discrete valuation, every open ball is also closed. More precisely, the open ball equals the closed ball where is the least integer such that Similarly, where is the greatest integer such that This implies that the -adic numbers form a locally compact space, and the -adic integers—that is, the ball —form a compact space. p-adic expansion of rational numbers The decimal expansion of a positive rational number is its representation as a series where is an integer and each is also an integer such that This expansion can be computed by long division of the numerator by the denominator, which is itself based on the following theorem: If is a rational number such that there is an integer such that and with The decimal expansion is obtained by repeatedly applying this result to the remainder which in the iteration assumes the role of the original rational number . The -adic expansion of a rational number is defined similarly, but with a different division step. More precisely, given a fixed prime number , every nonzero rational number can be uniquely written as where is a (possibly negative) integer, and are coprime integers both coprime with , and is positive. The integer is the -adic valuation of , denoted and is its -adic absolute value, denoted (the absolute value is small when the valuation is large). 
The division step consists of writing where is an integer such that and is either zero, or a rational number such that (that is, ). The -adic expansion of is the formal power series obtained by repeating indefinitely the above division step on successive remainders. In a -adic expansion, all are integers such that If with , the process stops eventually with a zero remainder; in this case, the series is completed by trailing terms with a zero coefficient, and is the representation of in base-. The existence and the computation of the -adic expansion of a rational number results from Bézout's identity in the following way. If, as above, and and are coprime, there exist integers and such that So Then, the Euclidean division of by gives with This gives the division step as so that in the iteration is the new rational number. The uniqueness of the division step and of the whole -adic expansion is easy: if one has This means divides Since and the following must be true: and Thus, one gets and since divides it must be that The -adic expansion of a rational number is a series that converges to the rational number, if one applies the definition of a convergent series with the -adic absolute value. In the standard -adic notation, the digits are written in the same order as in a standard base- system, namely with the powers of the base increasing to the left. This means that the production of the digits is reversed and the limit happens on the left hand side. The -adic expansion of a rational number is eventually periodic. Conversely, a series with converges (for the -adic absolute value) to a rational number if and only if it is eventually periodic; in this case, the series is the -adic expansion of that rational number. The proof is similar to that of the similar result for repeating decimals. Example Let us compute the 5-adic expansion of Bézout's identity for 5 and the denominator 3 is (for larger examples, this can be computed with the extended Euclidean algorithm). 
Thus For the next step, one has to expand (the factor 5 has to be viewed as a "shift" of the -adic valuation, similar to the basis of any number expansion, and thus it should not be itself expanded). To expand , we start from the same Bézout's identity and multiply it by , giving The "integer part" is not in the right interval. So, one has to use Euclidean division by for getting giving and the expansion in the first step becomes Similarly, one has and As the "remainder" has already been found, the process can be continued easily, giving coefficients for odd powers of five, and for even powers. Or in the standard 5-adic notation with the ellipsis on the left hand side. Positional notation It is possible to use a positional notation similar to that which is used to represent numbers in base . Let be a normalized -adic series, i.e. each is an integer in the interval One can suppose that by setting for (if ), and adding the resulting zero terms to the series. If the positional notation consists of writing the consecutively, ordered by decreasing values of , often with appearing on the right as an index: So, the computation of the example above shows that and When a separating dot is added before the digits with negative index, and, if the index is present, it appears just after the separating dot. For example, and If a -adic representation is finite on the left (that is, for large values of ), then it has the value of a nonnegative rational number of the form with integers. These rational numbers are exactly the nonnegative rational numbers that have a finite representation in base . For these rational numbers, the two representations are the same. 
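The division step described above can be carried out mechanically, using a modular inverse of the denominator instead of an explicit Bézout identity at each step. The following sketch (plain Python; `p_adic_digits` is an illustrative name, and `pow(x, -1, p)` for modular inverses assumes Python 3.8+) reproduces the 5-adic expansion of computed in the example:

```python
from fractions import Fraction

def p_adic_digits(x: Fraction, p: int, n: int) -> list:
    """First n digits a_0, a_1, ... of the p-adic expansion of a rational x
    of nonnegative valuation (denominator coprime to p).

    Each step picks the unique digit a with 0 <= a < p congruent to x
    modulo p, then replaces x by (x - a)/p, as in the division step above."""
    digits = []
    for _ in range(n):
        # a = x mod p, via the inverse of the denominator modulo p
        a = x.numerator * pow(x.denominator, -1, p) % p
        digits.append(a)
        x = (x - a) / p
    return digits

# The example in the text: digits 2, then alternating 3 (odd powers of
# five) and 1 (even powers), i.e. ...1313132 in 5-adic notation.
print(p_adic_digits(Fraction(1, 3), 5, 7))  # [2, 3, 1, 3, 1, 3, 1]
```

Since the successive remainders here cycle between two values, the digit sequence is eventually periodic, as the text states for every rational number.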
Modular properties The quotient ring may be identified with the ring of the integers modulo This can be shown by remarking that every -adic integer, represented by its normalized -adic series, is congruent modulo with its partial sum whose value is an integer in the interval A straightforward verification shows that this defines a ring isomorphism from to The inverse limit of the rings is defined as the ring formed by the sequences such that and for every . The mapping that maps a normalized -adic series to the sequence of its partial sums is a ring isomorphism from to the inverse limit of the This provides another way for defining -adic integers (up to an isomorphism). This definition of -adic integers is especially useful for practical computations, as it allows building -adic integers by successive approximations. For example, for computing the -adic (multiplicative) inverse of an integer, one can use Newton's method, starting from the inverse modulo ; then, each Newton step computes the inverse modulo from the inverse modulo The same method can be used for computing the -adic square root of an integer that is a quadratic residue modulo . This seems to be the fastest known method for testing whether a large integer is a square: it suffices to test whether the given integer is the square of the value found in . Applying Newton's method to find the square root requires to be larger than twice the given integer, which is quickly satisfied. Hensel lifting is a similar method that allows one to "lift" the factorization modulo of a polynomial with integer coefficients to a factorization modulo for large values of . This is commonly used by polynomial factorization algorithms. Notation There are several different conventions for writing -adic expansions. So far this article has used a notation for -adic expansions in which powers of increase from right to left.
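The Newton iterations described under Modular properties above can be sketched as follows (plain Python; function names are illustrative, the brute-force starting square root is only sensible for small , and `pow(x, -1, m)` assumes Python 3.8+):

```python
def inverse_mod_p_power(a: int, p: int, n: int) -> int:
    """Lift the inverse of a modulo p to an inverse modulo p**n by the
    Newton iteration x -> x*(2 - a*x), which doubles the number of
    correct p-adic digits at each step."""
    x = pow(a, -1, p)  # inverse modulo p as the starting point
    k = 1
    while k < n:
        k = min(2 * k, n)
        x = x * (2 - a * x) % p**k
    return x

def sqrt_mod_p_power(a: int, p: int, n: int) -> int:
    """Lift a square root of a modulo an odd prime p to modulo p**n,
    via the Newton iteration x -> (x + a/x)/2 with modular inverses."""
    # brute-force a starting square root modulo p (fine for small p)
    x = next(r for r in range(1, p) if r * r % p == a % p)
    k = 1
    while k < n:
        k = min(2 * k, n)
        m = p**k
        x = (x + a * pow(x, -1, m)) * pow(2, -1, m) % m
    return x

x = inverse_mod_p_power(3, 5, 6)  # 1/3 as an element of Z/5**6
assert 3 * x % 5**6 == 1
r = sqrt_mod_p_power(2, 7, 6)     # a square root of 2 in Z/7**6
assert r * r % 7**6 == 2
```

Each pass through the loop is one Hensel/Newton step: an approximation correct modulo a power of is refined to one correct modulo (roughly) the square of that power, then reduced to the working precision.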
With this right-to-left notation the 3-adic expansion of for example, is written as When performing arithmetic in this notation, digits are carried to the left. It is also possible to write -adic expansions so that the powers of increase from left to right, and digits are carried to the right. With this left-to-right notation the 3-adic expansion of is -adic expansions may be written with other sets of digits instead of {0, 1, ..., p − 1}. For example, the -adic expansion of can be written using the balanced ternary digits {−1, 0, 1}, with −1 representing negative one, as In fact, any set of integers which are in distinct residue classes modulo may be used as -adic digits. In number theory, Teichmüller representatives are sometimes used as digits. Quote notation is a variant of the -adic representation of rational numbers that was proposed in 1979 by Eric Hehner and Nigel Horspool for implementing on computers the (exact) arithmetic with these numbers. Cardinality Both and are uncountable and have the cardinality of the continuum. For this results from the -adic representation, which defines a bijection of on the power set . For this results from its expression as a countably infinite union of copies of : Algebraic closure contains and is a field of characteristic . Because can be written as a sum of squares, cannot be turned into an ordered field. The field of real numbers has only a single proper algebraic extension: the complex numbers . In other words, this quadratic extension is already algebraically closed. By contrast, the algebraic closure of , denoted has infinite degree, that is, has infinitely many inequivalent algebraic extensions. Also contrasting the case of real numbers, although there is a unique extension of the -adic valuation to the latter is not (metrically) complete. Its (metric) completion is called or . Here an end is reached, as is algebraically closed. However, unlike , this field is not locally compact. and are isomorphic as rings, so we may regard as endowed with an exotic metric.
The proof of existence of such a field isomorphism relies on the axiom of choice, and does not provide an explicit example of such an isomorphism (that is, it is not constructive). If is any finite Galois extension of , the Galois group is solvable. Thus, the Galois group is prosolvable. Multiplicative group contains the -th cyclotomic field () if and only if . For instance, the -th cyclotomic field is a subfield of if and only if , or . In particular, there is no multiplicative -torsion in if . Also, is the only non-trivial torsion element in . Given a natural number , the index of the multiplicative group of the -th powers of the non-zero elements of in is finite. The number , defined as the sum of reciprocals of factorials, is not a member of any -adic field; but for . For one must take at least the fourth power. (Thus a number with similar properties as — namely a -th root of — is a member of for all .) Local–global principle Helmut Hasse's local–global principle is said to hold for an equation if it can be solved over the rational numbers if and only if it can be solved over the real numbers and over the -adic numbers for every prime . This principle holds, for example, for equations given by quadratic forms, but fails for higher polynomials in several indeterminates. Rational arithmetic with Hensel lifting Generalizations and related concepts The reals and the -adic numbers are the completions of the rationals; it is also possible to complete other fields, for instance general algebraic number fields, in an analogous way. This will be described now. Suppose D is a Dedekind domain and E is its field of fractions. Pick a non-zero prime ideal P of D. If x is a non-zero element of E, then xD is a fractional ideal and can be uniquely factored as a product of positive and negative powers of non-zero prime ideals of D. 
We write ordP(x) for the exponent of P in this factorization, and for any choice of number c greater than 1 we can set Completing with respect to this absolute value |⋅|P yields a field EP, the proper generalization of the field of p-adic numbers to this setting. The choice of c does not change the completion (different choices yield the same concept of Cauchy sequence, so the same completion). It is convenient, when the residue field D/P is finite, to take for c the size of D/P. For example, when E is a number field, Ostrowski's theorem says that every non-trivial non-Archimedean absolute value on E arises as some |⋅|P. The remaining non-trivial absolute values on E arise from the different embeddings of E into the real or complex numbers. (In fact, the non-Archimedean absolute values can be considered as simply the different embeddings of E into the fields Cp, thus putting the description of all the non-trivial absolute values of a number field on a common footing.) Often, one needs to simultaneously keep track of all the above-mentioned completions when E is a number field (or more generally a global field), which are seen as encoding "local" information. This is accomplished by adele rings and idele groups. p-adic integers can be extended to p-adic solenoids . There is a map from to the circle group whose fibers are the p-adic integers , in analogy to how there is a map from to the circle whose fibers are . See also Non-Archimedean p-adic quantum mechanics p-adic Hodge theory p-adic Teichmuller theory p-adic analysis p-adic valuation 1 + 2 + 4 + 8 + ... k-adic notation C-minimal theory Hensel's lemma Locally compact field Mahler's theorem Profinite integer Volkenborn integral Two's complement Footnotes Notes Citations References . — Translation into English by John Stillwell of Theorie der algebraischen Functionen einer Veränderlichen (1882). 
Further reading External links p-adic number at Springer On-line Encyclopaedia of Mathematics Field (mathematics) Number theory
Aspergillus spathulatus is a species of fungus in the genus Aspergillus. It is from the Fumigati section. Several fungi from this section produce heat-resistant ascospores, and the isolates from this section are frequently obtained from locations where natural fires have previously occurred. The species was first described in 1985. It has been reported to produce aszonalenins and xanthocillins. Growth and morphology A. spathulatus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References spathulatus Fungi described in 1985 Fungus species
Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, electric and magnetic circuits. The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon. The modern form of the equations in their most common formulation is credited to Oliver Heaviside. Maxwell's equations may be combined to demonstrate how fluctuations in electromagnetic fields (waves) propagate at a constant speed in vacuum, c (). Known as electromagnetic radiation, these waves occur at various wavelengths to produce a spectrum of radiation from radio waves to gamma rays. In partial differential equation form and a coherent system of units, Maxwell's microscopic equations can be written as With the electric field, the magnetic field, the electric charge density and the current density. is the vacuum permittivity and the vacuum permeability. The equations have two major variants: The microscopic equations have universal applicability but are unwieldy for common calculations. They relate the electric and magnetic fields to total charge and total current, including the complicated charges and currents in materials at the atomic scale. The macroscopic equations define two new auxiliary fields that describe the large-scale behaviour of matter without having to consider atomic-scale charges and quantum phenomena like spins. 
However, their use requires experimentally determined parameters for a phenomenological description of the electromagnetic response of materials. The term "Maxwell's equations" is often also used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric and magnetic scalar potentials are preferred for explicitly solving the equations as a boundary value problem, analytical mechanics, or for use in quantum mechanics. The covariant formulation (on spacetime rather than space and time separately) makes the compatibility of Maxwell's equations with special relativity manifest. Maxwell's equations in curved spacetime, commonly used in high-energy and gravitational physics, are compatible with general relativity. In fact, Albert Einstein developed special and general relativity to accommodate the invariant speed of light, a consequence of Maxwell's equations, with the principle that only relative movement has physical consequences. The publication of the equations marked the unification of a theory for previously separately described phenomena: magnetism, electricity, light, and associated radiation. Since the mid-20th century, it has been understood that Maxwell's equations do not give an exact description of electromagnetic phenomena, but are instead a classical limit of the more precise theory of quantum electrodynamics. History of the equations Conceptual descriptions Gauss's law Gauss's law describes the relationship between an electric field and electric charges: an electric field points away from positive charges and towards negative charges, and the net outflow of the electric field through a closed surface is proportional to the enclosed charge, including bound charge due to polarization of material. The coefficient of the proportion is the permittivity of free space. 
Gauss's law for magnetism Gauss's law for magnetism states that electric charges have no magnetic analogues, called magnetic monopoles; no north or south magnetic poles exist in isolation. Instead, the magnetic field of a material is attributed to a dipole, and the net outflow of the magnetic field through a closed surface is zero. Magnetic dipoles may be represented as loops of current or inseparable pairs of equal and opposite "magnetic charges". Precisely, the total magnetic flux through a Gaussian surface is zero, and the magnetic field is a solenoidal vector field. Faraday's law The Maxwell–Faraday version of Faraday's law of induction describes how a time-varying magnetic field corresponds to curl of an electric field. In integral form, it states that the work per unit charge required to move a charge around a closed loop equals the rate of change of the magnetic flux through the enclosed surface. The electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field and generates an electric field in a nearby wire. Ampère–Maxwell law The original law of Ampère states that magnetic fields relate to electric current. Maxwell's addition states that magnetic fields also relate to changing electric fields, which Maxwell called displacement current. The integral form states that electric and displacement currents are associated with a proportional magnetic field along any enclosing curve. Maxwell's modification of Ampère's circuital law is important because the laws of Ampère and Gauss must otherwise be adjusted for static fields. As a consequence, it predicts that a rotating magnetic field occurs with a changing electric field. A further consequence is the existence of self-sustaining electromagnetic waves which travel through empty space. 
The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents, matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics. Formulation in terms of electric and magnetic fields (microscopic or in vacuum version) In the electric and magnetic field formulation there are four equations that determine the fields for given charge and current distribution. A separate law of nature, the Lorentz force law, describes how the electric and magnetic fields act on charged particles and currents. By convention, a version of this law in the original equations by Maxwell is no longer included. The vector calculus formalism below, the work of Oliver Heaviside, has become standard. It is rotationally invariant, and therefore mathematically more transparent than Maxwell's original 20 equations in x, y and z components. The relativistic formulations are more symmetric and Lorentz invariant. For the same equations expressed using tensor calculus or differential forms (see ). The differential and integral formulations are mathematically equivalent; both are useful. The integral formulation relates fields within a region of space to fields on the boundary and can often be used to simplify and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential equations are purely local and are a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis. Key to the notation Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated. 
The equations introduce the electric field, , a vector field, and the magnetic field, , a pseudovector field, each generally having a time and location dependence. The sources are the total electric charge density (total charge per unit volume), , and the total electric current density (total current per unit area), . The universal constants appearing in the equations (the first two explicitly only in the SI formulation) are: the permittivity of free space, ; the permeability of free space, ; and the speed of light, Differential equations In the differential equations, the nabla symbol, , denotes the three-dimensional gradient operator, del, the symbol (pronounced "del dot") denotes the divergence operator, the symbol (pronounced "del cross") denotes the curl operator. Integral equations In the integral equations, is any volume with closed boundary surface , and is any surface with closed boundary curve , The equations are a little easier to interpret with time-independent surfaces and volumes. Time-independent surfaces and volumes are "fixed" and do not change over a given time interval. For example, since the surface is time-independent, we can bring the differentiation under the integral sign in Faraday's law: Maxwell's equations can be formulated with possibly time-dependent surfaces and volumes by using the differential version and using Gauss' and Stokes' theorems as appropriate. is a surface integral over the boundary surface , with the loop indicating the surface is closed is a volume integral over the volume , is a line integral around the boundary curve , with the loop indicating the curve is closed. is a surface integral over the surface , The total electric charge enclosed in is the volume integral over of the charge density (see the "macroscopic formulation" section below): where is the volume element.
The net magnetic flux is the surface integral of the magnetic field passing through a fixed surface, : The net electric flux is the surface integral of the electric field passing through : The net electric current is the surface integral of the electric current density passing through : where denotes the differential vector element of surface area , normal to surface . (Vector area is sometimes denoted by rather than , but this conflicts with the notation for magnetic vector potential). Formulation in the SI Formulation in the Gaussian system The definitions of charge, electric field, and magnetic field can be altered to simplify theoretical calculation, by absorbing dimensioned factors of and into the units (and thus redefining these). With a corresponding change in the values of the quantities for the Lorentz force law this yields the same physics, i.e. trajectories of charged particles, or work done by an electric motor. These definitions are often preferred in theoretical and high energy physics where it is natural to take the electric and magnetic field with the same units, to simplify the appearance of the electromagnetic tensor: the Lorentz covariant object unifying electric and magnetic field would then contain components with uniform unit and dimension. Such modified definitions are conventionally used with the Gaussian (CGS) units. Using these definitions, colloquially "in Gaussian units", the Maxwell equations become: The equations simplify slightly when a system of quantities is chosen in which the speed of light, c, is used for nondimensionalization, so that, for example, seconds and lightseconds are interchangeable, and c = 1. Further changes are possible by absorbing factors of . This process, called rationalization, affects whether Coulomb's law or Gauss's law includes such a factor (see Heaviside–Lorentz units, used mainly in particle physics).
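The integral form of Gauss's law can be illustrated numerically: for a point charge anywhere inside a closed surface, the net electric flux through the surface equals the enclosed charge divided by the permittivity of free space, regardless of where the charge sits. The following sketch (plain Python; a simple midpoint quadrature over a unit sphere, with an arbitrarily chosen 1 nC charge placed off-centre) checks this:

```python
from math import sin, cos, pi

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def flux_through_unit_sphere(q, charge_pos, n=400):
    """Numerically integrate E . dA over the unit sphere for a point
    charge q at charge_pos, using the midpoint rule in theta and phi."""
    total = 0.0
    dth, dph = pi / n, 2 * pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        for j in range(n):
            ph = (j + 0.5) * dph
            # point on the sphere; its coordinates are also the outward normal
            x, y, z = sin(th) * cos(ph), sin(th) * sin(ph), cos(th)
            rx, ry, rz = x - charge_pos[0], y - charge_pos[1], z - charge_pos[2]
            r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
            # E . n, with E the Coulomb field of the point charge
            e_dot_n = q / (4 * pi * EPS0) * (rx * x + ry * y + rz * z) / r3
            total += e_dot_n * sin(th) * dth * dph  # dA = sin(th) dth dph
    return total

q = 1e-9  # 1 nC
flux = flux_through_unit_sphere(q, (0.0, 0.0, 0.3))  # charge off-centre, inside
print(flux, q / EPS0)  # both about 112.9 V*m
```

Moving the charge around inside the sphere leaves the computed flux unchanged, while moving it outside drives the flux to zero, as the law requires.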
Relationship between differential and integral formulations

The equivalence of the differential and integral formulations is a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem.

Flux and divergence

According to the (purely mathematical) Gauss divergence theorem, the electric flux through the boundary surface ∂Ω can be rewritten as

∯∂Ω E ⋅ dS = ∭Ω (∇ ⋅ E) dV.

The integral version of Gauss's equation can thus be rewritten as

∭Ω (∇ ⋅ E − ρ/ε0) dV = 0.

Since Ω is arbitrary (e.g. an arbitrary small ball with arbitrary center), this is satisfied if and only if the integrand is zero everywhere, i.e. ∇ ⋅ E = ρ/ε0. This is the differential-equations formulation of the Gauss equation up to a trivial rearrangement. Similarly rewriting the magnetic flux in Gauss's law for magnetism in integral form gives

∯∂Ω B ⋅ dS = ∭Ω (∇ ⋅ B) dV = 0,

which is satisfied for all Ω if and only if ∇ ⋅ B = 0 everywhere.

Circulation and curl

By the Kelvin–Stokes theorem we can rewrite the line integrals of the fields around the closed boundary curve ∂Σ to an integral of the "circulation of the fields" (i.e. their curls) over a surface it bounds, i.e.

∮∂Σ B ⋅ dℓ = ∬Σ (∇ × B) ⋅ dS.

Hence the Ampère–Maxwell law, the modified version of Ampère's circuital law, in integral form can be rewritten as

∬Σ (∇ × B − μ0(J + ε0 ∂E/∂t)) ⋅ dS = 0.

Since Σ can be chosen arbitrarily, e.g. as an arbitrarily small, arbitrarily oriented, and arbitrarily centered disk, we conclude that the integrand is zero if and only if the Ampère–Maxwell law in differential-equations form is satisfied. The equivalence of Faraday's law in differential and integral form follows likewise.

The line integrals and curls are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field.

Charge conservation

The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the Ampère–Maxwell law has zero divergence by the div–curl identity, ∇ ⋅ (∇ × B) = 0. 
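The div–curl identity invoked here, ∇ ⋅ (∇ × F) = 0, holds for any smooth vector field and can be verified numerically. The sketch below (an arbitrary illustrative field on a small grid, using NumPy's central-difference gradient) shows the discrete divergence of a discrete curl vanishing in the grid interior:

```python
import numpy as np

# Finite-difference check of the identity div(curl F) = 0 for an
# arbitrary smooth vector field F sampled on a 3-D grid.  Central
# differences commute, so the interior result is zero to round-off.
n, h = 48, 0.1
x, y, z = np.meshgrid(*(np.arange(n) * h,) * 3, indexing="ij")

# An arbitrary smooth test field F = (Fx, Fy, Fz)
Fx = np.sin(x) * np.cos(y)
Fy = y * z**2
Fz = np.exp(-x) * np.sin(z)

def d(f, axis):
    return np.gradient(f, h, axis=axis)

# Components of curl F
Cx = d(Fz, 1) - d(Fy, 2)
Cy = d(Fx, 2) - d(Fz, 0)
Cz = d(Fy, 0) - d(Fx, 1)

# div(curl F), trimming the boundary where one-sided stencils are used
div_curl = d(Cx, 0) + d(Cy, 1) + d(Cz, 2)
max_err = float(np.abs(div_curl[2:-2, 2:-2, 2:-2]).max())
```

The curl itself is far from zero (e.g. its z-component is sin x sin y here), yet its divergence vanishes to machine precision — the discrete analogue of the step used in the charge-conservation derivation.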
Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives:

0 = ∇ ⋅ (∇ × B) = μ0 (∇ ⋅ J + ε0 ∂(∇ ⋅ E)/∂t) = μ0 (∇ ⋅ J + ∂ρ/∂t),

i.e., ∂ρ/∂t + ∇ ⋅ J = 0. By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary:

d/dt ∭Ω ρ dV = −∯∂Ω J ⋅ dS.

In particular, in an isolated system the total charge is conserved.

Vacuum equations, electromagnetic waves and speed of light

In a region with no charges (ρ = 0) and no currents (J = 0), such as in vacuum, Maxwell's equations reduce to:

∇ ⋅ E = 0,  ∇ × E = −∂B/∂t,  ∇ ⋅ B = 0,  ∇ × B = μ0ε0 ∂E/∂t.

Taking the curl of the curl equations, and using the curl of the curl identity ∇ × (∇ × X) = ∇(∇ ⋅ X) − ∇²X, we obtain

μ0ε0 ∂²E/∂t² − ∇²E = 0,  μ0ε0 ∂²B/∂t² − ∇²B = 0.

The quantity μ0ε0 has the dimension (T/L)². Defining c = (μ0ε0)^(−1/2), the equations above have the form of the standard wave equations

(1/c²) ∂²E/∂t² − ∇²E = 0,  (1/c²) ∂²B/∂t² − ∇²B = 0.

Already during Maxwell's lifetime, it was found that the known values for ε0 and μ0 give c ≈ 2.998 × 10⁸ m/s, then already known to be the speed of light in free space. This led him to propose that light and radio waves were propagating electromagnetic waves, since amply confirmed. In the old SI system of units, the values of μ0 = 4π × 10⁻⁷ H/m and c = 299792458 m/s are defined constants (which means that by definition ε0 = 1/(μ0c²)) that define the ampere and the metre. In the new SI system, only c keeps its defined value, and the electron charge gets a defined value.

In materials with relative permittivity, εr, and relative permeability, μr, the phase velocity of light becomes

vp = 1/√(μ0μr ε0εr) = c/√(μr εr),

which is usually less than c. In addition, E and B are perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's modification of Ampère's circuital law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at velocity c. 
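The numerical relationship between the constants can be checked directly. A minimal sketch, using the conventional values of μ0 and ε0 (the water permittivity value is an illustrative assumption for optical frequencies):

```python
import math

# c = 1 / sqrt(mu0 * eps0): recovering the speed of light from the
# electromagnetic constants, as described in the text.
MU0 = 4.0e-7 * math.pi        # permeability of free space, H/m (old SI)
EPS0 = 8.8541878128e-12       # permittivity of free space, F/m

c = 1.0 / math.sqrt(MU0 * EPS0)   # ~2.998e8 m/s

# Phase velocity in a material with relative permittivity eps_r and
# relative permeability mu_r: v_p = c / sqrt(eps_r * mu_r) <= c.
def phase_velocity(eps_r, mu_r=1.0):
    return c / math.sqrt(eps_r * mu_r)

# Illustrative: water at optical frequencies has eps_r ~ 1.77 (= n^2)
v_water = phase_velocity(1.77)
```

The computed c matches the defined value 299 792 458 m/s to within rounding of the constants, and the phase velocity in the material comes out below c, as the text states.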
Macroscopic formulation

The above equations are the microscopic version of Maxwell's equations, expressing the electric and the magnetic fields in terms of the (possibly atomic-level) charges and currents present. This is sometimes called the "general" form, but the macroscopic version below is equally general, the difference being one of bookkeeping. The microscopic version is sometimes called "Maxwell's equations in vacuum": this refers to the fact that the material medium is not built into the structure of the equations, but appears only in the charge and current terms. The microscopic version was introduced by Lorentz, who tried to use it to derive the macroscopic properties of bulk matter from its microscopic constituents. "Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more similar to those that Maxwell introduced himself. In the macroscopic equations, the influence of bound charge Qb and bound current Ib is incorporated into the displacement field D and the magnetizing field H, while the equations depend only on the free charges Qf and free currents If. This reflects a splitting of the total electric charge Q and current I (and their densities ρ and J) into free and bound parts:

Q = Qf + Qb,  I = If + Ib,  ρ = ρf + ρb,  J = Jf + Jb.

The cost of this splitting is that the additional fields D and H need to be determined through phenomenological constitutive equations relating these fields to the electric field E and the magnetic field B, together with the bound charge and current. See below for a detailed description of the differences between the microscopic equations, dealing with total charge and current including material contributions, useful in air/vacuum; and the macroscopic equations, dealing with free charge and current, practical to use within materials. 
Bound charge and current

When an electric field is applied to a dielectric material its molecules respond by forming microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. For example, if every molecule responds the same, similar to that shown in the figure, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of the polarization, P, of the material, its dipole moment per unit volume. If P is uniform, a macroscopic separation of charge is produced only at the surfaces where P enters and leaves the material. For non-uniform P, a charge is also produced in the bulk. Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the components of the atoms, most notably their electrons. The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual charge is traveling a large distance. These bound currents can be described using the magnetization M. The very complicated and granular bound charges and bound currents, therefore, can be represented on the macroscopic scale in terms of P and M, which average these charges and currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also sufficiently small that they vary with location in the material. 
As such, Maxwell's macroscopic equations ignore many details on a fine scale that can be unimportant to understanding matters on a gross scale by calculating fields that are averaged over some suitable volume.

Auxiliary fields, polarization and magnetization

The definitions of the auxiliary fields are:

D = ε0 E + P,
H = (1/μ0) B − M,

where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. The macroscopic bound charge density ρb and bound current density Jb in terms of polarization P and magnetization M are then defined as

ρb = −∇ ⋅ P,
Jb = ∇ × M + ∂P/∂t.

If we define the total, bound, and free charge and current density by

ρ = ρb + ρf,  J = Jb + Jf,

and use the defining relations above to eliminate D and H, the "macroscopic" Maxwell's equations reproduce the "microscopic" equations.

Constitutive relations

In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between the displacement field D and the electric field E, as well as the magnetizing field H and the magnetic field B. Equivalently, we have to specify the dependence of the polarization P (hence the bound charge) and the magnetization M (hence the bound current) on the applied electric and magnetic field. The equations specifying this response are called constitutive relations. For real-world materials, the constitutive relations are rarely simple, except approximately, and usually determined by experiment. See the main article on constitutive relations for a fuller description.

For materials without polarization and magnetization, the constitutive relations are (by definition)

D = ε0 E,  H = B/μ0,

where ε0 is the permittivity of free space and μ0 the permeability of free space. Since there is no bound charge, the total and the free charge and current are equal. An alternative viewpoint on the microscopic equations is that they are the macroscopic equations together with the statement that vacuum behaves like a perfect linear "material" without additional polarization and magnetization. 
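The standard SI definitions of the auxiliary fields, D = ε0E + P and H = B/μ0 − M, are simple pointwise relations, sketched below with arbitrary illustrative field values:

```python
import numpy as np

# Pointwise evaluation of the auxiliary-field definitions
#   D = eps0 * E + P      and      H = B / mu0 - M.
# All field values here are arbitrary illustrative numbers.
EPS0 = 8.8541878128e-12   # F/m
MU0 = 4.0e-7 * np.pi      # H/m

E = np.array([1.0e3, 0.0, 0.0])    # electric field, V/m
P = np.array([2.0e-9, 0.0, 0.0])   # polarization, C/m^2
B = np.array([0.0, 0.1, 0.0])      # magnetic field, T
M = np.array([0.0, 5.0, 0.0])      # magnetization, A/m

D = EPS0 * E + P          # displacement field, C/m^2
H = B / MU0 - M           # magnetizing field, A/m
```

Inverting the second definition gives B = μ0(H + M), which is a quick consistency check; with P = M = 0 the relations collapse to the vacuum case D = ε0E, H = B/μ0 stated above.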
More generally, for linear materials the constitutive relations are

D = εE,  B = μH,

where ε is the permittivity and μ the permeability of the material. For the displacement field D the linear approximation is usually excellent because for all but the most extreme electric fields or temperatures obtainable in the laboratory (high power pulsed lasers) the interatomic electric fields of materials, of the order of 10¹¹ V/m, are much higher than the external field. For the magnetizing field H, however, the linear approximation can break down in common materials like iron, leading to phenomena like hysteresis. Even the linear case can have various complications, however.

For homogeneous materials, ε and μ are constant throughout the material, while for inhomogeneous materials they depend on location within the material (and perhaps time).
For isotropic materials, ε and μ are scalars, while for anisotropic materials (e.g. due to crystal structure) they are tensors.
Materials are generally dispersive, so ε and μ depend on the frequency of any incident EM waves.

Even more generally, in the case of non-linear materials (see for example nonlinear optics), D and P are not necessarily proportional to E; similarly H or M is not necessarily proportional to B. In general D and H depend on both E and B, on location and time, and possibly other physical quantities. In applications one also has to describe how the free currents and charge density behave in terms of E and B, possibly coupled to other physical quantities like pressure, and the mass, number density, and velocity of charge-carrying particles. E.g., the original equations given by Maxwell (see History of Maxwell's equations) included Ohm's law in the form

J = σE.

Alternative formulations

Following are some of the several other mathematical formalisms of Maxwell's equations, with the columns separating the two homogeneous Maxwell equations from the two inhomogeneous ones. 
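The isotropic/anisotropic distinction can be made concrete: for a scalar ε, D stays parallel to E, while a tensor ε can rotate D away from E. A minimal sketch (all material values are illustrative assumptions, not from the article):

```python
import numpy as np

# Linear constitutive relation D = eps * E, isotropic vs anisotropic.
EPS0 = 8.8541878128e-12   # F/m

# Isotropic material: eps is a scalar, so D is parallel to E.
eps_r_iso = 2.5
E1 = np.array([100.0, 50.0, 0.0])          # applied field, V/m
D_iso = EPS0 * eps_r_iso * E1

# Anisotropic material (e.g. a uniaxial crystal): eps is a tensor,
# so D need not be parallel to E.  Diagonal values are illustrative.
eps_r_tensor = np.diag([2.5, 2.5, 4.0])
E2 = np.array([100.0, 0.0, 100.0])
D_aniso = EPS0 * eps_r_tensor @ E2

# Ohm's law in the form J = sigma * E, as in Maxwell's original set.
sigma = 5.96e7                              # ~copper conductivity, S/m
J = sigma * E1
```

The cross product D × E vanishes in the isotropic case but not in the anisotropic one, which is exactly the "not necessarily proportional" behavior described above.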
Each formulation has versions directly in terms of the electric and magnetic fields, and indirectly in terms of the electrical potential φ and the vector potential A. Potentials were introduced as a convenient way to solve the homogeneous equations, but it was thought that all observable physics was contained in the electric and magnetic fields (or relativistically, the Faraday tensor). The potentials play a central role in quantum mechanics, however, and act quantum mechanically with observable consequences even when the electric and magnetic fields vanish (Aharonov–Bohm effect). Each table describes one formalism. See the main article for details of each formulation. The direct spacetime formulations make manifest that the Maxwell equations are relativistically invariant, where space and time are treated on equal footing. Because of this symmetry, the electric and magnetic fields are treated on equal footing and are recognized as components of the Faraday tensor. This reduces the four Maxwell equations to two, which simplifies the equations, although we can no longer use the familiar vector formulation. Maxwell equations in formulations that do not treat space and time manifestly on the same footing have Lorentz invariance as a hidden symmetry. This was a major source of inspiration for the development of relativity theory. Indeed, even the formulation that treats space and time separately is not a non-relativistic approximation and describes the same physics by simply renaming variables. For this reason the relativistic invariant equations are usually called the Maxwell equations as well. Each table below describes one formalism. In the tensor calculus formulation, the electromagnetic tensor F_αβ is an antisymmetric covariant order-2 tensor; the four-potential, A_α, is a covariant vector; the current, J^α, is a vector; the square brackets, [ ], denote antisymmetrization of indices; ∂_α is the partial derivative with respect to the coordinate x^α. 
In Minkowski space coordinates are chosen with respect to an inertial frame; (x^α) = (ct, x, y, z), so that the metric tensor used to raise and lower indices is η_αβ = diag(1, −1, −1, −1). The d'Alembert operator on Minkowski space is ◻ = ∂_α ∂^α, as in the vector formulation. In general spacetimes, the coordinate system x^α is arbitrary, the covariant derivative ∇_α, the Ricci tensor, and raising and lowering of indices are defined by the Lorentzian metric g_αβ, and the d'Alembert operator is defined as ◻ = ∇_α ∇^α. The topological restriction is that the second real cohomology group of the space vanishes (see the differential form formulation for an explanation). This is violated for Minkowski space with a line removed, which can model a (flat) spacetime with a point-like monopole on the complement of the line. In the differential form formulation on arbitrary spacetimes, F is the electromagnetic tensor considered as a 2-form, A is the potential 1-form, J is the current 3-form, d is the exterior derivative, and ★ is the Hodge star on forms, defined (up to its orientation, i.e. its sign) by the Lorentzian metric of spacetime. In the special case of 2-forms such as F, the Hodge star ★ depends on the metric tensor only for its local scale. This means that, as formulated, the differential-form field equations are conformally invariant, but the Lorenz gauge condition breaks conformal invariance. The operator ◻ is the d'Alembert–Laplace–Beltrami operator on 1-forms on an arbitrary Lorentzian spacetime. The topological condition is again that the second real cohomology group is trivial (vanishes). By the isomorphism with the second de Rham cohomology this condition means that every closed 2-form is exact. Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations. Historically, a quaternionic formulation was used. 
Solutions Maxwell's equations are partial differential equations that relate the electric and magnetic fields to each other and to the electric charges and currents. Often, the charges and currents are themselves dependent on the electric and magnetic fields via the Lorentz force equation and the constitutive relations. These all form a set of coupled partial differential equations which are often very difficult to solve: the solutions encompass all the diverse phenomena of classical electromagnetism. Some general remarks follow. As for any differential equation, boundary conditions and initial conditions are necessary for a unique solution. For example, even with no charges and no currents anywhere in spacetime, there are the obvious solutions for which E and B are zero or constant, but there are also non-trivial solutions corresponding to electromagnetic waves. In some cases, Maxwell's equations are solved over the whole of space, and boundary conditions are given as asymptotic limits at infinity. In other cases, Maxwell's equations are solved in a finite region of space, with appropriate conditions on the boundary of that region, for example an artificial absorbing boundary representing the rest of the universe, or periodic boundary conditions, or walls that isolate a small region from the outside world (as with a waveguide or cavity resonator). Jefimenko's equations (or the closely related Liénard–Wiechert potentials) are the explicit solution to Maxwell's equations for the electric and magnetic fields created by any given distribution of charges and currents. It assumes specific initial conditions to obtain the so-called "retarded solution", where the only fields present are the ones created by the charges. However, Jefimenko's equations are unhelpful in situations when the charges and currents are themselves affected by the fields they create. 
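Solving Maxwell's equations in a finite region with boundary conditions, as described above, is commonly done on a grid. A minimal one-dimensional finite-difference time-domain (FDTD) sketch in normalized units (c = 1, "magic" time step dt = dx) is shown below; the grid size, source position, and Gaussian pulse are arbitrary illustrative choices, and the fixed ends act as perfectly reflecting walls:

```python
import numpy as np

# Minimal 1-D FDTD sketch of the source-free Maxwell curl equations in
# normalized units (c = 1), on a staggered (Yee) grid.  Ez[0] and
# Ez[-1] stay zero, acting as perfectly reflecting (PEC) boundaries.
nx, nt = 400, 150
Ez = np.zeros(nx)        # electric field samples
Hy = np.zeros(nx - 1)    # magnetic field samples, staggered half a cell

for t in range(nt):
    Hy += Ez[1:] - Ez[:-1]        # update H from the curl of E
    Ez[1:-1] += Hy[1:] - Hy[:-1]  # update E from the curl of H
    # Soft Gaussian source injected at one grid point
    Ez[nx // 4] += np.exp(-((t - 30.0) / 10.0) ** 2)

# With dt = dx the pulse moves one cell per step: after 150 steps the
# right-going half of the pulse sits roughly 120 cells past x = 100.
right_peak = float(np.abs(Ez[190:250]).max())
```

The injected pulse splits into left- and right-going waves, the discrete analogue of the wave solutions discussed earlier; replacing the reflecting ends with an absorbing boundary would emulate an open region, as the text notes.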
Numerical methods for differential equations can be used to compute approximate solutions of Maxwell's equations when exact solutions are impossible. These include the finite element method and finite-difference time-domain method. For more details, see Computational electromagnetics.

Overdetermination of Maxwell's equations

Maxwell's equations seem overdetermined, in that they involve six unknowns (the three components of E and B) but eight equations (one for each of the two Gauss's laws, three vector components each for Faraday's and Ampère's circuital laws). (The currents and charges are not unknowns, being freely specifiable subject to charge conservation.) This is related to a certain limited kind of redundancy in Maxwell's equations: It can be proven that any system satisfying Faraday's law and Ampère's circuital law automatically also satisfies the two Gauss's laws, as long as the system's initial condition does, and assuming conservation of charge and the nonexistence of magnetic monopoles. This explanation was first introduced by Julius Adams Stratton in 1941. Although it is possible to simply ignore the two Gauss's laws in a numerical algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead to ever-increasing violations of those laws. By introducing dummy variables characterizing these violations, the four equations become not overdetermined after all. The resulting formulation can lead to more accurate algorithms that take all four laws into account. Both identities ∇ ⋅ (∇ × B) ≡ 0 and ∇ ⋅ (∇ × E) ≡ 0, which reduce eight equations to six independent ones, are the true reason for the overdetermination. Equivalently, the overdetermination can be viewed as implying conservation of electric and magnetic charge, as they are required in the derivation described above but implied by the two Gauss's laws. For linear algebraic equations, one can make 'nice' rules to rewrite the equations and unknowns. The equations can be linearly dependent. 
But in differential equations, and especially partial differential equations (PDEs), one needs appropriate boundary conditions, which depend in not so obvious ways on the equations. Even more, if one rewrites them in terms of vector and scalar potential, then the equations are underdetermined because of gauge fixing. Maxwell's equations as the classical limit of QED Maxwell's equations and the Lorentz force law (along with the rest of classical electromagnetism) are extraordinarily successful at explaining and predicting a variety of phenomena. However they do not account for quantum effects and so their domain of applicability is limited. Maxwell's equations are thought of as the classical limit of quantum electrodynamics (QED). Some observed electromagnetic phenomena are incompatible with Maxwell's equations. These include photon–photon scattering and many other phenomena related to photons or virtual photons, "nonclassical light" and quantum entanglement of electromagnetic fields (see Quantum optics). E.g. quantum cryptography cannot be described by Maxwell theory, not even approximately. The approximate nature of Maxwell's equations becomes more and more apparent when going into the extremely strong field regime (see Euler–Heisenberg Lagrangian) or to extremely small distances. Finally, Maxwell's equations cannot explain any phenomenon involving individual photons interacting with quantum matter, such as the photoelectric effect, Planck's law, the Duane–Hunt law, and single-photon light detectors. However, many such phenomena may be approximated using a halfway theory of quantum matter coupled to a classical electromagnetic field, either as external field or with the expected value of the charge current and density on the right hand side of Maxwell's equations. Variations Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are relatively scarce because the standard equations have stood the test of time remarkably well. 
Magnetic monopoles Maxwell's equations posit that there is electric charge, but no magnetic charge (also called magnetic monopoles), in the universe. Indeed, magnetic charge has never been observed, despite extensive searches, and may not exist. If they did exist, both Gauss's law for magnetism and Faraday's law would need to be modified, and the resulting four equations would be fully symmetric under the interchange of electric and magnetic fields. See also Explanatory notes References Further reading Historical publications On Faraday's Lines of Force – 1855/56. Maxwell's first paper (Part 1 & 2) – Compiled by Blaze Labs Research (PDF). On Physical Lines of Force – 1861. Maxwell's 1861 paper describing magnetic lines of force – Predecessor to 1873 Treatise. James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459–512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.) A Dynamical Theory Of The Electromagnetic Field – 1865. Maxwell's 1865 paper describing his 20 equations, link from Google Books. J. Clerk Maxwell (1873), "A Treatise on Electricity and Magnetism": Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 1 – 1873 – Posner Memorial Collection – Carnegie Mellon University. Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 2 – 1873 – Posner Memorial Collection – Carnegie Mellon University. Developments before the theory of relativity Henri Poincaré (1900) "La théorie de Lorentz et le Principe de Réaction" , Archives Néerlandaises, V, 253–278. Henri Poincaré (1902) "La Science et l'Hypothèse" . Henri Poincaré (1905) "Sur la dynamique de l'électron" , Comptes Rendus de l'Académie des Sciences, 140, 1504–1508. Catt, Walton and Davidson. "The History of Displacement Current" . Wireless World, March 1979. External links maxwells-equations.com — An intuitive tutorial of Maxwell's equations. 
The Feynman Lectures on Physics Vol. II Ch. 18: The Maxwell Equations Wikiversity Page on Maxwell's Equations Modern treatments Electromagnetism (ch. 11), B. Crowell, Fullerton College Lecture series: Relativity and electromagnetism, R. Fitzpatrick, University of Texas at Austin Electromagnetic waves from Maxwell's equations on Project PHYSNET. MIT Video Lecture Series (36 × 50 minute lectures) (in .mp4 format) – Electricity and Magnetism Taught by Professor Walter Lewin. Other Nature Milestones: Photons – Milestone 2 (1861) Maxwell's equations Electromagnetism Equations of physics Functions of space and time James Clerk Maxwell Partial differential equations Scientific laws
Maxwell's equations
Physics,Mathematics
6,907
17,555,222
https://en.wikipedia.org/wiki/Cyamemazine
Cyamemazine (Tercian), also known as cyamepromazine, is a typical antipsychotic drug of the phenothiazine class which was introduced by Theraplix in France in 1972 and later in Portugal as well.

Medical use

It is used for the treatment of schizophrenia and, especially, for psychosis-associated anxiety, due to its unique anxiolytic efficacy. It is also used to reduce anxiety associated with benzodiazepine withdrawal syndrome and anxiety in depression with suicidal tendency.

Side effects

The most common side effects and their reported incidences are:

Sedation (20%)
Vertigo (7.9%)
Constipation (4%)
Dyskinesia (4.4%)
Dryness of mouth (5.9%)
Hypotension (7.4%)
Tachycardia (3.2%)

Mechanism

Cyamemazine differs from other phenothiazine neuroleptics in that aside from the usual profile of dopamine, α1-adrenergic, H1, and mACh receptor antagonism, it additionally produces potent blockade of several serotonin receptors, including 5-HT2A, 5-HT2C, and 5-HT7. These actions have been implicated in cyamemazine's anxiolytic effects (5-HT2C) and lack of extrapyramidal side effects (5-HT2A); despite being classified as a typical antipsychotic, it actually behaves like an atypical antipsychotic.

Synthesis

2-Cyanophenothiazine [38642-74-9] (1)
3-Chloro-2-methylpropyl(dimethyl)amine [23349-86-2] (2)

References

Alpha-1 blockers Dimethylamino compounds Dopamine antagonists H1 receptor antagonists M1 receptor antagonists M2 receptor antagonists M3 receptor antagonists M4 receptor antagonists M5 receptor antagonists Nitriles Phenothiazines Serotonin receptor antagonists Typical antipsychotics
Cyamemazine
Chemistry
453
3,852,710
https://en.wikipedia.org/wiki/Extractor%20%28firearms%29
In breechloading firearms, an extractor is an action component that serves to remove spent casings of previously fired cartridges from the chamber, in order to vacate the chamber for loading a fresh round of ammunition. In repeating firearms with moving bolts, the extractor is often one or a set of hook-like flanges on the bolt head that grabs onto the casing's rim, so when the bolt moves rearwards the casing is pulled out of the chamber. It is typically aided by a protruding ejector in the receiver or the bolt, which provides an opposite counter-push that couples with the extractor pull to expel the casing entirely out of the gun. In modern dropping block, break-action (e.g. double-barrel shotguns) and revolver firearms, the extractor is a protrusible piece with flanges on the barrel/cylinder side, which pushes rearwards on the casing's rim and slides it out of the chambers. Some such extractors can push hard and far enough that they completely clear the cases out of the gun, thereby also performing the function of an ejector. Use Extractors are a hallmark feature of repeating firearms and can be found on bolt-action, lever-action, pump-action, semi-automatic, and fully automatic firearms. Extractors are also found on revolvers, removing cases either in succession (as in a fixed-cylinder single-action revolver) or simultaneously (as in a double-action revolver with a swing-out or top-break cylinder). For rimmed cases, the protruded rim serves as the grabbing point from which the extractor works. For rimless cases, the groove at the base serves as the grabbing point from which the extractor works. Not all single-shot firearms have extractors, though many do. Break-action shotguns, double rifles, and combination guns typically have an extractor that pushes out the casings when the action is flexed open. Most modern extractors are forceful enough to completely eject the casing from the gun (i.e. 
integrating the function of an ejector), but some require the user to manually remove spent cartridges. In this situation, the extractor loosens and moves the case out of the chamber just far enough to allow the user to grab and pull out the casing, but not far enough to remove the case entirely from the chamber. This situation is encountered on some single-shot rifles, single-shot pistols (such as the break-action Thompson/Center Contender), and on some break-action single- and double-barrel shotguns. In bolt-action, lever-action, pump-action, semi-automatic, and fully automatic firearms, the extractor typically works in conjunction with a separate ejector to remove completely a fired, empty cartridge case from the weapon. The extractor moves with the bolt to pull the cartridge case rearwards out of the chamber, and at some point, the ejector eccentrically exerts a frontal push (from the case's frame of reference), which torques and "flicks" the case out of a side opening on the receiver known as the ejector port. Another example of extractor exists in the form of a pivoting lever attached to the bolt head that lacks a separate rammer, but works by ejecting first the rear of the cartridge to exit through the ejection port instead of the case head/bottleneck. This type of component does the job of both extractor/rammer but is mostly found on firearms using simple blowback, lever/tilting bolt locking/delay, etc. rather than rotating bolts. History Some very early blowback pistols used ammunition with no rim or extractor groove on the cartridge cases (e.g., 5mm Bergmann), and such pistols, therefore, lacked extractors. The spent case was forced out of the chamber by recoil and was subsequently ejected. As this system did not provide for easy clearance of misfires, it was not very successful, especially for self-defense handguns needing to be cleared quickly and reloaded in the event of a cartridge primer malfunction. 
Nonetheless, there are examples of contemporary modern semi-automatic pistols that do not have extractors even to this day, such as the Beretta Bobcat, the Beretta Model 21A, and clones of the Beretta designs such as the Taurus PT22, that remain commercially successful. These modern pistols typically have flip-up barrels, to permit easy loading without necessarily cycling a slide against a strong recoil spring, making these pistols suitable for use by people with minimal hand strength. The trade-off is that in the event of a cartridge primer malfunction, the gun is rendered useless until the action can be cycled by someone with full hand strength. Still, for someone without the hand strength to handle a semi-automatic firearm with a slide against a strong recoil spring, the trade is often made. An extractor also performs the function of an ejector in revolvers. When the striking force applied to the ejector rod is hard and fast enough, the extractor will typically eject the empty case(s) from the cylinder. Some break-action shotguns are also designed to eject empty shells completely out of the chamber when the barrel is opened. See also Loaded chamber indicator, which sometimes works in conjunction with the extractor References Firearm components
Extractor (firearms)
Technology
1,120
46,641,459
https://en.wikipedia.org/wiki/Sikhae
Sikhae () is a salted fermented food in Korean cuisine prepared with fish and grains. Sikhae is made in the east coast regions of Korea, namely Gwanbuk, Gwandong, and Yeongnam. Ingredients and preparation Righteye flounders are typically used for sikhae. Other commonly used fish include Alaska pollock, chub mackerel, sailfin sandfish, and Japanese anchovy. Sometimes, dried fish such as bugeo (dried Alaska pollock) may also be used to make sikhae. As for grains, cooked foxtail millet is used in the Gwanbuk region, while cooked rice is used in other regions. Sometimes, millet, quinoa, or other grains may also be used. For salting, coarse sea salt is used. Other ingredients include chili powder, garlic, and ginger. Gajami-sikhae The Hamgyŏng Province is famous for its gajami-sikhae (fermented flounder). Righteye flounders—preferably yellow-striped ones harvested during December to early March—are washed, drained, and salted with coarse sea salt for about ten days. The salted fish are then rinsed, cut into bite-size pieces, mixed with cooked foxtail millet and chili powder, and left to age. After four days, thickly julienned and salted radish slices mixed with chili powder are added, and the sikhae can be eaten after another ten days of aging. See also Jeotgal Fermented fish List of fermented foods References Further reading Korean cuisine Fermented foods
Sikhae
Biology
336
40,798,385
https://en.wikipedia.org/wiki/Flash%20reactor
As an extension of the fluidized bed family of separation processes, the flash reactor (FR) (or transport reactor) employs turbulent fluid introduced at high velocities to encourage chemical reactions with feeds, and subsequently achieves separation through the chemical conversion of desired substances to different phases and streams. A flash reactor consists of a main reaction chamber and an outlet for separated products to enter downstream processes. FR vessels provide low gas and solid retention (and hence short reactant contact times) in industrial applications, which gives rise to a high throughput and a pure product, but a less than ideal thermal distribution when compared to other fluidized bed reactors. Due to these properties, as well as its relative simplicity, the FR has potential for use in pre-treatment and post-treatment processes where these strengths are prioritized the most. Various designs of FR (e.g. pipeline FR, centrifugal FR, vessel FR) exist and are currently used in pilot industrial plants for further development. These designs allow for a wide range of current and future applications, including water treatment sterilization, recovery and recycling of steel mill dust, pre-treatment and roasting of metals, chemical looping combustion, as well as hydrogen production from biomass. Properties The vessel flash reactor is a commonly used design and is shown in the figure to the right. Gas is introduced from the bottom at an elevated temperature and high velocity, with a slight drop in velocity experienced at the central part of the vessel. Chamber A is designed to be "egg shaped", with a relatively narrow bottom cross-sectional area and a wide upper cross-sectional area. This configuration increases the fluid's velocity at the chamber's bottom, keeping heavy feed particles in a continuous circulation that promotes a reaction site for separation processes. The method of feed delivery varies depending on its phase.
Solids may be delivered using a conveyor B, whilst fluids are vaporized and sprayed directly into the FR. The feed is then contacted with a continuously circulating hot gas that was introduced in section C. This circulating gas interacts throughout the chamber with the incoming feed, with reactions at the surfaces of the particles generating insoluble salts. The product mixture is then separated through E, where an exhaust vent emits gaseous products. The temperature of this stream is controlled by a coolant emitted by the vessel's spray nozzles D. Design characteristics and heuristics Whilst a variety of applications are available for a flash reactor, they follow a similar general set of operating parameters and heuristics. The following lists the important parameters to consider when designing a FR: Fluid velocity and flow configuration A relatively fast fluid velocity (10–30 m/s) is usually required in FR operations to encourage a continuous particle distribution throughout the reactor's vessel. This minimizes the column's slip velocity (the average velocity difference of different fluids in a pipe), providing a positive impact on heat and mass transfer rates and allowing for the use of smaller diameter vessels, which can lower operating costs. Also, the use of a vertical fluid flow configuration results in little feed particle mixing in the horizontal and vertical directions, thus discouraging particle interactions that would decrease product purity. Solid retention time The use of a fast fluid velocity, as described above, also ensures a short solid feed retention time. This caters for reactions that require a purer product and higher throughput. However, if the operating condition for a certain application requires an extended reaction time, this can be implemented by introducing a cyclical operation. By employing a backflow line, the fluid in the FR can be recirculated with the feed to allow for additional contact time.
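The velocity and retention-time heuristics above can be put into rough numbers. The sketch below assumes plug flow of the gas and a hypothetical 10 m tall vessel (neither figure comes from a specific plant); the 10–30 m/s range is the one quoted for FR operation.

```python
# Rough residence-time estimate for a flash reactor, assuming plug flow
# of the gas through the vessel. The 10 m vessel height is a hypothetical
# illustrative figure; 10-30 m/s is the quoted FR velocity range.

def residence_time_s(height_m: float, gas_velocity_m_s: float) -> float:
    """Gas residence time = flow path length / superficial gas velocity."""
    return height_m / gas_velocity_m_s

for v in (10.0, 20.0, 30.0):
    t = residence_time_s(10.0, v)
    print(f"velocity {v:4.1f} m/s -> residence time {t:.2f} s")
```

Even at the low end of the velocity range, the gas spends on the order of a second in the vessel, which is why FRs suit reactions that tolerate very short contact times.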
Refractory lining material Due to the high temperature requirements of FR operations, a refractory lining is required to reinforce and maintain vessel integrity over time. A refractory lining also serves to isolate the chamber's high temperature from ambient temperature. For example, in the RecoDust process, the FR is lined with two separate refractory materials: aluminum oxide bricks for the combustion chamber, and silicon carbide bricks for the conical outlet part. In addition, the design of the vessel can vary in shape and size (i.e. from pipeline to an egg-like shape), aiming to promote the vertical circulation of the gases and particulate matter. Feed and fluid type To minimize hold-up of material in the reactor, a dense gas with light solids is recommended for the operation of the FR. The solid feed fed into the reactor can only consist of heat-resistant materials and performs best when only a short retention time is required. It is also desirable for the solid feed to be dry, pourable and of well-defined grain size. Flash reactor types Centrifugal flash reactor Unlike other FR designs, the centrifugal flash reactor contacts the powdered feed with a solid heat carrier rather than a gaseous carrier. It involves the use of a heated rotating plate that disperses the feed powder particles for a short duration. Centrifugal forces compress the powder onto the plate's surface, allowing for direct contact between the particles and the hot metal, which enables a higher heat transfer rate. The figure on the right illustrates the TSEFLAR set-up, with the arrows illustrating the direction of the feed traveling from the feed tank, to the metering unit, to the rotating plate, and finally to the cooling water unit. Pipeline flash reactor A pipeline flash reactor (PFR) is a relatively new device developed on the principles of a FR, thus possessing most of its characteristics, functions and properties.
As inferred from its name, the pipeline reactor takes the form of a pipe. Even though it is a new derivative of an older technology, it is being trialed in industrial-size operations. Pipeline flash reactors are used as a tertiary or post-treatment step in waste water treatment, either integrated in new plants or retrofitted in existing developments. The PFR's shape allows it to be easily integrated into new process systems and retrofitted into older existing systems to improve the overall system's efficiency. Due to its shape, modifications and extensions can be easily added to the PFR to accommodate the requirements of certain processes. In the PFR, the reactants come into contact with each other in the pipe rather than in a mixing vessel, as in conventional mixing systems such as a continuously stirred tank reactor. This eliminates the need for extra mixing tanks, which saves space, but as a trade-off the actual reaction site depends on the pipe specifications and the velocity of the fluid. The PFR also eliminates the need for bulky cascade systems or tanks used by other technologies in existing developments, which can reduce maintenance costs. Due to the nature of the device, the reactants processed in PFRs have short retention times; however, adding backflows into the system can increase retention time if required. Unlike conventional mixing systems, a turbulent mixing chamber can be realized without producing pressure drops. Also, PFRs, like most flash reactors, are highly efficient with a small footprint. Applications The versatility of flash/transport reactors makes them suitable for a wide range of quality-sensitive separation processes. The following describes the main applications of the flash reactor; note that most flash reactor applications do not require any post-treatment or pre-treatment systems due to a lack of waste generated.
Ozone injection for water treatment sterilization The pipeline flash reactor is a growing technology with applications in improving the efficiency of processes such as waste-water treatment. A pilot reactor was installed in California as part of the Castaic Lake Water Agency (CLWA) expansion plan. The PFR serves as an auxiliary mixing and contact device to promote ozone absorption in treated water. The PFR used customized nozzles to inject the ozone/water mixture at high velocities back into the bulk of the treated fluid. The use of PFRs, such as the reactor in the CLWA expansion, in water treatment is becoming more popular since the PFR eliminates the need for additional tanks that would have been required for processes such as chlorination. Smaller basins are sufficient to provide the contact time between reactants for microbial inactivation, thus reducing installation footprints in new developments. Also, the reactants leave the PFR quicker due to a shorter retention time; it was found that effective dispersion of the side stream into the bulk fluid was accomplished in as little as 1 second. Treatment of steel mill dust to recover zinc Since 2010, a flash-reactor pilot plant has been operating successfully at the Montanuniversität in Leoben, Austria. Known as the RecoDust process, the setup was designed to recover zinc from the dust collected in steel operations. Whilst tests have proven the functionality of this process, further research and implementation of this process in industry was halted due to the steel industry's uncertain economic outlook. Nonetheless, research has shown great potential for the use of the FR in recovering zinc from steel mill dust, as it provides strong oxidizing and reducing conditions in the reaction vessel, with no waste materials produced.
The large reaction surface area of the dust material input, together with the absence of an inner Zn-cycle and of any pretreatment requirement, demonstrates the effectiveness and efficiency of the RecoDust process. A typical RecoDust process requires temperatures of 1600–1650 °C and a dry, pourable raw material input of well-defined grain size at approximately 300 kg/h. In one experiment, 94% of chlorine, 93% of fluorine and 92% of lead were eliminated from the steel mill dust, with a 97% recovery of zinc. Rapid thermal treatment of powdered materials The use of a rapid thermal heating process followed by rapid quenching/cooling is essential in many chemical engineering fields. For example, the aluminum hydroxide powder (i.e. gibbsite) used for the preparation of an alumina-based catalyst goes through the process of thermochemical activation (TCA) to form a thermally activated product, Al2O3∙nH2O. A centrifugal FR, the TSEFLAR, can be employed to heat the powder to 400–900 K with a plate temperature of 1000 K and a speed of 90–250 turns per minute. Such settings have been shown to produce a product output of 40 dm3/hr with a thermal treatment time of less than 1.5 s. Metallurgy Flash reactors have enormous potential for replacing or assisting existing primary ore oxidation, reduction or other pre-treatment conditioning processes (e.g. calcining) in metal refining. The simplicity and throughput of a flash reactor can provide a cost-effective alternative to existing, expensive rigorous processes. Preheating Preheating of crushed or fine ores can be carried out within a FR, utilising the short retention times to rapidly raise temperatures to the conditions required in later processes. For iron and ilmenite ores, high FR throughputs allow for a substantial overall reduction in operating energy consumption, as well as providing a mixing site with other reactants such as hydrogen for briquetting in the main refining process.
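The RecoDust figures above lend themselves to a back-of-envelope zinc balance. In the sketch below, only the 300 kg/h feed rate and the 97% zinc recovery come from the text; the 20% zinc content of the dust is a made-up assumption for illustration.

```python
# Back-of-envelope RecoDust zinc balance. Only the 300 kg/h feed rate and
# the 97% recovery figure come from the text; the 20% zinc content of the
# steel mill dust is a hypothetical assumption for illustration.

feed_rate_kg_h = 300.0      # raw material input quoted for RecoDust
assumed_zn_fraction = 0.20  # HYPOTHETICAL zinc mass fraction of the dust
zn_recovery = 0.97          # reported zinc recovery

zn_in = feed_rate_kg_h * assumed_zn_fraction   # zinc entering the FR
zn_recovered = zn_in * zn_recovery             # zinc leaving as product
print(f"Zn recovered: {zn_recovered:.1f} kg/h of {zn_in:.1f} kg/h entering")
```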
Roasting The oxidation of crushed particulate ores and the removal of sulfide, arsenic or other contaminants is a crucial separation process in the purification of metals, and can be carried out within a FR. The oxidation of sulfide ores converts small solid sulfide ore particles to oxides and residual sulfur dioxide gas, achieving separation by moving the unwanted sulfides into a gaseous phase. These contaminants can then undergo post-treatment to create useful products from the waste stream, such as sulfuric acid made via the contact process. The equation below displays some examples of roasting oxidation reactions used in refining zinc from sphalerite and other ores: 2MS(s) + 3O2(g) → 2MO(s) + 2SO2(g), where M = Cu, Zn, Pb. In ilmenite roasting to produce synthetic rutile, the magnetic properties of the ore are changed at high temperatures as ferrite compounds within the ore are oxidized. This results in the separation of oxidised ferric compounds from paramagnetic chromite components within the ore at the reactor outlet, where the product may be further refined to synthesize iron or rutile downstream. In roasting gold-bearing sulfide ores, sulfur or arsenic diffusion gradients encourage the migration of gold towards mineral pores. Hence, continual roasting and volatilisation of sulfur and arsenic allows the gold to coalesce at the surface of mineral particles, where it can then be separated efficiently by downstream processes such as leaching. In a FR, the high throughput implies a high particle concentration per unit volume of gas and hence a large contact area for mass transfer. Further, the tolerance of this reaction to short retention times makes this process ideal for industrial roasting. This allows lower-grade feed materials to be utilised, improving both product capacity and quality compared to conventional treatment.
Hence, the simplicity of FR implementation and its high product output optimize the cost of the roasting pre-treatment. Advantages and limitations over competitive processes Future developments Chemical looping combustion Chemical looping combustion (CLC) is a method that uses a combination of CFB and flash reactors to remove nitrogen and impurities from the air before the oxidation of the fuel, using an oxidation and reduction cycle of a metal such as nickel. In CLC, hot air is injected onto a metal that acts as a catalyst and an oxygen carrier, such as Fe2O3 or metallic nickel or copper. A flash reactor is used in the air injection process at the beginning of the loop. The use of flash reactors in this scenario allows the use of lower-grade feed materials and a substantial increase in capacity as well as product purity compared to conventional processing. CLC can theoretically also be used to recover hydrogen from biomass during syngas synthesis, as explained under hydrogen production below. Hydrogen production from biomass Hydrogen production is an emerging technology in the field of renewable energy. As hydrogen demand is expected to grow exponentially in the chemical, hydrocarbon and semiconductor industries, new sources of hydrogen must be found. Flash reactors, in tandem with steam methane reforming and gasification, use waste biomass such as a mixture of cellulose, lignin and other plant material organics to produce hydrogen gas. The most commonly used biomass waste is oil palm waste, a by-product of the palm oil industry. Flash reactors can also be used in the drying section to quickly remove water content from the biomass by injecting high-velocity heated air, which acts as a pretreatment to the actual pyrolysis reaction, which also occurs in a flash reactor. There, after grinding, the biomass is converted with the addition of extreme heat into a mixture of bio-oil, char and ash.
The ash and char produced from this reaction is later removed due to their catalytic properties which would interfere with the steam reformation. References Chemical reactors
Flash reactor
Chemistry,Engineering
3,106
1,637,959
https://en.wikipedia.org/wiki/Skylon%20%28Festival%20of%20Britain%29
The Skylon was a futuristic-looking, slender, vertical, cigar-shaped steel tensegrity structure located by the Thames in London, that gave the illusion of floating above the ground, built in 1951 for the Festival of Britain. A popular joke of the period was that, like the British economy of 1951, "It had no visible means of support". Construction The Skylon was the "Vertical Feature" that was an abiding symbol of the Festival of Britain. It was designed by Hidalgo Moya, Philip Powell and Felix Samuely, and fabricated by Painter Brothers of Hereford, England, on London's South Bank between Westminster Bridge and Hungerford Bridge. The Skylon consisted of a steel latticework frame, pointed at both ends and supported on cables slung between three steel beams. The partially constructed Skylon was rigged vertically, then grew taller in situ. The architects' design was made structurally feasible by the engineer Felix Samuely who, at the time, was a lecturer at the Architectural Association School of Architecture in Bedford Square, Bloomsbury. The base was nearly 15 metres (50 feet) from the ground, with the top nearly 90 metres (300 feet) high. The frame was clad in aluminium louvres lit from within at night. Questions were asked in Parliament regarding the danger to visitors from lightning strikes to the Skylon, and the papers reported that it was duly roped off at one point, in anticipation of a forecast thunderstorm. Name Both the name and form of the Skylon most likely referenced the Trylon feature of the 1939 New York World's Fair. The name was suggested by Mrs A. G. S. Fidler, wife of the chief architect of the Crawley Development Corporation. Moya wrote, "We were unimpressed at first but soon came to accept that, by combining the suggestions of Pylon, Sky and Nylon (a fascinating new material in 1951), it was a wonderfully descriptive name which has lasted forty years, considerably longer than the structure itself."
Incidents A few days before the King and Queen visited the exhibition in May 1951, Skylon was climbed at midnight by Philip Gurdon, a student at Birkbeck College, who attached a University of London Air Squadron scarf near the top. Police constable Frederick Hicks was sent up to retrieve the scarf the following morning. Demolition In spite of its popularity with the public, the £30,000 cost of dismantling and re-erecting the Skylon elsewhere was deemed too much for a government struggling with post-war austerity. Skylon was removed in 1952 when the rest of the exhibition was dismantled, on the orders of Winston Churchill, who saw the Festival and its architectural structures as a symbol of the preceding Labour Government's vision of a new socialist Britain. Speculation as to the Skylon's fate included theories from Jude Kelly, artistic director of the Southbank Centre, that it was thrown into the River Lea in east London, dumped into the Thames, buried under Jubilee Gardens, made into souvenirs or sold as scrap. The base is preserved in the Museum of London and the wind cups are held in a private collection. An investigation was carried out by the Front Row programme on BBC Radio 4 and the result was broadcast on 8 March 2011. It was revealed that the Skylon and the roof of the Dome of Discovery had been sold to George Cohen, Sons and Company scrap metal dealers of Wood Lane, Hammersmith, and dismantled at their works in Bidder Street, Canning Town, on the banks of the River Lea. Some of the metal fragments were then made into a series of commemorative paper-knives and other artefacts. The inscriptions on the paper-knives read "600" and "Made from the aluminium alloy roof sheets which covered the Dome of Discovery at the Festival of Britain, South Bank. The Dome, Skylon and 10 other buildings on the site, were dismantled by George Cohen and Sons and Company Ltd during six months of 1952."
The former location of the Skylon is the riverside promenade between the London Eye and Hungerford Bridge, alongside the Jubilee Gardens (the former site of the Dome of Discovery). 2007 Skylon restaurant In May 2007 D&D London (formerly Conran Restaurants) opened a new restaurant named Skylon on the third floor of the Royal Festival Hall. This restaurant had previously been named The Peoples Palace. See also Dome of Discovery Skylon (spacecraft) Blaw-Knox tower Notes References Articles from The Times between 1951 and 1952 External links Skylon spire may return to London skyline (The Guardian) The Skylon Museum of London colour photo of the Skylon Festival of Britain Tensile architecture Towers completed in 1951 Towers in London Former buildings and structures in the London Borough of Lambeth Demolished buildings and structures in London 1951 in London World's fair architecture in London
Skylon (Festival of Britain)
Technology
980
60,999,736
https://en.wikipedia.org/wiki/Thioreductor
Thioreductor is a Gram-negative, mesophilic, hydrogen-oxidizing, sulfur-reducing and motile genus of bacteria from the phylum Campylobacterota with one known species (Thioreductor micantisoli). Thioreductor micantisoli has been isolated from hydrothermal sediments of the Iheya North field in the Mid-Okinawa Trough in Japan. See also List of bacterial orders List of bacteria genera References Bacteria Bacteria genera Monotypic bacteria genera
Thioreductor
Biology
109
38,569,585
https://en.wikipedia.org/wiki/Molybdenum%20trisulfide
Molybdenum trisulfide is an inorganic compound with the formula MoS3. References Molybdenum(VI) compounds Sulfides
Molybdenum trisulfide
Chemistry
34
72,097,781
https://en.wikipedia.org/wiki/Chiral%20thin-layer%20chromatography
Chiral thin-layer chromatography is a variant of liquid chromatography that is employed for the separation of enantiomers. It requires either a chiral stationary phase or a chiral additive in the mobile phase. The chiral stationary phase can be prepared by mixing a chirally pure reagent, such as an L-amino acid, brucine, or a chiral ligand-exchange reagent, with the silica gel slurry, or by impregnating the TLC plate in a solution of a chiral reagent. The same principle can also be applied by chemically modifying the stationary phase before making the plate, via bonding of the chiral moieties of interest to the reactive groups of the layer material. See also Chiral column chromatography References Chromatography Stereochemistry
Chiral thin-layer chromatography
Physics,Chemistry
175
9,793,263
https://en.wikipedia.org/wiki/Covariance%20function
In probability theory and statistics, the covariance function describes how much two random variables change together (their covariance) with varying spatial or temporal separation. For a random field or stochastic process Z(x) on a domain D, a covariance function C(x, y) gives the covariance of the values of the random field at the two locations x and y: C(x, y) = Cov(Z(x), Z(y)) = E[(Z(x) − E[Z(x)])(Z(y) − E[Z(y)])]. The same C(x, y) is called the autocovariance function in two instances: in time series (to denote exactly the same concept except that x and y refer to locations in time rather than in space), and in multivariate random fields (to refer to the covariance of a variable with itself, as opposed to the cross covariance between two different variables at different locations, Cov(Z(x1), Y(x2))). Admissibility For locations x1, x2, ..., xN ∈ D the variance of every linear combination X = Σi wi Z(xi) can be computed as Var(X) = Σi Σj wi wj C(xi, xj). A function is a valid covariance function if and only if this variance is non-negative for all possible choices of N and weights w1, ..., wN. A function with this property is called positive semidefinite. Simplifications with stationarity In case of a weakly stationary random field, where C(x, y) = C(x + h, y + h) for any lag h, the covariance function can be represented by a one-parameter function Cs(h) which is called a covariogram and also a covariance function. Implicitly the C(xi, xj) can be computed from Cs(h) by: C(xi, xj) = Cs(xi − xj). The positive definiteness of this single-argument version of the covariance function can be checked by Bochner's theorem. Parametric families of covariance functions For a given variance σ², a simple stationary parametric covariance function is the "exponential covariance function" C(d) = σ² exp(−d/V), where V is a scaling parameter (correlation length), and d = d(x,y) is the distance between two points. Sample paths of a Gaussian process with the exponential covariance function are not smooth. The "squared exponential" (or "Gaussian") covariance function C(d) = σ² exp(−(d/V)²) is a stationary covariance function with smooth sample paths.
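The admissibility condition can be checked numerically for a given covariance function. The sketch below (assuming NumPy; the locations and weights are arbitrary illustrative choices) builds the covariance matrix of the exponential covariance function on a few 1-D locations and verifies that it is positive semidefinite:

```python
import numpy as np

# Numerically check admissibility for the exponential covariance function
# C(d) = sigma^2 * exp(-d / V). Locations and weights are arbitrary
# illustrative choices, not from any particular application.

def exponential_cov(x, sigma2=1.0, V=1.0):
    """Covariance matrix C[i, j] = sigma2 * exp(-|x_i - x_j| / V)."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances
    return sigma2 * np.exp(-d / V)

x = np.linspace(0.0, 4.0, 9)
C = exponential_cov(x)

# A valid covariance function yields a positive semidefinite matrix:
# all eigenvalues non-negative (up to floating-point noise) ...
eigvals = np.linalg.eigvalsh(C)
assert eigvals.min() > -1e-10

# ... equivalently, Var(sum_i w_i Z(x_i)) = w^T C w >= 0 for any weights w.
w = np.array([1.0, -2.0, 0.5, 0.0, 1.0, -1.0, 0.0, 2.0, -0.5])
assert w @ C @ w >= 0.0
```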
The Matérn covariance function and rational quadratic covariance function are two parametric families of stationary covariance functions. The Matérn family includes the exponential and squared exponential covariance functions as special cases. See also Autocorrelation function Correlation function Covariance matrix Kriging Positive-definite kernel Random field Stochastic process Variogram References Geostatistics Spatial analysis Covariance and correlation
Covariance function
Physics
555
34,597,671
https://en.wikipedia.org/wiki/International%20Society%20for%20Nanoscale%20Science%2C%20Computation%2C%20and%20Engineering
The International Society for Nanoscale Science, Computation, and Engineering (ISNSCE, pronounced like "essence") is a scientific society specializing in nanotechnology and DNA computing. It was started in 2004 by Nadrian Seeman, founder of the field of DNA nanotechnology. According to the society, its purpose is "to promote the study of the control of the arrangement of the atoms in matter, examine the principles that lead to such control, to develop tools and methods to increase such control, and to investigate the use of these principles for molecular computation, and for engineering on the finest possible scales." ISNSCE sponsors two academic conferences each year: the first is Foundations of Nanoscience (FNANO), and the second is the International Conference on DNA Computing and Molecular Computation (DNA Computing). The FNANO conference has been held in Snowbird, Utah each year in April since 2004, and focuses on molecular self-assembly of nanoscale materials and devices. DNA Computing focuses on biomolecular computing and DNA nanotechnology, and has been held annually since 1995. The proceedings of DNA Computing are published as part of the Lecture Notes in Computer Science book series. Awards ISNSCE sponsors two awards annually. The ISNSCE Nanoscience Prize recognizes research in any area of nanoscience, and has been presented at FNANO each year since 2007. The Tulip Award in DNA Computing is specific to the fields of biomolecular computing and molecular programming, and has been presented at the DNA Computing conference since 2000. ISNSCE also sponsors two student awards for papers presented at the DNA Computing conference each year. The Tulip Award was first given at the sixth DNA Computing conference, in Leiden, the Netherlands, whose botanical garden is known as the birthplace of the tulip culture in the Netherlands. 
In April 2015, ISNSCE established the Robert Dirks Molecular Programming Prize to recognize early-career scientists for molecular programming research. The award was established in memory of Dirks, who was one of the six fatalities of the February 2015 Valhalla train crash. ISNSCE Nanoscience Prize The following are recipients of the ISNSCE Nanoscience Prize: Tulip Award in DNA Computing The following are recipients of the Tulip Award in DNA Computing: See also Kavli Prize in Nanoscience Foresight Institute Feynman Prize in Nanotechnology IEEE Pioneer Award in Nanotechnology References External links International Society for Nanoscale Science, Computation, and Engineering Foundations of Nanoscience conference International Conference on DNA Computing and Molecular Programming International scientific organizations Nanotechnology institutions
International Society for Nanoscale Science, Computation, and Engineering
Materials_science
532
2,035,296
https://en.wikipedia.org/wiki/Battery%20pack
A battery pack is a set of any number of (preferably) identical batteries or individual battery cells. They may be configured in series, parallel or a mixture of both to deliver the desired voltage and current. The term battery pack is often used in reference to cordless tools, radio-controlled hobby toys, and battery electric vehicles. Components of battery packs include the individual batteries or cells, and the interconnects which provide electrical conductivity between them. Rechargeable battery packs often contain voltage and temperature sensors, which the battery charger uses to detect the end of charging. Interconnects are also found in batteries, as they are the part which connects each cell, though batteries are most often only arranged in series strings. When a pack contains groups of cells in parallel, there are differing wiring configurations which take into consideration the electrical balance of the circuit. Battery management systems are sometimes used for balancing cells, keeping their voltages below a maximum value during charging so as to allow the weaker batteries to become fully charged, bringing the whole pack back into balance. Active balancing can also be performed by battery balancer devices which can shuttle energy from strong cells to weaker ones in real time for better balance. A well-balanced pack lasts longer and delivers better performance. For an inline package, cells are selected and stacked with solder in between them. The cells are pressed together and a current pulse generates heat to solder them together and to weld all connections internal to the cell. Calculating state of charge SOC, or state of charge, is the battery equivalent of the fuel quantity remaining in a tank. SOC cannot be determined by a simple voltage measurement, because the terminal voltage of a battery may stay substantially constant until it is completely discharged.
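Since voltage alone cannot give the SOC, most estimators keep a running count of charge in and out of the pack (coulomb counting) against its rated capacity. A minimal sketch of that bookkeeping, with made-up numbers; real battery management code adds temperature, ageing and discharge-rate corrections on top:

```python
# Minimal coulomb-counting sketch for SOC estimation: integrate current
# against the pack's rated capacity. All numbers here are illustrative;
# a real BMS also corrects for temperature, ageing and discharge rate.

def update_soc(soc: float, current_a: float, dt_h: float,
               capacity_ah: float) -> float:
    """current_a > 0 discharges the pack; result is clamped to [0, 1]."""
    soc -= (current_a * dt_h) / capacity_ah
    return min(1.0, max(0.0, soc))

soc = 1.0                       # start fully charged
for _ in range(4):              # draw 2 A for four half-hour steps
    soc = update_soc(soc, 2.0, 0.5, capacity_ah=10.0)
print(f"SOC after 2 h at 2 A from a 10 Ah pack: {soc:.0%}")  # 60%
```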
In some types of battery, electrolyte specific gravity may be related to state of charge but this is not measurable on typical battery pack cells, and is not related to state of charge on most battery types. Most SOC methods take into account voltage and current as well as temperature and other aspects of the discharge and charge process to in essence count up or down within a pre-defined capacity of a pack. More complex state of charge estimation systems take into account the Peukert effect which relates the capacity of the battery to the discharge rate. Advantages An advantage of a battery pack is the ease with which it can be swapped into or out of a device. This allows multiple packs to deliver extended runtimes, freeing up the device for continued use while charging the removed pack separately. Another advantage is the flexibility of their design and implementation, allowing the use of cheaper high-production cells or batteries to be combined into a pack for nearly any application. At the end of product life, batteries can be removed and recycled separately, reducing the total volume of hazardous waste. Disadvantages Packs are often simpler for end users to repair or tamper with than a sealed non-serviceable battery or cell. Though some might consider this an advantage it is important to take safety precautions when servicing a battery pack as they pose a danger as potential chemical, electrical, and fire risks. Power bank A power bank is a portable device consisting of a battery, a charger to interface battery with charging power source and an output interface to provide desired output voltage. Power banks are made in various sizes and typically based on lithium-ion batteries. A power bank contains battery cells and a voltage converter circuitry. The internal DC-DC converter manages battery charging and converts the battery stack's voltage to the desired output voltage. 
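Because the advertised mAh is quoted at the internal cell voltage (typically 3.7 V) while the output is 5 V, the charge theoretically deliverable at the output scales by the voltage ratio, before converter losses are even counted. A quick sketch of that arithmetic (the 26,800 mAh rating is used purely as a worked example):

```python
# Convert a power bank's advertised cell-level mAh rating to the
# theoretical mAh deliverable at the USB output voltage. Energy is
# conserved, so mAh scales by the ratio of cell voltage to output voltage.

def theoretical_output_mah(cell_mah: float, cell_v: float = 3.7,
                           out_v: float = 5.0) -> float:
    """Theoretical deliverable charge at out_v, ignoring converter losses."""
    return cell_mah * cell_v / out_v

# Example: a 26,800 mAh (at 3.7 V) rating; actual delivery is lower still
# because of converter losses and internal resistance.
print(f"theoretical 5 V output: {theoretical_output_mah(26_800):.0f} mAh")  # 19832
```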
The advertised capacity on the product is in many instances based on the capacity of the internal cells; however, the theoretical mAh available at the output depends on the output voltage. The conversion circuit has some energy losses, so the actual output is less than theoretical. The theoretical mAh of a 3.7 V battery power bank with 5 V output is 74% of the battery mAh rating. The RavPower RP-PB41 evaluated in the journal, with an advertised capacity of 26,800 mAh, has a theoretical capacity of 19,832 mAh, although the delivered capacity was 15,682 mAh, 78% of the theoretical value. The authors attributed the difference to internal resistance in the battery and converter losses. The circuit board can contain additional features such as over-discharge protection, automatic shut-off and charge level indication LEDs. Power banks may be able to detect a connection and power on automatically. If the current load is under a model-specific threshold for a specific duration, a power bank may power down automatically. The average power bank in 2023 transferred around two-thirds (67%) of the power bank's battery energy into the battery of the device being charged. Some power banks are able to deliver power wirelessly, some are equipped with an LED flashlight for casual near-distance illumination when necessary, and some have a pass-through charging feature which allows providing power through their USB ports while being charged themselves simultaneously. Some larger power banks have DC connectors (or barrel connectors) for higher power demands such as laptop computers. Battery cases Battery cases are small power banks attached to the rear side of a mobile phone like a case. Power may be delivered through the USB charging ports, or wirelessly. Battery cases also exist in the form of a camera grip accessory, as was the case for the Nokia Lumia 1020. For mobile phones with removable rear cover, extended batteries exist.
These are larger internal batteries attached with a dedicated, more spacious rear cover replacing the default one. A disadvantage is incompatibility with other phone cases while attached. Prong cases included fold-out prongs integrated into the case itself. Rental/exchange In some parts of the world, there are kiosk-based power bank rental or subscription services. Customers pay for the use of a power bank for a specified period of time and return the depleted power bank to the kiosk. In one case, a brand called FuelRod was sold at an elevated price at various amusement parks with the understanding that buyers would get free exchanges at participating locations. FuelRod moved to discontinue the free exchange in 2019, which resulted in a class-action lawsuit that reached a settlement under which early adopters were grandfathered into free exchange privileges. Air travel restrictions Per US Federal Aviation Administration regulations, power banks in the United States are not allowed in checked-in luggage. Power banks up to 100 Wh are allowed as carry-on, and those from 101 Wh to 160 Wh are allowed with airline approval. See also Battery balancer Battery charger Battery management system Battery monitoring Battery regulator List of battery types Smart battery References Automotive electrics Battery types Portable electronics
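The cell-to-output capacity arithmetic described earlier can be sketched in Python. The function name and the efficiency parameter are illustrative, not taken from any power bank specification:

```python
def theoretical_output_mah(cell_mah, cell_voltage=3.7, output_voltage=5.0,
                           efficiency=1.0):
    """Capacity theoretically deliverable at the output voltage.

    Energy is conserved across the DC-DC converter: mAh x V on the cell
    side equals mAh x V on the output side, scaled by converter efficiency.
    """
    return cell_mah * cell_voltage / output_voltage * efficiency

# A 3.7 V pack with 5 V output delivers 74% of the cell mAh rating:
print(round(theoretical_output_mah(26800)))      # 19832, as quoted above

# Folding in losses of roughly 78% of theoretical, as reported above,
# gives an approximation of the measured delivered capacity:
print(round(theoretical_output_mah(26800, efficiency=0.78)))
```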
Battery pack
Engineering
1,348
28,443,310
https://en.wikipedia.org/wiki/NAP%20of%20the%20Americas
Network Access Point (NAP) of the Americas (called MI1 by Equinix) is a massive, six-story, 750,000 square foot data center and Internet exchange point in Miami, Florida, operated by Equinix. It is one of the world's largest data centers and among the 10 most interconnected data centers in the United States. It is located at 50 NE 9th Street in downtown Miami. The facility is home to 160 network carriers and is a pathway for data traffic from the Caribbean and South and Central America to more than 150 countries. It is also home to one of the K-roots of the Domain Name System. The NAP of the Americas is built above sea level and is designed to withstand Category 5 hurricane winds. It provides access to 15 subsea cable landings and serves as a relay for the U.S. Department of State's Diplomatic Telecommunications Service. History The NAP of the Americas was built to serve as a major hub for network traffic between the United States and Latin America. It was also known as Verizon Terremark and was operated by Terremark Worldwide (TRMK), a subsidiary of Verizon Communications. In 2016, the building was purchased by Equinix, Inc. for $3.6 billion. Tenants The center is Equinix Miami International Business Exchange (IBX) data facility (Equinix MI1 IBX), offering direct peering access to more than 600 Equinix business and enterprise customers, including more than 160 enterprises and 135 networks, cloud and IT services. Peering networks include AWS, Microsoft Azure, Google Cloud Platform, IBM Cloud, Oracle, Voxility, INAP. See also List of Internet exchange points References External links NAP of the Americas Data centers Internet exchange points in the United States Buildings and structures in Miami Buildings and structures completed in 2001 Telecommunications buildings in the United States Verizon 2001 establishments in Florida
NAP of the Americas
Technology
387
12,256,611
https://en.wikipedia.org/wiki/Chlorphenoxamine
Chlorphenoxamine (Phenoxene) is an antihistamine and anticholinergic used as an antipruritic and antiparkinsonian agent. It is an analog of diphenhydramine. References H1 receptor antagonists 4-Chlorophenyl compounds Ethers Dimethylamino compounds
Chlorphenoxamine
Chemistry
72
61,357,438
https://en.wikipedia.org/wiki/CSX%2B%20Indic%20character%20set
The CSX+ Indic character set, or the Classical Sanskrit eXtended Plus Indic Character Set, is used by LaTeX to represent text used in the Romanization of Sanskrit. It is an extension of the CSX Indic character set (but removes ÿ and the punctuation marks ¢, £, ¥, «, and »), which in turn is an extension of the CS Indic character set, and is based on Code Page 437. It fixes an issue with Windows programs by moving á from code point 160 (0xA0), which is problematic because it displays as a regular space on Windows, to code point 158 (0x9E). Code page layout References Character encoding DOS code pages
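The code-point move described above can be illustrated with a small Python sketch. This is not a full CSX+ codec; only the single remapped character is modeled (the removal of ÿ and the punctuation marks is not shown):

```python
# CSX inherits 'á' at code point 160 (0xA0) from Code Page 437.
csx_table = {0xA0: 'á'}

# CSX+ moves 'á' to code point 158 (0x9E), avoiding the Windows
# behavior at 0xA0 described above.
csxplus_table = dict(csx_table)
csxplus_table[0x9E] = csxplus_table.pop(0xA0)

print(csxplus_table)  # {158: 'á'}
```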
CSX+ Indic character set
Technology
150
21,647,836
https://en.wikipedia.org/wiki/Federal%20Engineer%20of%20the%20Year%20Award
The Federal Engineer of the Year Award is an annual award sponsored by the National Society of Professional Engineers and the Professional Engineers in Government advocacy group of the NSPE. The 2009 Awards were the 30th annual award. The award recognizes technical excellence, publications, leadership, and community service. Each major subgroup of the federal government that employs 50 or more professional engineers selects and nominates an agency winner. From these a list of the top ten are selected and announced. The ultimate winner of the Federal Engineer of the Year is announced at a luncheon award ceremony during National Engineers Week. See also List of engineering awards References External links http://www.nspe.org/InterestGroups/PEG/Resources/AwardsAndScholarships/feya.html Engineering awards American science and technology awards
Federal Engineer of the Year Award
Technology
160
4,506,804
https://en.wikipedia.org/wiki/Marshall%20Islands%20stick%20chart
Stick charts were made and used by the Marshallese to navigate the Pacific Ocean by canoe off the coast of the Marshall Islands. The charts represented major ocean swell patterns and the ways the islands disrupted those patterns, typically determined by sensing disruptions in ocean swells by islanders during sea navigation. Most stick charts were made from the midribs of coconut fronds that were tied together to form an open framework. Island locations were represented by shells tied to the framework, or by the lashed junction of two or more sticks. The threads represented prevailing ocean surface wave-crests and directions they took as they approached islands and met other similar wave-crests formed by the ebb and flow of breakers. Individual charts varied so much in form and interpretation that the individual navigator who made the chart was the only person who could fully interpret and use it. The use of stick charts ended after World War II when new electronic technologies made navigation more accessible and travel among islands by canoe lessened. Significance to the history of cartography The stick charts are a significant contribution to the history of cartography because they represent a system of mapping ocean swells, which was never before accomplished. They also use different materials from those common in other parts of the world. They are an indication that ancient maps may have looked very different, and encoded different features from the earth, from the maps that we use today. The charts, unlike traditional maps, were studied and memorized prior to a voyage and were not consulted during a trip, as compared to traditional navigation techniques where consultation of a map is frequent and points and courses are plotted out both before and during navigation. Marshallese navigators used their senses and memory to guide them on voyages by crouching down or lying prone in the canoe to feel how the canoe was being pitched and rolled by underlying swells. 
Ocean swells recognized by Marshallese The Marshallese recognized four main ocean swells: the rilib, kaelib, bungdockerik and bundockeing. Navigators focused on effects of islands in blocking swells and generating counterswells to some degree, but they mainly concentrated on refraction of swells as they came in contact with undersea slopes of islands and the bending of swells around islands as they interacted with swells coming from opposite directions. The four types of ocean swells were represented in many stick charts by curved sticks and threads. Rilib swells Rilib swells are the strongest of the four ocean swells and were referred to as "backbone" swells. They are generated by the northeast trade winds and are present during the entire year, even when they do not penetrate as far south as the Marshall Islands. Marshallese considered the rilib swells to come from the east, even though the angle of the winds as well as the impact of the ocean currents varied the swell direction. Kaelib swells The kaelib swell is weaker than the rilib and could only be detected by knowledgeable persons, but it is also present year round. Bungdockerik swells The bungdockerik is present year round as well and arises in the southwest. This swell is often as strong as the rilib in the southern islands. Bundockeing swells The bundockeing swell is the weakest of the four swells, and is mainly felt in the northern islands. Stick chart categories The stick charts typically fall into three main categories: mattang, meddo (or medo), and rebbelib (or rebbelith). Mattang charts The mattang stick chart was an abstract chart used for instruction and for teaching principles of reading how islands disrupt swells. Meddo charts The meddo chart showed actual islands and their relative or exact positions. 
Meddo charts also showed the direction of main deep ocean swells, the way the swells curved around specific islands and intersected with one another, and the distance from a canoe at which an island could be detected. The meddo chart portrayed only a section of one of the two main island chains. Rebbelib charts Rebbelib charts portrayed the same information as a meddo chart, but the difference lies in the inclusiveness of the islands. Rebbelib charts, unlike meddo charts, included all or most of one or both chains of islands. Knowledge transfer Stick charts were not made and used by all Marshall Islanders. Only a select few rulers knew the method of making the maps, and the knowledge was only passed on from father to son. So that others could utilize the expertise of the navigator, fifteen or more canoes sailed together in a squadron, accompanied by a leader pilot skilled in use of the charts. It was not until 1862 that this unique piloting system was revealed in a public notice prepared by a resident missionary, and not until the 1890s that it was comprehensively described by a naval officer, Captain Winkler of the Imperial German Navy. Winkler had been a ship's commander stationed in 1896 in the Marshall Islands which, during that period, were under German rule; he subsequently described the system in an 1898 publication. Winkler became so intrigued by the stick charts that he made a major effort to determine the navigational principles behind them and 'convinced' the navigators to share how the stick charts were used. See also Weriyeng Ammassalik wooden maps Notes References Ascher, Marcia, Models and maps from the Marshall Islands: A case in ethnomathematics, Historia Mathematica, 22 (1995), 347-370. Mathematics Elsewhere: An Exploration of Ideas across Cultures, Princeton University Press, 2002, pp. 89, 95-97, 101-125. Bagrow, L. History of Cartography. Second Edition. Chicago, Precedent Publishing, Inc., 1966. Genz, J., Aucan, J., Merrifield, M. 
, Finney, B., Joel, K., and Kelen, Alson, Wave Navigation in the Marshall Islands, Oceanography 22(2009), No. 2., 234–245. Woodward, D. and Malcolm Lewis, G. The History of Cartography: Cartography in the Traditional African, American, Arctic, Australian, and Pacific Societies. Volume Two, Book Three. The University of Chicago Press, Chicago and London, 1998. External links Dirk HR Spennemann. Traditional and Nineteenth Century Communication Patterns In the Marshall Islands, article includes extensive explanations of stick charts Polynesian Stick Charts, includes many photographs Micronesian Stick Charts, diagrams and photographs. Archived. Marshall Islands stamps with stick charts, and explanations Marshall Islands Guide A short video on navigation by ocean wave refraction and stick charts by NOAA. Reddit posts showing stick charts: 1, 2 RESOLVING AMBIVALENCE IN MARSHALLESE NAVIGATION:RELEARNING, REINTERPRETING, AND REVIVING THE “STICK CHART” WAVE MODELS, Joseph H. Genz, 2016 Cartography by country Culture of the Marshall Islands Traditional knowledge Plant products
Marshall Islands stick chart
Chemistry
1,410
6,144,651
https://en.wikipedia.org/wiki/Zeta%20Herculis
Zeta Herculis, Latinized from ζ Herculis, is a multiple star system in the constellation Hercules. It has a combined apparent visual magnitude of 2.81, which is readily visible to the naked eye. Parallax measurements put it at a distance of about 35 light-years from Earth. The primary member is a subgiant star that is somewhat larger than the Sun and has just begun to evolve away from the main sequence as the supply of hydrogen at its core becomes exhausted. It is orbited by a smaller companion star at a mean angular separation of 1.5 arcseconds, which corresponds to a physical separation of about 15 astronomical units. This distance is large enough that the two stars do not have a significant tidal effect on each other. The stars orbit each other over a period of 34.45 years, with a semi-major axis of 1.33" and an eccentricity of 0.46. Component A has a stellar classification of F9 IV. It has about 2.7 times the radius of the Sun and 1.45 times the Sun's mass. This star is radiating more than seven times the luminosity of the Sun at an effective temperature of 5,760 K. The secondary component (Component B) is about the same size and mass as the Sun, with an effective temperature of 5,300 K. Both stars are rotating slowly. There may be a faint third member of this system, although little is known about it. The dual nature of this system was reported by F. G. W. Struve in 1826. The magnitude difference between the A-B pair is 1.52 ± 0.04 magnitudes (at 700 nm). Two astrometric studies have failed to detect a third component to the A-B binary. This system forms part of the Zeta Herculis moving group of stars. This group includes: φ2 Pavonis, ζ Reticuli, 1 Hydrae, Gl 456, Gl 678, and Gl 9079. 
References External links F-type subgiants G-type main-sequence stars Herculis, Zeta Binary stars Zeta Herculis Moving Group Hercules (constellation) Herculis, Zeta Durchmusterung objects Herculis, 040 0635 150680 081693 6212
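The orbital elements quoted above can be cross-checked with Kepler's third law in solar units (M_total = a³/P², with a in astronomical units and P in years). The distance used below is an assumed round value for illustration, not a figure from this article:

```python
# Assumed distance in parsecs (an illustrative value, not from the article).
distance_pc = 11.0
angular_a_arcsec = 1.33     # semi-major axis of the visual orbit, from above
period_yr = 34.45           # orbital period, from above

# Small-angle relation: separation in AU = (arcseconds) x (parsecs).
a_au = angular_a_arcsec * distance_pc

# Kepler's third law in solar units gives the total system mass.
m_total = a_au**3 / period_yr**2
print(f"a = {a_au:.1f} AU, total mass = {m_total:.1f} solar masses")
```

The result, roughly 2.6 solar masses, is consistent with the component masses quoted above (about 1.45 for the primary plus roughly 1 for the Sun-like secondary).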
Zeta Herculis
Astronomy
497
1,663,543
https://en.wikipedia.org/wiki/Multimap
In computer science, a multimap (sometimes also multihash, multidict or multidictionary) is a generalization of a map or associative array abstract data type in which more than one value may be associated with and returned for a given key. Both map and multimap are particular cases of containers (for example, see C++ Standard Template Library containers). Often the multimap is implemented as a map with lists or sets as the map values. Examples In a student enrollment system, where students may be enrolled in multiple classes simultaneously, there might be an association for each enrollment of a student in a course, where the key is the student ID and the value is the course ID. If a student is enrolled in three courses, there will be three associations containing the same key. The index of a book may report any number of references for a given index term, and thus may be coded as a multimap from index terms to any number of reference locations or pages. Querystrings may have multiple values associated with a single field. This is commonly generated when a web form allows multiple check boxes or selections to be chosen in response to a single form element. Language support C++ C++'s Standard Template Library provides the multimap container for the sorted multimap using a self-balancing binary search tree, and SGI's STL extension provides the hash_multimap container, which implements a multimap using a hash table. As of C++11, the Standard Template Library provides the unordered_multimap for the unordered multimap. Dart Quiver provides a Multimap for Dart. Java Apache Commons Collections provides a MultiMap interface for Java. It also provides a MultiValueMap implementing class that makes a MultiMap out of a Map object and a type of Collection. Google Guava provides a Multimap interface and implementations of it. Kotlin Kotlin does not have explicit support for multimaps, but can implement them using Maps with containers for the value type. E.g. 
a Map<User, List<Book>> can associate each User with a list of Books. Python Python provides a collections.defaultdict class that can be used to create a multimap. The user can instantiate the class as collections.defaultdict(list). OCaml OCaml's standard library module Hashtbl implements a hash table where it is possible to store multiple values for a key. Scala The Scala programming language's API also provides Multimap and implementations of it. See also Multiset for the case where the same item can appear several times References Associative arrays Abstract data types
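The Python approach mentioned above can be shown concretely. The enrollment data mirrors the student/course example earlier in the article; the identifiers are illustrative:

```python
from collections import defaultdict

# A multimap implemented as a map whose values are lists:
# student ID -> list of course IDs.
enrollments = defaultdict(list)
enrollments["s42"].append("MATH101")
enrollments["s42"].append("CS200")
enrollments["s42"].append("PHYS110")
enrollments["s07"].append("CS200")

print(enrollments["s42"])   # ['MATH101', 'CS200', 'PHYS110']
print(len(enrollments))     # 2 distinct keys, 4 associations in total
```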
Multimap
Mathematics
561
24,145,421
https://en.wikipedia.org/wiki/C24H40O4
{{DISPLAYTITLE:C24H40O4}} The molecular formula C24H40O4 (molar mass: 392.57 g/mol, exact mass: 392.29266) may refer to: Chenodeoxycholic acid, a bile acid Deoxycholic acid, a bile acid Hyodeoxycholic acid, a bile acid Ursodiol, a bile acid
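The quoted molar mass can be recomputed from conventional IUPAC atomic weights, as a quick consistency check:

```python
# Conventional atomic weights (g/mol) for carbon, hydrogen, and oxygen.
atomic_weight = {"C": 12.0107, "H": 1.00794, "O": 15.9994}
formula = {"C": 24, "H": 40, "O": 4}   # C24H40O4

molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
print(f"{molar_mass:.2f} g/mol")       # 392.57 g/mol, matching the value above
```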
C24H40O4
Chemistry
90
20,321,174
https://en.wikipedia.org/wiki/Marvin%27s%20Marvelous%20Mechanical%20Museum
Marvin's Marvelous Mechanical Museum is an arcade and museum currently located in Farmington Hills, Michigan. It features a large collection of vintage arcade games and other coin-operated entertainment machines, most of which are functional and can be operated by visitors. Exhibits include, for example, the gypsy Fortune teller machine that used to feature in many carnival sideshows. As of January 2025, Marvin's is temporarily closed for relocation; it is expected to reopen in its new location in West Bloomfield, Michigan in the summer of 2025. History Marvin's Marvelous Mechanical Emporium was founded by Marvin Yagoda, a pharmacist who collected, restored, and sold antique arcade machines. Yagoda initially housed his collections in his garage, but at the suggestion of his wife, he installed some of his machines in the food court of the Tally Hall shopping center in Farmington Hills, Michigan in the early 1980s. He later rented a space in the mall until it closed in 1988; and reopened after it was rebuilt as Orchard Lake Plaza (now known as Hunter's Square) in 1990. Yagoda became a recognized expert in the field of mechanical and electrical game apparatus; he has been involved in appraisal of such items for the television series American Pickers. He died on January 8, 2017, at the age of 78, after which his son, Jeremy, assumed control of the museum. Relocation In November 2023, RPT Realty, then-owner of Hunter's Square, proposed a major redevelopment of the center, which would involve demolishing its northern building, including Marvin's, to construct a Meijer Grocery store. Jeremy Yagoda vowed to fight "tooth and nail" against the proposal, and an online petition opposing the plan gathered more than 50,000 signatures on Change.org. The redevelopment plan was unanimously approved by the Farmington Hills Planning Commission during its November 16, 2023 meeting, at which dozens of supporters of the museum spoke in opposition to the plan. 
The Farmington Hills City Council unanimously approved the redevelopment plan on February 12, 2024. Yagoda stated that he would continue discussions with RPT to remain at Hunter's Square as part of its redevelopment, or seek a new location for the museum. RPT sold the center to a local developer in April 2024. Yagoda announced in December 2024 that the museum had secured a new location at the Orchard Mall, to the north in neighboring West Bloomfield. The new space has an area of , more than double the size of the museum's current space. The Hunter's Square location closed permanently on January 5, 2025, and Yagoda expects to reopen in the new location in the summer of 2025. Collection Among the collection is P. T. Barnum's replica of the Cardiff Giant, one of Sing Sing Prison's electric chairs in which 30 people died, and an automaton "food inspector" set up to continuously vomit into a pile of milk bottles. There are also various modern coin-op arcade games, and a prize counter to exchange tickets. The museum also hosts a collection of Chuck E. Cheese’s Pizza Time Theatre animatronics with a complete set of the Pizza Time Players (excluding Chuck E.) with one of the guest stars Madame Oink and the clapper board. In popular culture In 2005, Tally Hall, a band from nearby Ann Arbor, titled an album after the museum. See also List of magic museums Notes External links Marvin's Marvelous Mechanical Museum – official site Marvin's Marvelous Mechanical Museum June 30, 2009 at Wayback Machine Amusement museums in the United States Animatronic attractions Commercial machines Farmington Hills, Michigan Museums in Oakland County, Michigan
Marvin's Marvelous Mechanical Museum
Physics,Technology
764
65,310,008
https://en.wikipedia.org/wiki/GRC-6211
GRC-6211 is a drug developed by Glenmark Pharmaceuticals which acts as a potent and selective antagonist for the TRPV1 receptor. It has analgesic and antiinflammatory effects and reached Phase IIb human trials, but was ultimately discontinued from development as a medicine, though it continues to have applications in scientific research. References Ureas Benzopyrans Fluoroarenes Cyclobutanes Spiro compounds Abandoned drugs Transient receptor potential channel modulators
GRC-6211
Chemistry
101
77,051,129
https://en.wikipedia.org/wiki/MRC%200406-244
MRC 0406-244, also known as TN J0408-2418, is a radio galaxy producing an astrophysical jet, located in the constellation of Eridanus. At its redshift of 2.44, it is roughly ten billion light years from Earth. Characteristics MRC 0406-244 is one of the most powerful radio galaxies known to date; it was studied extensively by the MRC/1 Jy radio source survey. MRC 0406-244 is also classified as a Seyfert type 2 galaxy, with a complex morphology featuring several components, including a point source with extended nebular and continuum emission. Moreover, it is an ultra-steep spectrum (USS) radio source. Host galaxy The host galaxy of MRC 0406-244 is a dusty, obscured, massive early-type galaxy with a stellar mass of M⋆ ~ 10¹¹ M⊙. The galaxy also has a smooth stellar disc with a size in the range of 3.5 to 8.2 kpc, similar to coeval star-forming galaxies. Galaxy merger In recent Hubble Space Telescope (HST) images, researchers have found several bright clumps in a figure-of-eight shape elongated along the radio jet axis of MRC 0406–244. Further images reveal a spatially resolved continuum associated with the southeastern component that is aligned with the radio axis, with a complex morphology including a double nucleus and tidal tail features. This suggests a tidal origin, meaning a recent galaxy merger has taken place. The merger is believed to play a dominant role in fueling the galaxy's supermassive black hole. The black hole at the center of MRC 0406–244 appears to be accreting at a rate of hundreds to thousands of solar masses per year, with its luminosity rising accordingly, turning it into a quasar. Observation of MRC 0406-244 According to HST and ground-based multiwavelength observations of MRC 0406–244, researchers found two distinct components aligned with its radio source axis. 
One of them has the red optical-to-IR and blue ultraviolet colors characteristic of radio galaxies, while the other is red in all colors. In the Lyα image, a nebula in MRC 0406-244 is found to be 3'' × 5'' in extent, confined to a range of azimuth angles of 60° to either side of the nucleus. Another component is also found extending northwest, with morphology similar to that seen in the HST images. Researchers also found that the long axis of the Lyman-alpha emission in MRC 0406–244, at 130° east of north, aligns with the radio source. The total Lyα flux is 1.2 × 10⁻¹⁴ ergs s⁻¹ cm⁻², equivalent to a luminosity of 1 × 10⁴⁵ ergs s⁻¹. About 40% of this flux is within the 25 × 35 area measured in the spectrum. The peak surface brightness of the Lyα emission is found to be 9.0 × 10⁻¹⁶ ergs s⁻¹ cm⁻² arcsec⁻², and emission is detected down to a level of 1.5 × 10⁻¹⁶ ergs s⁻¹ cm⁻² arcsec⁻². Radio properties The triple radio source in MRC 0406–244 is among the most powerful radio sources known. Researchers noted that it reaches a luminosity of 1.2 × 10³⁶ ergs s⁻¹ Hz⁻¹ at 1 GHz and radiates 6.3 × 10⁴⁵ ergs s⁻¹ between 100 MHz and 100 GHz. The source extends over 84 kpc between the hot spots of MRC 0406–244, and its radio spectrum is found to be steep and strongly curved. The spectral index of the radio source changes from 1.21 at 1 GHz to 1.49 at 16 GHz in the rest frame, based on fits to flux densities at several frequencies between 408 MHz and 8.44 GHz. The radio core has a spectral index of 0.80 ± 0.18 (between 4.7 and 8.4 GHz, observed) and contributes about 2% of the total emission at 15 GHz. The two radio lobes in MRC 0406-244 are notable for their degree of asymmetry. Given the asymmetries in arm length and the spectral indices in powerful sources, they are related to both intrinsic asymmetries and light travel-time differences. 
The correlation between emission-line and radio asymmetries in other 3CR radio galaxies suggests the dominance of environmental effects, in which cosmic dust plays a main role in the correlation of optical and radio asymmetries. Gas and dust beyond MRC 0406-244 Further observations found that nebular and continuum ultraviolet light extends up to 25 kpc from the nucleus of the galaxy, six times the length of the minor axis, in the direction of the radio jets. Both extended components are shown to contribute to the alignment effect, given that their distribution is biconical along the jet axis. Dust is known to exist at similarly large radii, given that the diffuse blue continuum is active galactic nucleus light scattered by dust, or light from dust-reddened young stars. The dust most probably originated in the galaxy; hence the outflow of both metal-rich gas and dust is important for enriching the intracluster medium. The nebular gas can be driven out of MRC 0406–244 in two ways: either through expanding starburst bubbles, or by overpressurized cocoons of material surrounding the expanding radio jets. The dust grains are likely destroyed when interacting with the powerful radio jets, so they are instead transported by radiation pressure or supernova-induced winds. Dust in the intergalactic medium had been theorized for a long time, but was detected only recently. A study published by Ménard et al. (2010) suggests that half of the cosmic dust lies within galaxies, while the other half is expelled to the intergalactic medium. The extended light in MRC 0406−244 gives researchers a rare opportunity to observe dust in the midst of leaving its galaxy. As the host galaxy of MRC 0406−244 does not differ in character from other massive star-forming galaxies, it is possible that many highly star-forming galaxies at a similar redshift also expel similar amounts of dust in bipolar outflows. 
The dust remains invisible unless it is illuminated by sufficiently bright active galactic nuclei or by feedback-driven star formation. If dusty outflows such as that in MRC 0406-244 are common, then researchers would expect to see ultraviolet-bright emission along the minor axis in stacked images of starbursting active galactic nuclei. References Radio galaxies Seyfert galaxies Eridanus (constellation) 2823818 Interacting galaxies Active galaxies
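The spectral indices quoted above follow the usual radio-astronomy convention S ∝ ν^−α. A small sketch recovers an index from flux densities at two frequencies; the flux values below are illustrative numbers chosen to reproduce the quoted core index, not measurements from the article:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Spectral index alpha in the S proportional to nu**(-alpha) convention."""
    return -math.log(s2 / s1) / math.log(nu2 / nu1)

# Illustrative flux densities consistent with the quoted core index of
# about 0.80 between 4.7 and 8.4 GHz.
s_47 = 1.00                              # arbitrary units at 4.7 GHz
s_84 = s_47 * (8.4 / 4.7) ** -0.80       # implied flux density at 8.4 GHz

print(round(spectral_index(s_47, 4.7, s_84, 8.4), 2))  # 0.8
```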
MRC 0406-244
Astronomy
1,448
30,627,411
https://en.wikipedia.org/wiki/Vincenzo%20Balzani
Vincenzo Balzani (born 15 November 1936 in Forlimpopoli, Italy) is an Italian chemist, now emeritus professor at the University of Bologna. Career He has spent most of his professional life at the "Giacomo Ciamician" Department of Chemistry of the University of Bologna, becoming full professor in 1973. He was appointed emeritus professor on November 1, 2010. Teaching activity He taught courses on General and Inorganic Chemistry, Photochemistry, and Supramolecular Chemistry. He was chairman of the PhD course on Chemical Sciences from 2002 to 2007 and of the "laurea specialistica" in Photochemistry and Material Chemistry from 2004 to 2007. In the academic year 2008–2009, he founded at the University of Bologna an interdisciplinary course on Science and Society. Scientific activity He has carried out intense scientific activity in the fields of photochemistry, photophysics, electron transfer reactions, supramolecular chemistry, nanotechnology, machines and devices at the molecular level, and the photochemical conversion of solar energy. With his 650 publications cited more than 64,000 times in the scientific literature (H index 119), he is one of the best known chemists in the world. He is author or co-author of texts for researchers in English, some translated into Chinese and Japanese, which are currently adopted in universities in many countries. A few of the most significant texts are: Photochemistry of Coordination Compounds (1970), Supramolecular Photochemistry (1991), Molecular Devices and Machines - Concepts and Perspectives for the Nanoworld (2008), Energy for a Sustainable World (2011), Photochemistry and Photophysics: Concepts, Research, Applications (2014). Public education activity For many years, alongside scientific research, he has carried out an intense dissemination activity, also on the relationship between science and society and between science and peace, with particular reference to energy and resource issues. 
He is convinced that scientists have a great responsibility that derives from their knowledge and therefore it is their duty to actively contribute to solving the problems of humanity, particularly those connected to the current energy-climate crisis. Every year he holds dozens of seminars in primary or secondary schools and public conferences to illustrate to students and citizens the problems created by the use of fossil fuels: climate change, ecological unsustainability and the social unease deriving from growing inequalities. He believes that three transitions are necessary: from fossil fuels to renewable energies, from the linear economy to the circular economy and from consumerism to sobriety. On these themes he is coauthor of books much appreciated by students and teachers of secondary schools: Chimica (2000); Energia oggi e domani: Prospettive, sfide, speranze (2004); Energia per l'astronave Terra (2017), whose first edition (2007) won the Galileo award for scientific dissemination; Chimica! Leggere e scrivere il libro della natura (2012), English version: Chemistry! Reading and writing the book of Nature (2014); Energia, risorse, ambiente (2014); Le macchine molecolari (2018), finalist in the National Award for Scientific Dissemination Giancarlo Dosi. Other activities Visiting professor: University of British Columbia, Vancouver, Canada 1972; Energy Research Center, Hebrew University of Jerusalem, Israel, 1979; University of Strasbourg, France, 1990; University of Leuven, Belgium, 1991; University of Bordeaux, France, 1994. Chairman: Gruppo Italiano di Fotochimica (1982–1986), European Photochemistry Association (1988–92); XII IUPAC Symposium on Photochemistry (1988); International Symposium on "Photochemistry and Photophysics of Coordination Compounds (since 1989, now Honorary Chairman); PhD course in Chemistry Sciences (2002–2007) e Laurea specialistica in Photochemistry and Chemistry of Materials (2004–2007), University of Bologna. 
Director: Institute of Photochemistry and High Energy Radiations (FRAE), National Research Council (Italy), Bologna (1977–1988) and Center for the Photochemical Conversion of Solar Energy, University of Bologna (1981–1998). Member of the Scientific Committee of several international scientific journals. Member of the Scientific Committee of the Urban Plan for Sustainable Mobility (PUMS) of the Bologna metropolitan area (2008–). Political activity: In 2009 he started the Science and Society interdisciplinary course at the University of Bologna with the aim of bridging the gap between University and City; he has long hoped for the strengthening of similar initiatives for the cultural growth of the Metropolitan City. In 2014 he founded the Energia per l'Italia group, formed by 22 professors and researchers of the university and of the most important research centers of Bologna, with the aim of offering the Government and local politicians guidelines to tackle the energy problem according to a broad perspective that includes scientific, social, environmental and cultural aspects. Coordinator and editor: Supramolecular Photochemistry, NATO ASI Series n. 214, Reidel, Dordrecht (1987); Supramolecular Chemistry, NATO ASI Series n. 371, Reidel, Dordrecht (1992) (with L. De Cola); Guest Editor, Supramolecular Photochemistry, New J. Chem., N.7–8, vol. 20 (1996); Editor in chief of the Handbook on Electron Transfer in Chemistry, in five volumes, Wiley-VCH, Weinheim (2001); Topics in Current Chemistry, volumes 280 and 281 on Photochemistry and Photophysics of Coordination Compounds (2007).
Associations and academies He is a member of: Società Chimica Italiana; Accademia delle Scienze di Bologna; Accademia delle Scienze di Torino; Società Nazionale di Scienze, Lettere ed Arti in Napoli; Accademia Nazionale delle Scienze detta dei XL; Accademia Nazionale dei Lincei; European Photochemistry Association; ChemPubSoc Europe; Academia Europaea; European Academy of Sciences, European Academy of Sciences and Arts; American Association for the Advancement of Science. Honors and awards Pacific West Coast Inorganic Lectureship, USA and Canada, 1985; Gold Medal "S. Cannizzaro", Italian Chemical Society, 1988; Doctorate "Honoris Causa", University of Fribourg (CH), 1989; Accademia dei Lincei Award in Chemistry, Italy, 1992; Ziegler-Natta Lecturer, Gesellschaft Deutscher Chemiker, Germany, 1994; Italgas European Prize for Research and Innovation, 1994; Centenary Lecturer, The Royal Chemical Society (U.K.), 1995; Porter Medal for Photochemistry, 2000; Prix Franco-Italien de la Société Française de Chimie, 2002; Grande Ufficiale dell’Ordine al Merito della Repubblica Italiana, 2006; Quilico Gold Medal, Organic Division, Italian Chemical Society, 2008; Honor Professor, East China University of Science and Technology of Shanghai, 2009; Blaise Pascal Medal, European Academy of Sciences, 2009; Rotary Club Galileo International Prize for scientific research, 2011; Nature Award for Mentoring in Science, 2013; Archiginnasio d’oro, Città di Bologna, 2016; Grand Prix de la Maison de la Chimie (France) 2016; Leonardo da Vinci Award, European Academy of Sciences, 2017; Nicholas J. Turro Award, Inter-American Photochemical Society, 2018; Cavaliere di Gran Croce della Repubblica Italiana per meriti scientifici, 2019; Primo Levi Award, Gesellschaft Deutscher Chemiker and Società Chimica Italiana, 2019; UNESCO-Russia Mendeleev Prize, 2021. Publications Scientific books
Educational books (in Italian) Some important papers in scientific journals References External links Homepage at the University of Bologna. Accessed 2011-07-19. CV. Accessed 2015-09-06. List of publications. Accessed 2015-09-06. 1936 births 20th-century Italian chemists Living people Academic staff of the University of Bologna National Research Council (Italy) people Photochemists Members of Academia Europaea
Vincenzo Balzani
Chemistry
1,727
10,538,424
https://en.wikipedia.org/wiki/Petrifilm
The Neogen Petrifilm plate is an all-in-one plating system made by the Food Safety Division of the Neogen Corporation. They are heavily used in many microbiology-related industries and fields to culture various micro-organisms and are meant to be a more efficient method for detection and enumeration compared to conventional plating techniques. A majority of its use is for the testing of foodstuffs. Petrifilm plates are designed to be as accurate as conventional plating methods. Ingredients usually vary from plate to plate depending on what micro-organism is being cultured, but generally a Petrifilm comprises a cold-water-soluble gelling agent, nutrients, and indicators for activity and enumeration. A typical Petrifilm plate has a 10 cm(H) × 7.5 cm(W) bottom film which contains a foam barrier accommodating the plating surface, the plating surface itself (a circular area of about 20 cm2), and a top film which encloses the sample within the Petrifilm. A 1 cm × 1 cm yellow grid is printed on the back of the plate to assist enumeration. A plastic “spreader” is also used to spread the inoculum evenly. Comparisons between Petrifilm plates and standard methods Petrifilm plates have become widely used because of their cost-effectiveness, simplicity, convenience, and ease of use. For example, conventional plating would require preparing agar for pour plating, or using agar plates and vial inoculum loops for streak plating; but for Petrifilm plates, the agar is completely housed in a single unit so that only the sample has to be added, which saves time. For incubation, Petrifilm plates can be safely stacked and incubated just like Petri dishes. Since they are paper thin, more plates can be stacked together than Petri dishes (although it is recommended that Petrifilms be stacked no higher than 20). For enumeration, Petrifilm plates can be used on any colony counter for enumeration just like a Petri dish. 
Various enumeration experiments have shown very little or no variance between counts obtained through Petrifilm and standard agar counts. In some cases, Petrifilms were more sensitive in detection than standard microbiology methods, although higher sensitivity could possibly lead to an increased risk of false-positive results. Procedure for using Petrifilm plates First, a sample must be prepared through standard weighing and serial dilution, with stomaching and pH adjustment if necessary. Next, to inoculate, the top layer is lifted to expose the plating surface, and with a pipette, 1 mL of the diluted sample is added. The top film is then slowly rolled down and the “spreader” is used for even distribution. It takes a minute for gelling to occur. After incubation After the required incubation period, colonies appear either as splotches, spots which are surrounded by bubbles, or a combination of both, which differ from micro-organism to micro-organism. Enumeration can be done on a standard colony counter. Picking out individual colonies for interpretation can also be done because the top film can be lifted quite effortlessly to expose the gel. Unfortunately, if a sample is too dark in colour (e.g. chocolate or hot chocolate), enumeration becomes more difficult or impossible, since the stained colonies are less visible. Types of Petrifilm plates Aerobic plate count Aerobic plate count with lactic acid bacteria Lactic acid bacteria Coliform Rapid coliform High-Sensitivity coliform E. coli / coliform Enterobacteriaceae Staphylococcus express Yeast and mould Environmental listeria References External links Petrifilm Plates Neogen Petrifilm Catalog See also Agar plate Agar Petri dish Neogen Corporation Microbiology terms Microbiology equipment
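The enumeration step described above reduces to simple arithmetic once a plate in the countable range is chosen: the colony count, divided by the dilution plated and the inoculum volume, gives the concentration in the original sample. A minimal sketch (the count and dilution below are hypothetical examples, not values from this article):

```python
def cfu_per_ml(colony_count, dilution, volume_plated_ml=1.0):
    """Estimate CFU/mL of the original sample from a plate count.

    colony_count     -- colonies counted on the plate
    dilution         -- dilution plated, e.g. 1e-3 for the 10^-3 tube
    volume_plated_ml -- Petrifilm plates are inoculated with 1 mL
    """
    return colony_count / (dilution * volume_plated_ml)

# Hypothetical: 42 colonies on the 10^-3 plate of a food homogenate,
# giving roughly 4.2 x 10^4 CFU/mL in the original sample.
estimate = cfu_per_ml(42, 1e-3)
```

The same function covers any dilution in the series; only the plate whose count falls in the countable range should be used for the estimate.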
Petrifilm
Biology
803
11,621,390
https://en.wikipedia.org/wiki/Map%20analysis
A map analysis is a study regarding map types, e.g. political maps, military maps, contour maps, etc., and the unique physical qualities of a map, e.g. scale, title, legend, etc. It is also a way of decoding the message and symbols of the map and placing it within its proper spatial and cultural context, as well as identifying changes in features and landscapes. References Cartography
Map analysis
Environmental_science
87
14,815,148
https://en.wikipedia.org/wiki/SIX3
Homeobox protein SIX3 is a protein that in humans is encoded by the SIX3 gene. Function The SIX homeobox 3 (SIX3) gene is crucial in embryonic development, providing the instructions necessary for the formation of the forebrain and for eye development. SIX3 is a transcription factor that binds to specific DNA sequences, controlling whether a gene is active or inactive. Activity of the SIX3 gene represses Wnt1 gene activity, which ensures development of the forebrain and establishes the proper anterior-posterior identity in the mammalian brain. By blocking Wnt1 activity, SIX3 is able to prevent abnormal expansion of the posterior portion of the brain into the anterior brain area. During retinal development, SIX3 plays a key role in the activation of Pax6, the master regulator of eye development. Furthermore, SIX3 is active in the presumptive lens ectoderm (PLE), the region in which the lens is expected to develop. If SIX3 is removed from this region, the lens fails to thicken and develop into its proper morphological state. SIX3 also plays a strategic role in the activation of SOX2. In addition, SIX3 represses selected members of the Wnt family: in retinal development, SIX3 is responsible for the repression of Wnt8b, and in forebrain development it is responsible for the repression of Wnt1 and the activation of the Sonic Hedgehog gene (SHH). Clinical significance Mutations in SIX3 are the cause of a severe brain malformation called holoprosencephaly type 2 (HPE2). In HPE2, the brain fails to separate into two hemispheres during early embryonic development, leading to eye and brain malformations, which result in serious facial abnormalities. A mutant zebrafish knockout model has been developed in which the anterior part of the head was missing due to an atypical increase in Wnt1 activity. When injected with SIX3, these zebrafish embryos were able to successfully develop a normal forebrain.
When SIX3 was turned off in mice, it resulted in a lack of retina formation due to excessive expression of Wnt8b in the region where the forebrain normally develops. Both of these studies demonstrate the importance of SIX3 activity in brain and eye development. Interactions SIX3 has been shown to interact with TLE1 and Neuron-derived orphan receptor 1. References Further reading External links GeneReviews/NCBI/NIH/UW entry on Anophthalmia / Microphthalmia Overview Transcription factors Developmental neuroscience
SIX3
Chemistry,Biology
547
19,317,909
https://en.wikipedia.org/wiki/Craterellus%20lutescens
Craterellus lutescens, formerly sometimes called Cantharellus lutescens or Cantharellus xanthopus or Cantharellus aurora, commonly known as Yellow Foot, camagroc in Catalan, craterelle jaune in French, is a species of mushroom. It is closely related to Craterellus tubaeformis. Its hymenium is usually orange or white, whereas the hymenium of C. tubaeformis is grey. C. lutescens is also usually found in wetlands. Description The species is more brightly coloured than Craterellus tubaeformis. The cap is lobed irregularly and is brown to bistre. The hymenium and stipe are also more brightly coloured than C. tubaeformis. The hymenium is almost smooth or slightly veined and is pink. The stipe is yellow-orange. The species is edible. Habitat The species can commonly be found in large colonies in some coniferous forests, under spruce, mountain fir trees, or pinewoods near the seashore. Research An extract of Craterellus lutescens exhibits inhibitory activity on thrombin. References External links Craterellus aurora MushroomExpert.com Edible fungi Cantharellales Fungi of Europe Fungus species Taxa named by Elias Magnus Fries
Craterellus lutescens
Biology
269
2,271,614
https://en.wikipedia.org/wiki/Arrayed%20waveguide%20grating
Arrayed waveguide gratings (AWG) are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) systems. These devices are capable of multiplexing many wavelengths into a single optical fiber, thereby considerably increasing the transmission capacity of optical networks. The devices are based on a fundamental principle of optics, which states that light waves of different wavelengths do not interfere linearly with each other. This means that, if each channel in an optical communication network makes use of light of a slightly different wavelength, then the light from many of these channels can be carried by a single optical fiber with negligible crosstalk between the channels. The AWGs are used to multiplex channels of several wavelengths onto a single optical fiber at the transmission end and are also used as demultiplexers to retrieve individual channels of different wavelengths at the receiving end of an optical communication network. Operation of AWG devices Conventional silica-based AWGs, as illustrated in the figure above, are planar lightwave circuits fabricated by depositing layers of doped and undoped silica on a silicon substrate. The AWGs consist of a number of input (1) and output (5) couplers, free-space propagation regions (2) and (4), and the grating waveguides (3). The grating consists of many waveguides, each having a constant length increment (ΔL) relative to its neighbour. Light is coupled into the device via an optical fiber (1) connected to the input port. Light diffracting out of the input waveguide at the coupler/slab interface propagates through the free-space region (2) and illuminates the grating with a Gaussian distribution. Each wavelength of light coupled to the grating waveguides (3) undergoes a constant change of phase attributed to the constant length increment in grating waveguides.
The diffracted light from each waveguide within the grating undergoes constructive interference, resulting in a refocusing of the light at the output waveguides (5). The spatial position of the output channels is wavelength-dependent, determined by the array phase shift induced by the constant length increment in the grating waveguides. References Optical devices Photonics Fiber optics Multiplexing
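The wavelength-dependent refocusing described above follows from a simple phase condition: at the design (center) wavelength, the optical path difference between adjacent grating waveguides equals an integer number of wavelengths. A minimal numerical sketch, with assumed illustrative values (not from any particular device) for the effective index, grating order, and length increment:

```python
# AWG center-wavelength condition: n_eff * delta_L = m * lambda_c
# All parameter values below are illustrative assumptions.
n_eff = 1.45          # effective refractive index of the arrayed waveguides
m = 30                # grating (diffraction) order
delta_L = 32.069e-6   # constant length increment between waveguides, in metres

# Wavelength focused onto the central output port:
lambda_c = n_eff * delta_L / m
print(f"center wavelength ~ {lambda_c * 1e9:.1f} nm")
```

With these assumed values the condition lands near the 1550 nm telecommunications band; in a real design, ΔL and the grating order are chosen together to place the center channel at the desired grid wavelength.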
Arrayed waveguide grating
Materials_science,Engineering
478
5,558,617
https://en.wikipedia.org/wiki/BLOSUM
In bioinformatics, the BLOSUM (BLOcks SUbstitution Matrix) matrix is a substitution matrix used for sequence alignment of proteins. BLOSUM matrices are used to score alignments between evolutionarily divergent protein sequences. They are based on local alignments. BLOSUM matrices were first introduced in a paper by Steven Henikoff and Jorja Henikoff. They scanned the BLOCKS database for very conserved regions of protein families (that do not have gaps in the sequence alignment) and then counted the relative frequencies of amino acids and their substitution probabilities. Then, they calculated a log-odds score for each of the 210 possible substitution pairs of the 20 standard amino acids. All BLOSUM matrices are based on observed alignments; they are not extrapolated from comparisons of closely related proteins like the PAM Matrices. Biological background The genetic instructions of every replicating cell in a living organism are contained within its DNA. Throughout the cell's lifetime, this information is transcribed and replicated by cellular mechanisms to produce proteins or to provide instructions for daughter cells during cell division, and the possibility exists that the DNA may be altered during these processes. This is known as a mutation. At the molecular level, there are regulatory systems that correct most — but not all — of these changes to the DNA before it is replicated. The functionality of a protein is highly dependent on its structure. Changing a single amino acid in a protein may reduce its ability to carry out this function, or the mutation may even change the function that the protein carries out. Changes like these may severely impact a crucial function in a cell, potentially causing the cell — and in extreme cases, the organism — to die. Conversely, the change may allow the cell to continue functioning albeit differently, and the mutation can be passed on to the organism's offspring. 
If this change does not result in any significant physical disadvantage to the offspring, the possibility exists that this mutation will persist within the population. The possibility also exists that the change in function becomes advantageous. The 20 amino acids translated by the genetic code vary greatly in the physical and chemical properties of their side chains. However, these amino acids can be categorised into groups with similar physicochemical properties. Substituting an amino acid with another from the same category is more likely to have a smaller impact on the structure and function of a protein than replacement with an amino acid from a different category. Sequence alignment is a fundamental research method for modern biology. The most common sequence alignment for protein is to look for similarity between different sequences in order to infer function or establish evolutionary relationships. This helps researchers better understand the origin and function of genes through the nature of homology and conservation. Substitution matrices are utilized in algorithms to calculate the similarity of different sequences of proteins; however, the utility of the Dayhoff PAM matrix has decreased over time because it requires sequences with more than 85% similarity. In order to fill in this gap, Henikoff and Henikoff introduced the BLOSUM (BLOcks SUbstitution Matrix) matrix, which led to marked improvements in alignments and in searches using queries from each of the groups of related proteins. Terminology BLOSUM Blocks Substitution Matrix, a substitution matrix used for sequence alignment of proteins. Scoring metrics (statistical versus biological) When evaluating a sequence alignment, one would like to know how meaningful it is. This requires a scoring matrix, or a table of values that describes the probability of a biologically meaningful amino-acid or nucleotide residue-pair occurring in an alignment.
Scores for each position are obtained from the frequencies of substitutions in blocks of local alignments of protein sequences. BLOSUM r The matrix built from blocks clustered at the r% similarity level. E.g., BLOSUM62 is the matrix built using sequences with less than 62% similarity (sequences with ≥ 62% identity were clustered together). Note: BLOSUM 62 is the default matrix for protein BLAST. Experimentation has shown that the BLOSUM-62 matrix is among the best for detecting most weak protein similarities. Several sets of BLOSUM matrices exist using different alignment databases, named with numbers. BLOSUM matrices with high numbers are designed for comparing closely related sequences, while those with low numbers are designed for comparing distantly related sequences. For example, BLOSUM80 is used for closely related alignments, and BLOSUM45 is used for more distantly related alignments. The matrices were created by merging (clustering) all sequences that were more similar than a given percentage into one single sequence and then comparing only those sequences (which were all more divergent than the given percentage value), thus reducing the contribution of closely related sequences. The percentage used was appended to the name, giving BLOSUM80, for example, where sequences that were more than 80% identical were clustered. Construction of BLOSUM matrices BLOSUM matrices are obtained by using blocks of similar amino acid sequences as data, then applying statistical methods to the data to obtain the similarity scores. Statistical Methods Steps: Eliminating Sequences Eliminate the sequences that are more than r% identical. This can be done either by removing sequences from the block or by finding similar sequences and replacing them with a single new sequence that represents the cluster. Elimination is done to remove protein sequences that are more similar than the specified threshold.
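The elimination/clustering step can be illustrated with a short, hedged sketch: sequences whose pairwise identity meets the r% threshold are greedily grouped, so each cluster contributes like a single sequence. (The published procedure actually down-weights cluster members rather than simply merging them; the toy sequences and greedy single-linkage grouping below are simplifications for illustration only.)

```python
def percent_identity(a, b):
    """Fraction of identical positions between two equal-length aligned sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cluster_block(sequences, threshold=0.62):
    """Greedy single-linkage clustering at the given identity threshold
    (0.62 corresponds to the BLOSUM62 clustering level)."""
    clusters = []
    for seq in sequences:
        for group in clusters:
            # Join the first cluster containing a sufficiently similar member.
            if any(percent_identity(seq, member) >= threshold for member in group):
                group.append(seq)
                break
        else:
            clusters.append([seq])
    return clusters

# Toy block: the first two and last two sequences are mutually >62% identical.
block = ["ACDEFG", "ACDEFA", "WYWYWY", "WYWYWF"]
clusters = cluster_block(block)  # -> [['ACDEFG', 'ACDEFA'], ['WYWYWY', 'WYWYWF']]
```

Pair counts would then be accumulated per cluster rather than per sequence, which is what reduces the contribution of closely related sequences.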
Calculating Frequency & Probability The BLOCKS database stores multiple alignments of the most conserved regions of protein families; these alignments are used to derive the BLOSUM matrices. Only the sequences with a percentage of identity lower than the threshold are used. Within each block, the pairs of amino acids in each column of the multiple alignment are counted. Log odds ratio This gives the ratio of the observed occurrence of each amino acid combination to the expected occurrence of the pair: odds(i,j) = p(i,j) / (q(i) · q(j)), where p(i,j) is the probability of observing the pair i,j and q(i) · q(j) is the expected probability of such a pair occurring, given the background probabilities of each amino acid. The logarithm of this ratio is rounded off and used in the substitution matrix. BLOSUM Matrices The scores for relatedness are calculated from the log-odds ratio and then rounded off to give the substitution matrices, the BLOSUM matrices. Score of the BLOSUM matrices A scoring matrix or a table of values is required for evaluating the significance of a sequence alignment, such as describing the probability of a biologically meaningful amino-acid or nucleotide residue-pair occurring in an alignment. Typically, when two nucleotide sequences are being compared, all that is being scored is whether or not two bases are the same at one position. All matches and mismatches are respectively given the same score (typically +1 or +5 for matches, and -1 or -4 for mismatches). But it is different for proteins. Substitution matrices for amino acids are more complicated and implicitly take into account everything that might affect the frequency with which any amino acid is substituted for another. The objective is to provide a relatively heavy penalty for aligning two residues together if they have a low probability of being homologous (correctly aligned by evolutionary descent).
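The log-odds calculation described above can be sketched in a few lines. The pair and background frequencies below are toy numbers, not values from the BLOCKS database, and the half-bit scaling (a factor of 2 with base-2 logarithms) matches the convention commonly used for BLOSUM matrices:

```python
import math

def log_odds_score(p_pair, q_i, q_j, inv_lambda=2.0):
    """Rounded log-odds score: (1/lambda) * log2(observed / expected).

    p_pair     -- observed probability of seeing amino acids i and j aligned
    q_i, q_j   -- background probabilities of amino acids i and j
    inv_lambda -- scaling factor; 2.0 gives half-bit units
    """
    expected = q_i * q_j  # (for i != j the expected value is 2*q_i*q_j; omitted for brevity)
    return round(inv_lambda * math.log2(p_pair / expected))

# Toy pair observed four times as often as chance predicts:
# log2(4) = 2 bits, i.e. a score of +4 in half-bit units.
score = log_odds_score(p_pair=0.04, q_i=0.1, q_j=0.1)  # -> 4
```

A ratio of 1 (observed equals expected) scores 0, and pairs seen less often than chance score negatively, matching the sign conventions described in this section.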
Two major forces drive the amino-acid substitution rates away from uniformity: different substitutions occur with different frequencies, and some substitutions are less functionally tolerated than others and are therefore selected against. Commonly used substitution matrices include the blocks substitution (BLOSUM) and point accepted mutation (PAM) matrices. Both are based on taking sets of high-confidence alignments of many homologous proteins and assessing the frequencies of all substitutions, but they are computed using different methods. Scores within a BLOSUM are log-odds scores that measure the logarithm of the ratio between the likelihood of two amino acids appearing together in a biologically meaningful alignment and the likelihood of the same amino acids appearing by chance. The matrices are based on the minimum percentage identity of the aligned protein sequences used in calculating them. Every possible identity or substitution is assigned a score based on its observed frequencies in the alignment of related proteins. A positive score is given to the more likely substitutions while a negative score is given to the less likely substitutions. To calculate a BLOSUM matrix, the following equation is used: S(i,j) = (1/λ) · log( p(i,j) / (q(i) · q(j)) ). Here, p(i,j) is the probability of two amino acids i and j replacing each other in a homologous sequence, and q(i) and q(j) are the background probabilities of finding the amino acids i and j in any protein sequence. The factor λ is a scaling factor, set such that the matrix contains easily computable integer values. An example - BLOSUM62 BLOSUM80: more related proteins BLOSUM62: midrange BLOSUM45: distantly related proteins An article in Nature Biotechnology revealed that the BLOSUM62 used for so many years as a standard is not exactly accurate according to the algorithm described by Henikoff and Henikoff. Surprisingly, the miscalculated BLOSUM62 improves search performance. In the BLOSUM62 matrix, the amino acids in the table are grouped according to the chemistry of the side chain.
Each value in the matrix is calculated by dividing the frequency of occurrence of the amino acid pair in the BLOCKS database, clustered at the 62% level, by the probability that the same two amino acids might align by chance. The ratio is then converted to a logarithm and expressed as a log odds score, as for PAM. BLOSUM matrices are usually scaled in half-bit units. A score of zero indicates that the frequency with which a given two amino acids were found aligned in the database was as expected by chance, while a positive score indicates that the alignment was found more often than by chance, and a negative score indicates that the alignment was found less often than by chance. Some uses in bioinformatics Research applications BLOSUM scores were used to predict and understand the surface gene variants among hepatitis B virus carriers and T-cell epitopes. Surface gene variants among hepatitis B virus carriers DNA sequences of HBsAg were obtained from 180 patients, of whom 51 were chronic HBV carriers and 129 were newly diagnosed patients, and compared with consensus sequences built with 168 HBV sequences imported from GenBank. Literature review and BLOSUM scores were used to define potentially altered antigenicity. Reliable prediction of T-cell epitopes A novel input representation has been developed consisting of a combination of sparse encoding, BLOSUM encoding, and input derived from hidden Markov models. This method predicts T-cell epitopes for the genome of hepatitis C virus, and possible applications of the prediction method to guide the process of rational vaccine design are discussed. Use in BLAST BLOSUM matrices are also used as a scoring matrix when comparing DNA sequences or protein sequences to judge the quality of the alignment. This form of scoring system is utilized by a wide range of alignment software including BLAST. Comparing PAM and BLOSUM In addition to BLOSUM matrices, a previously developed scoring matrix can be used. This is known as a PAM.
The two result in the same scoring outcome but use differing methodologies: BLOSUM looks directly at mutations in motifs of related sequences, while PAMs extrapolate evolutionary information based on closely related sequences. Since both PAM and BLOSUM are different methods for showing the same scoring information, the two can be compared, but due to the very different methods of obtaining the scores, a PAM100 does not equal a BLOSUM100. The relationship between PAM and BLOSUM The differences between PAM and BLOSUM Software Packages There are several software packages in different programming languages that allow easy use of BLOSUM matrices. Examples are the blosum module for Python, or the BioJava library for Java. See also Sequence alignment Point accepted mutation References External links BLOCKS WWW server Scoring systems for BLAST at NCBI Data files of BLOSUM on the NCBI FTP server. Interactive BLOSUM Network Visualization Genetics Biochemistry methods Computational phylogenetics Matrices
BLOSUM
Chemistry,Mathematics,Biology
2,429
10,380,134
https://en.wikipedia.org/wiki/Boeing%20Everett%20Factory
The Boeing Everett Factory, officially the Everett Production Facility, is an airplane assembly facility operated by Boeing in Everett, Washington, United States. It sits on the north side of Paine Field and includes the largest building in the world by volume at over , which covers . The entire complex covers approximately and spans both sides of State Route 526 (named the Boeing Freeway). The factory was built in 1967 for the Boeing 747 and has since been expanded several times to accommodate new airliners, including the 767, 777, and 787 programs. More than 5,000 widebody aircraft have been built at the Everett factory since it opened. Facilities The Boeing Everett complex sits on in southwestern Everett, about north of Seattle. It includes up to 200 separate buildings and facilities, mostly on the north and east sides of Paine Field's main runway, and straddles both sides of State Route 526 (named the Boeing Freeway). The complex includes a fire station, a medical clinic, a gymnasium, on-site security, and seven restaurants and cafes. , Boeing has 30,000 workers at its Everett site who are scheduled in three shifts, primarily during daytime hours. The company is the largest employer in Everett and Snohomish County. The main assembly building, immediately north of the Boeing Freeway, covers and is organized into six production lines that are separated by walls, offices, and other spaces. It is the world's largest building by volume at of interior space according to Guinness World Records; the building is large enough to fit all of Disneyland or 75 American football fields. The production lines move at a rate of per minute and are guided by 26 overhead cranes that move along of track. These cranes are suspended along the roof trusses, which are long and are supported by columns that are tall. 
A network of pedestrian and utility tunnels spans under the factory floor; employees also use a shared fleet of 1,300 bicycles and tricycles to move around the factory floor. The main building is tall and has six hangar doors that are each tall and wide. The doors have a six-part mural that was recognized as the world's largest digital image in 2006 by Guinness World Records. The building has a central ventilation system but lacks air conditioning; it is instead cooled by opening the doors for outdoor air. The building is heated through residual warming from employees and equipment, including the 1 million overhead lights in the factory. An urban legend states that clouds used to form inside the main building due to its size prior to the installation of upgraded ventilation systems. Adjacent buildings include a composite wing manufacturing plant with of floor space; paint and seal buildings; and an auxiliary fuselage assembly plant for the Boeing 777X. The north side of the factory complex is connected to the flight line at Paine Field via a taxiway that crosses over the Boeing Freeway west of Airport Road; airplanes are towed from the factory to flight line facilities at night to avoid disrupting traffic. The south side includes a set of three paint hangars, a delivery center with conference rooms, and parking spaces for airplanes. The flight line area connects to the main runway at Paine Field, which is long and is the only one at the airport that can accommodate jetliners. The runway has also been used for commercial service since the opening of a new passenger terminal at the airport in 2019. Additional spaces for parked airplanes are on the west side of the runway and southwest of the main building; Paine Field's short crosswind runway has also occasionally been used to park airplanes since 2010, and the runway and an adjacent taxiway have been leased by Boeing from the county government to store airplanes.
History Boeing opened its first facilities in Everett on October 13, 1943, at a former auto garage to produce sections for the Boeing B-17 Flying Fortress. The company had several small shops in the city, but their presence in the area was reduced by 1963. The first 25 orders for the Boeing 747, which was to be the world's largest jetliner, were sold to Pan American World Airways for $525 million in March 1966. The program would require a larger factory than their Renton facility, which was instead planned to be used for the conceptual 2707 supersonic airliner. Among the sites considered by Boeing for a new factory were Monroe, Washington; McChord Air Force Base near Tacoma, Washington; Moses Lake, Washington; Cleveland, Ohio; and Walnut Creek, California. On June 17, 1966, the company announced that it had selected a site adjacent to Paine Field as the future home of its Boeing 747 assembly plant. Boeing purchased north of the airport, which had primarily been used by the U.S. military and small businesses; a 75-year lease for use of Paine Field was also signed with the county government, which owned the airport. The company had already spent several months acquiring properties around the airport in preparation for the announcement and cleared parts of the site by late May. The factory, planned to become the world's largest building by volume, was built in sections beginning in late June. The first section housed a mockup of the Boeing 747 that had been under assembly at the Renton factory. A railroad spur connecting the site to the mainline tracks at Mukilteo was constructed through Japanese Gulch. The first 113 workers at the Everett factory began work on January 3, 1967, and prepared for the assembly of the relocated Renton mockup. The factory was officially opened on May 1, 1967, four months after the first workers had arrived to start construction of the 747. Construction of the factory involved of soil to be excavated.
The main factory building was originally and later expanded by 45 percent in 1979 as part of the Boeing 767 program and another 50 percent in 1990 for the Boeing 777. The company acquired of Paine Field property from the county government in 1989 to expand its flight line. To accommodate the Dreamlifter, a converted 747-400 which delivered 787 sections to the plant, a base was constructed on the western edge of Paine Field's runway. Opening in October 2013, the base, called the Dreamlifter Operations Center, was funded by Snohomish County with $35 million in bonds; it is owned by the county via the airport, with Boeing originally leasing the site and servicing the bonds. Following Boeing's decision to shutter the 787 production line in Everett and consolidate 787 production in South Carolina, the lease on the Dreamlifter Operations Center was transferred to FedEx for use as a cargo base. Several workers at the Everett facility tested positive for COVID-19 in early March 2020, prior to a full shutdown of operations. The factory was shut down for three weeks until workers were able to return with mandatory face masks, social distancing, and staggered start times to reduce potential exposure. Current production aircraft Boeing 767 The Boeing 767 is a mid-size, wide-body, twin-engine jet airliner. First introduced in 1979 to complement the larger 747, the aircraft was capable of carrying 218 passengers in a typical three-class configuration over a range of and at a cruising speed of Mach 0.80 (530 mph, 851 km/h, 470 kn). Production of passenger variants ended in 2017 after its successor, the 787 Dreamliner, entered service in 2011. Freighter and military variants remain in limited production. These are the 767 variants currently in production as of 2023: 767-300F (Freighter) KC-46 Pegasus Boeing 777 The Boeing 777 is a large-size, wide-body, twin-engine jet airliner. Production of this plane began in 1993. 
, the factory is being retooled to produce the 777X, the next generation of the aircraft. The 777-9 provides seating for 426 passengers and a range of over 7,285 nmi (13,492 km; 8,383 mi). These are the 777 variants currently in production as of 2024: 777-9 777F (Freighter) Boeing 737 MAX The Boeing 737 MAX is a mid-size, narrow-body, twin-engine jet airliner. Production of the aircraft was expected to begin in the second half of 2024. This will be the fourth production line for the Boeing 737 MAX and is intended to allow for added production capacity beyond that of the Boeing Renton Factory to meet demand. The line will replace the discontinued Boeing 787 line at the factory. In January 2024, the FAA announced it would not grant any production expansion of the 737 MAX until it was satisfied that more stringent quality assurance measures had been enacted, stemming from the in-flight loss of a plug door panel of a MAX 9 jet. No timeline has been given on when it may do so. Former production aircraft Boeing 747 The Boeing 747 is a large-size, wide-body, four-engine jet airliner. The 747-8I, the last passenger variant in production, is capable of carrying 467 passengers in a typical three-class configuration, and has a range of and a cruising speed of Mach 0.855 (570 mph, 918 km/h, 495 kn). The Boeing 747 was one of the first wide-body aircraft to be produced and was the first jet to use a wide-body configuration for carrying passengers. Because of the vast size of the 747, the Boeing Everett Factory was designed and built to accommodate the assembly of these large planes, as there was not enough room at the Boeing facilities in Seattle. Production of this aircraft began in 1967 and continued until 2022, with the last 747-8F (N863GT) rolling out in December for customer Atlas Air. Boeing 787 The Boeing 787 Dreamliner is a mid-size, wide-body, twin-engine jet airliner. 
The current passenger variants in production are capable of carrying 242–290 passengers in a typical two-class configuration, have a range of and a cruising speed of Mach 0.85 (562 mph, 902 km/h, 487 kn). Production of this plane began in 2006. In February 2011, Boeing announced that some 787 work was being moved to a plant in North Charleston, South Carolina, in order to relieve overcrowding of 787s at Everett caused by large volumes of 787 orders. In July 2014, Boeing announced that the 787-10 variant, the longest variant of the 787, would be produced exclusively in South Carolina, as the fuselage pieces for that variant are too large to fit in the Dreamlifter for transport to Everett. Undertaking drastic cost-cutting measures in the wake of the COVID-19 pandemic and its resulting impact on aviation, Boeing announced in July 2020 that it would consider consolidating all of its 787 assembly in a single location; the company chose to move all production to South Carolina on October 1, causing backlash from the Washington state government. The move was completed in February 2021; it was cemented with Boeing's agreement to transfer its lease of the Dreamlifter Operations Center to package courier FedEx in April 2021. FedEx, which takes over the lease on November 1, plans to use it for its cargo airline operations. The two 787 variants formerly produced in Everett were the 787-8 and the 787-9. Tours Following several months of unofficial visits, Boeing began offering factory tours with the first rollout of the 747 in 1968. The first year of tours had over 39,000 visitors, which later grew to 55,000 annually by the 1980s; a dedicated tour building was constructed in 1984 and later replaced by the Future of Flight Aviation Center in 2005. The new center has a theater, exhibits, a Boeing Store gift shop, and a café. As of 2020, over 150,000 people came each year to visit the factory. The Boeing factory tour was suspended from 2020 to 2023 due to the COVID-19 pandemic. 
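The parallel speed figures quoted for these aircraft (mph, km/h, knots) can be cross-checked with standard unit conversions. A minimal sketch, assuming the exact definitions 1 knot = 1.852 km/h and 1 mile = 1.609344 km, applied to the 787 cruise numbers quoted above; the function names are illustrative only:

```python
# Exact conversion factors (by definition).
KMH_PER_KNOT = 1.852
KM_PER_MILE = 1.609344

def knots_to_kmh(kn):
    """Convert a speed in knots to km/h."""
    return kn * KMH_PER_KNOT

def mph_to_kmh(mph):
    """Convert a speed in mph to km/h."""
    return mph * KM_PER_MILE

# The 787 cruise at Mach 0.85 is quoted as 562 mph, 902 km/h, 487 kn.
print(round(knots_to_kmh(487)))  # 902 km/h, matching the quoted figure
print(round(mph_to_kmh(562)))    # 904 km/h; slight drift from 902 due to rounding of the mph figure
```

The knot and km/h figures agree exactly; the mph figure carries about 2 km/h of rounding drift, which is typical when specifications are rounded independently in each unit.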
References External links 1967 establishments in Washington (state) Boeing manufacturing facilities Boeing Buildings and structures completed in 1967 Buildings and structures in Everett, Washington Industrial buildings and structures in Washington (state) Manufacturing plants in the United States Manufacturing plants Manufacturing Manufacturing buildings and structures Tourist attractions in Everett, Washington
Boeing Everett Factory
Engineering
2,489
76,191,936
https://en.wikipedia.org/wiki/Republica%20%28plant%29
Republica is an enigmatic genus of flowering plants which includes three known species: Republica hickeyi, Republica kummerensis, and Republica litseafolia. The genus has been found in Eocene age geologic formations along the Pacific coast of North America. The affiliations of Republica are uncertain, with the most recent placement being tentatively in the now broken up subclass Hamamelididae. Distribution The three species currently assigned to Republica are all known from western North America. The type species R. hickeyi is isolated to the Klondike Mountain Formation in the Ypresian Eocene Okanagan Highlands of northwest central Washington. The first named species, R. litseafolia, has been identified from its type locality at the "Chalk bluffs" site in the northern area of California's Ione Formation. The site has been variously assigned to the early Eocene by Harry MacGinitie, based on attempted correlation to the Ione type strata, resulting in a Ypresian age often being reported. However, other authors suggest the age may be mistaken, based on anomalously low mean annual temperature estimates compared to other sites purported to be the same age located north and inland of the Chalk Bluffs site, with a possible age being suggested by Donald Prothero et al. (2011). Leaves assigned to R. litseafolia were later reported by Jack Wolfe (1968) from the Eocene Puget Group floras of the Green River gorge in King County, Washington. Similar-looking leaves were assigned to the third species, R. kummerensis, with the two separated by geochronology. R. litseafolia is most frequent in the older Franklinian and Fultonian stages before becoming scarce in the early Ravenian localities. R. kummerensis, on the other hand, first appears in the Puget Group's late Ravenian and is found frequently in the Kummerian age sites. The R. 
kummerensis range was expanded by Wolfe (1977) to include the Kulthieth Formation (as the "Kushtaka formation"), in the panhandle of southeast Alaska. The formation was reported by Wolfe (1977) as early Oligocene and of the Kummerian paleofloral stage, with R. kummerensis coming from two sites outcropping along the southern slopes of Carbon Mountain above Berg Lake, Hoonah–Angoon Census Area. The Kummerian has subsequently been revised to span between 40 mya and the Eocene-Oligocene boundary. History and classification The first Republica species to be named was initially studied and described by Harry MacGinitie in 1941 based on fossils from the Ione Formation's Chalk Bluff and Buckeye Flat sites. Based on a series of five cotypes, numbers 2199–2203 in the University of California Museum of Paleontology paleobotany collection, he named the new species Laurophyllum litseafolia. He did not give specific details on the etymology, but chose to place the new species in Laurophyllum, a form genus for Lauraceae-like leaves, while noting that he considered the most similar species to be Cryptocary multipaniculata. In 1968 Wolfe finished his monograph on the fossil plants of the Puget Group's Green River gorge, among which were a series of leaves which he deemed the same as the Ione fossils. However, he disagreed with MacGinitie's placement of the species in Lauraceae and opted to follow Edward W. Berry's choice of genus for similar leaves from the Wilcox Group. As such the species was moved to the form genus Artocarpoides as the new combination Artocarpoides litseafolia, with suggested family affiliation in Moraceae. Wolfe also described a second species, Artocarpoides kummerensis, from holotype USNM 42104 and paratypes USNM 42105, USNM 42158, and USNM 42159, all part of the US National Museum. Found at five sites in the Green River gorge area, the two species form a gradual series according to Wolfe, with leaves having less than a 2:1 length/width ratio being placed in A. 
litseafolia and those with a length/width greater than 2:1 considered as A. kummerensis. As with MacGinitie's species, Wolfe did not give an etymological explanation for the species, though the paper does discuss the Kummer sandstone bed being the base of the Kummerian section at the type locality for the stage. The next year, while discussing general taxonomic changes in western fossil floras, MacGinitie (1969) again discussed Artocarpoides litseafolia, which he and Wolfe had talked over after Wolfe's 1968 paper. Both paleobotanists were of the same opinion that placement within Artocarpoides, and thus Moraceae, was wrong. While the thick, long petiole and the heart-shaped base are found in lauraceous genera, and the distinct quaternary and quinternary veins are seen in moraceous genera, all those characters combined are not seen in either family. As such MacGinitie moved the species to Dicotylophyllum litseafolia, Dicotylophyllum being a form genus for angiosperm leaf fossils of uncertain family or higher affinity. Wolfe again addressed "A." kummerensis while reporting it from the "Kushtaka formation" in Alaska. While he acknowledged and backed the 1969 move to D. litseafolia, he also maintained that it was closely related to the leaves from Alaska and the Puget Group. He therefore moved that species to Dicotylophyllum as well, under the new combination Dicotylophyllum kummerensis. During the study of fossil angiosperms from the Klondike Mountain Formation around Republic, Washington, Jack Wolfe and Wesley Wehr identified a leaf, specimen USNM 32697A, B, of unique venation and uncertain placement, but bearing a similarity with both the species then included in Dicotylophyllum. They chose to erect a new genus, named for Republic, which encompassed the two older species as Republica kummerensis and Republica litseafolia respectively, along with the new species from Republic. Wolfe and Wehr named their new species Republica hickeyi, with USNM 32697A, B. 
as the holotype, and noted that the species epithet was coined as a patronym for Leo Hickey for his work on angiosperm leaf morphology comparison. Wolfe and Wehr again discussed the possible taxonomic affinities for the genus, noting it to be rather uncertain. They again discounted a placement within Lauraceae, despite superficial similarity to Clethra, based on the lack of branches along the lower sides of the secondaries as seen in Republica. Likewise, they considered Gironniera, then placed in Ulmaceae, as superficially similar, but the numerous and well developed secondaries in Republica seem to exclude a family relationship. As such Wolfe and Wehr were still uncertain regarding the taxon's higher affiliation and suggested placement into subclass Hamamelididae of the now abandoned Cronquist system. Molecular phylogenetics published by the Angiosperm Phylogeny Group broke up the subclass in the late 1990s, with at least one pharmacognosist, Sonny Larsson, describing Hamamelididae as "grossly polyphyletic". In 2021, a new genus of damselflies was described from the Klondike Mountain Formation at Republic, and the genus was named the hemihomonym Republica. Description Leaves of the genus Republica are smooth margined, with a symmetrical outline and simple pinnate venation. The secondary veins fork from the midvein with a transition from a high fork angle near the apex through a low fork angle in the middle region of the leaf and then back to a high angle on the basal most pair of secondaries. The middle and more basal secondary veins have a broad upward curving path as they approach the margin, while the upper secondaries have a more pronounced and quicker upturn. The veins loop upwards towards the next secondary up, before joining with a fork from the next secondary up or with a tertiary vein. There are typically no intersecondary veins forking from the primary vein, but the secondaries typically have several branches that fork at low angles from the lower sides. 
The tertiary veins can run the full space between two secondaries, branch, or form orthogonal junctions and polygonal areolae. Similarly the quaternary veins are branched and also form a polygonal reticulum. R. hickeyi Leaves of Republica hickeyi are wide elliptic in outline with an apparently thick leathery texture in life. The leaf base is a narrow v-shape in outline while the apex is broad and slightly pointed. The petiole is thick, transitioning into the base of the primary vein, which gradually narrows from leaf base to apex. In the only specimen known to Wolfe and Wehr, there are eight secondary veins on one side of the primary, and nine secondaries on the opposite side. The thinner basal secondary pair both branch from the primary at an angle of around 50° before taking rather irregular paths towards the leaf margin, curving upwards and merging with tertiary veins below the next secondary up. The middle secondaries fork from the primary vein at increasing degrees of angle basally to apically, shifting from 45° up to 55°. The tertiary veins form a reticulate vein structure between the secondaries; the quaternaries are similarly reticulate, typically forming into quadrangular and pentagonal shapes, with quinternary veinlets forming areolae enclosing a freely ending veinlet that may be unbranched or singularly branched. R. kummerensis The leaves of Republica kummerensis are obovate in general outline, with a more elongate outline than the proposed ancestral R. litseafolia: R. litseafolia typically has a length:width ratio of less than 2:1, while R. kummerensis is more than 2:1. The general size range reported by Wolfe is between long and wide with between 9 and 10 pairs of secondaries. The bases of R. kummerensis are most frequently broadly rounded in shape, with rare specimens showing a more cordate base. Where they are known, the petioles are between in length. 
The secondaries branch from the primary at irregularly spaced intervals with departure angles between 40° and 60°, a greater range than seen in either R. hickeyi or R. litseafolia. Additionally, R. kummerensis has frequent intersecondary veins branching from the primary between the secondaries. R. litseafolia Leaves of Republica litseafolia range between long and wide, with an obovate outline different from the elliptical outline of R. hickeyi. The apex is usually acutely pointed, while the bases range between cordate and wedge-like cuneate. The stout petiole transitions into a thick primary vein running up the center of the leaf blade. The leaves typically have ten to twelve pairs of secondaries, 1–3 more than seen in R. hickeyi, which fork from the primary vein irregularly lower in the leaves before transitioning into sub-opposite forking in the upper portion of the leaves. The branch angle for secondaries in the middle section of the leaves is around 50°. Leaves of R. litseafolia also have distinct and well developed branch veins forking off the external or basal sides of the secondaries before curving out towards the margin and then upwards to the next secondary. References External links † Plants described in 1987 Fossil taxa described in 1987 Eocene plants Flora of the Northwestern United States Extinct flora of North America Eocene life of North America Prehistoric angiosperm genera Prehistoric plants of North America Klondike Mountain Formation Puget Group Ione Formation Enigmatic angiosperm taxa
Republica (plant)
Biology
2,450
40,627,582
https://en.wikipedia.org/wiki/Coplanar%20waveguide
Coplanar waveguide is a type of electrical planar transmission line which can be fabricated using printed circuit board technology, and is used to convey microwave-frequency signals. On a smaller scale, coplanar waveguide transmission lines are also built into monolithic microwave integrated circuits. Conventional coplanar waveguide (CPW) consists of a single conducting track printed onto a dielectric substrate, together with a pair of return conductors, one to either side of the track. All three conductors are on the same side of the substrate, and hence are coplanar. The return conductors are separated from the central track by a small gap, which has an unvarying width along the length of the line. Away from the central conductor, the return conductors usually extend to an indefinite but large distance, so that each is notionally a semi-infinite plane. Conductor-backed coplanar waveguide (CBCPW), also known as coplanar waveguide with ground (CPWG), is a common variant which has a ground plane covering the entire back-face of the substrate. The ground-plane serves as a third return conductor. Coplanar waveguide was invented in 1969 by Cheng P. Wen, primarily as a means by which non-reciprocal components such as gyrators and isolators could be incorporated in planar transmission line circuits. The electromagnetic wave carried by a coplanar waveguide exists partly in the dielectric substrate, and partly in the air above it. In general, the dielectric constant of the substrate will be different (and greater) than that of the air, so that the wave is travelling in an inhomogeneous medium. In consequence CPW will not support a true TEM wave; at non-zero frequencies, both the E and H fields will have longitudinal components (a hybrid mode). However, these longitudinal components are usually small and the mode is better described as quasi-TEM. 
Application to nonreciprocal gyromagnetic devices Nonreciprocal gyromagnetic devices such as resonant isolators and differential phase shifters depend on a microwave signal presenting a rotating (circularly polarized) magnetic field to a statically magnetized ferrite body. CPW can be designed to produce just such a rotating magnetic field in the two slots between the central and side conductors. The dielectric substrate has no direct effect on the magnetic field of a microwave signal travelling along the CPW line. For the magnetic field, the CPW is then symmetrical in the plane of the metalization, between the substrate side and the air side. Consequently, currents flowing along parallel paths on opposite faces of each conductor (on the air-side and on the substrate-side) are subject to the same inductance, and the overall current tends to be divided equally between the two faces. Conversely, the substrate does affect the electric field, so that the substrate side contributes a larger capacitance across the slots than does the air side. Electric charge can accumulate or be depleted more readily on the substrate face of the conductors than on the air face. As a result, at those points on the wave where the current reverses direction, charge will spill over the edges of the metalization between the air face and the substrate face. This secondary current over the edges gives rise to a longitudinal (parallel with the line), magnetic field in each of the slots, which is in quadrature with the vertical (normal to the substrate surface) magnetic field associated with the main current along the conductors. If the dielectric constant of the substrate is much greater than unity, then the magnitude of the longitudinal magnetic field approaches that of the vertical field, so that the combined magnetic field in the slots approaches circular polarization. 
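The quasi-TEM behaviour of conventional CPW described above is commonly estimated with the conformal-mapping formulas introduced by Wen: for a centre track of width w with gaps of width s on an effectively infinitely thick substrate, k = w/(w + 2s), the effective permittivity is about (εr + 1)/2, and Z0 = (30π/√εeff)·K(k′)/K(k). A minimal sketch of this estimate, with the complete elliptic integral K computed via the arithmetic-geometric mean; the function names and example dimensions are illustrative only, and the formula neglects conductor thickness and finite substrate height:

```python
import math

def ellip_k(k):
    """Complete elliptic integral of the first kind K(k),
    computed via the arithmetic-geometric mean: K(k) = pi / (2 * AGM(1, k'))."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

def cpw_impedance(w, s, eps_r):
    """Quasi-static characteristic impedance (ohms) and effective permittivity
    of conventional CPW. w: centre-track width, s: gap width (same units),
    eps_r: substrate relative permittivity. Assumes an infinitely thick
    substrate, so the field splits roughly half in air, half in dielectric."""
    k = w / (w + 2.0 * s)            # geometric modulus
    kp = math.sqrt(1.0 - k * k)      # complementary modulus
    eps_eff = (eps_r + 1.0) / 2.0
    z0 = (30.0 * math.pi / math.sqrt(eps_eff)) * ellip_k(kp) / ellip_k(k)
    return z0, eps_eff

# Illustrative dimensions: 1 mm track, 0.5 mm gaps on an eps_r = 4.5 substrate.
z0, eps_eff = cpw_impedance(1.0, 0.5, 4.5)
```

With these example numbers k = 0.5 and the estimate comes out in the low 70s of ohms. Conductor-backed CPW (CBCPW) and thin substrates pull both the effective permittivity and the impedance away from these values, and fuller formulas are needed for those cases.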
Application in solid state physics Coplanar waveguides play an important role in the field of solid state quantum computing, e.g. for the coupling of microwave photons to a superconducting qubit. In particular the research field of circuit quantum electrodynamics was initiated with coplanar waveguide resonators as crucial elements that allow for high field strength and thus strong coupling to a superconducting qubit by confining a microwave photon to a volume that is much smaller than the cube of the wavelength. To further enhance this coupling, superconducting coplanar waveguide resonators with extremely low losses were applied. (The quality factors of such superconducting coplanar resonators at low temperatures can exceed 10⁶ even in the low-power limit.) Coplanar resonators can also be employed as quantum buses to couple multiple qubits to each other. Another application of coplanar waveguides in solid state research is for studies involving magnetic resonance, e.g. for electron spin resonance spectroscopy or for magnonics. Coplanar waveguide resonators have also been employed to characterize the material properties of (high-Tc) superconducting thin films. See also Waveguide (electromagnetism) Microstrip Stripline Post-wall waveguide Telegrapher's equations Via fence References Planar transmission lines Microwave technology Distributed element circuits
Coplanar waveguide
Engineering
1,063
43,587,042
https://en.wikipedia.org/wiki/3-Maleylpyruvic%20acid
3-Maleylpyruvic acid, or 3-maleylpyruvate, is a dicarboxylic acid formed by the oxidative ring opening of gentisic acid by gentisate 1,2-dioxygenase during the metabolism of tyrosine. It is converted into 3-fumarylpyruvate by maleylpyruvate isomerase. References Dicarboxylic acids Diketones
3-Maleylpyruvic acid
Chemistry
93
11,236,095
https://en.wikipedia.org/wiki/Motorola%20Jazz
The Motorola Jazz is a pager produced by Motorola between 1991 and 1993 which uses the FLEX pager protocol. It was available in Slate Gray, Arctic White, Ocean Blue and transparent colors. The Jazz was the smallest messaging pager at the time of its release, ran on a single AAA battery and had a green LCD. The user could also opt for news updates from the service provider. See also Motorola MINITOR pager Motorola StarTAC Motorola DynaTAC Motorola MicroTAC Pager Jazz
Motorola Jazz
Technology
102
33,287,490
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2025
In molecular biology, glycoside hydrolase family 25 is a family of glycoside hydrolases. Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes. Glycoside hydrolase family 25 CAZY GH_25 comprises enzymes with only one known activity: lysozyme (). It has been shown that a number of cell-wall lytic enzymes are evolutionarily related and can be classified into a single family. Two residues, an aspartate and a glutamate, have been shown to be important for the catalytic activity of the Chalaropsis enzyme. These residues, as well as some others in their vicinity, are conserved in all proteins from this family. References EC 3.2.1 Glycoside hydrolase families Protein families
Glycoside hydrolase family 25
Biology
261
12,928,483
https://en.wikipedia.org/wiki/Infra%20Corporation
Infra Corporation is a division of EMC Corporation that produces infraEnterprise, which is a multi-tier web-based IT Service Management software tool. The software is based on ITIL and it implements a number of ITIL processes, including Service Desk management (including Incident Management and Problem Management), Change Management, Release Management, Configuration Management (including Federated CMDB), Availability Management and Service Level Management. The tool also includes a knowledge base module (known as the "knowledge bank"), which complies with principles of Knowledge-Centered Support (KCS). History Infra Corporation was first established in 1991 in Australia, and now has regional head offices in North America, Australia, the UK and Europe and a worldwide network of partners and distributors. Merger and acquisitions Infra was acquired by Hopkinton, Massachusetts-based EMC Corporation on 10 March 2008, in a move viewed by analysts as part of EMC's ongoing strategy to establish itself as an IT management solution provider. VMware acquired Ionix Service Manager (formerly Infra) in 2010 and subsequently re-branded the tool VMware Service Manager. Support for this product began on 1 July 2010, and end of support has been announced for its latest, and believed to be last, version 9.x as of 8 March 2017. In July 2014, it was announced that VMware and IT Service Management software company Alemba had entered into an agreement to hand control of the support and development of VMware Service Manager to Alemba. Under the terms of this agreement, Alemba has taken over all operational aspects of VMware Service Manager, including support, account management and consultancy. Full product support and a development roadmap will now continue past the previous end of availability date of March 2017. Alemba rebranded and relicensed the VMware Service Manager product as vFire Core. 
In December 2014, Alemba announced the release of vFire Core 9.2.0, the first major release of the tool under Alemba's ownership. Awards and recognition In 2002, infraEnterprise was awarded PinkVerify ITIL certification from Pink Elephant, an independent consulting firm specialising in ITIL and PRINCE2. Infra has won a number of awards. In 2007, they were awarded the Network Computing magazine "Helpdesk Product of the Year" for infraEnterprise, and were awarded HDI's Best Business Use of Business Support Technology in 2006 at the 11th Annual Help Desk and IT Support Excellence Awards. References External links Infra Corporation Infra Benelux Infra France VMware Alemba Agreement. Alemba Release vFire Core Alemba Information technology management Dell EMC Defunct software companies of the United States
Infra Corporation
Technology
554
12,283,839
https://en.wikipedia.org/wiki/Chivela%20Pass
The Chivela Pass is a narrow mountain pass in the Sierra Madre Mountains that funnels cooler, drier air from the North American continent, through southern Mexico, into the Pacific. These northeasterly winds, specifically the Tehuano wind, blow periodically across the Isthmus of Tehuantepec in southern Mexico, and offshore over hundreds of miles of the Pacific Ocean. The wind activity forces the upwelling of colder subsurface waters. This strong upwelling brings nutrients from the subsurface layers of the ocean, thereby enhancing the fertility of the offshore waters. This results in strong plankton growth which in turn supports a more bountiful fishery in the region. In extreme circumstances during the winter, truly cold, dense air occasionally flows from the Bay of Campeche in the Gulf of Mexico through the Chivela Pass into the Gulf of Tehuantepec on the Pacific side. These winds can be strong enough to sandblast paint off ships in near-coastal waters. Notes External links The Gulf of Tehuantepec Hurricane Force Wind Event of 30-31 March 2003 The Structure and Evolution of Gap Outflow over the Gulf of Tehuantepec, Mexico Atmospheric dynamics Mountain passes of Mexico Landforms of Oaxaca Rail mountain passes of Mexico
Chivela Pass
Chemistry
261
70,714,373
https://en.wikipedia.org/wiki/John%20L.%20Magee%20%28chemist%29
John Lafayette Magee (October 28, 1914 – December 16, 2005) was an American chemist known for his work on kinetic models of radiation chemistry, especially the Samuel-Magee model for describing radiolysis in solution. Education and career Magee obtained his A.B. at Mississippi College in 1935, M.S. at Vanderbilt University in 1936, and his Ph.D. in chemistry at University of Wisconsin in 1939, under the supervision of Farrington Daniels. He then worked with Henry Eyring at Princeton University during his postdoctoral research. Between 1943 and 1946, he worked at the Los Alamos National Laboratory on the Manhattan Project. Afterwards, he moved to Argonne National Laboratory. In 1948, he joined the Department of Chemistry at University of Notre Dame at the invitation of Milton Burton and became a full professor in 1953. He became the director of the Radiation Laboratory at Notre Dame between 1971 and 1975. He moved to Lawrence Berkeley National Laboratory afterwards, conducting research on the biological effects of ionizing radiation. He retired from Berkeley in 1986. Magee was elected president of the Radiation Research Society for the year 1967, and he became a fellow of the American Physical Society in 1976. Bibliography Paper series Reviews Books See also Milton Burton Spur (chemistry) References 20th-century American chemists Mississippi College alumni Vanderbilt University alumni University of Wisconsin–Madison alumni Princeton University faculty Los Alamos National Laboratory personnel Manhattan Project people Argonne National Laboratory people University of Notre Dame faculty 1914 births 2005 deaths Radiobiologists Theoretical chemists Fellows of the American Physical Society Lawrence Berkeley National Laboratory people People from Franklinton, Louisiana
John L. Magee (chemist)
Chemistry
323
11,757,994
https://en.wikipedia.org/wiki/Yamabe%20problem
The Yamabe problem refers to a conjecture in the mathematical field of differential geometry, which was resolved in the 1980s. It is a statement about the scalar curvature of Riemannian manifolds: By computing a formula for how the scalar curvature of relates to that of , this statement can be rephrased in the following form: The mathematician Hidehiko Yamabe, in the paper , gave the above statements as theorems and provided a proof; however, discovered an error in his proof. The problem of understanding whether the above statements are true or false became known as the Yamabe problem. The combined work of Yamabe, Trudinger, Thierry Aubin, and Richard Schoen provided an affirmative resolution to the problem in 1984. It is now regarded as a classic problem in geometric analysis, with the proof requiring new methods in the fields of differential geometry and partial differential equations. A decisive point in Schoen's ultimate resolution of the problem was an application of the positive energy theorem of general relativity, which is a purely differential-geometric mathematical theorem first proved (in a provisional setting) in 1979 by Schoen and Shing-Tung Yau. There has been more recent work due to Simon Brendle, Marcus Khuri, Fernando Codá Marques, and Schoen, dealing with the collection of all positive and smooth functions such that, for a given Riemannian manifold , the metric has constant scalar curvature. Additionally, the Yamabe problem as posed in similar settings, such as for complete noncompact Riemannian manifolds, is not yet fully understood. The Yamabe problem in special cases Here, we refer to a "solution of the Yamabe problem" on a Riemannian manifold as a Riemannian metric on for which there is a positive smooth function with On a closed Einstein manifold Let be a smooth Riemannian manifold. 
Consider a positive smooth function f, so that the rescaled metric fg is an arbitrary element of the smooth conformal class of g. A standard computation relates the curvature tensors of fg and g; pairing the resulting identity with the appropriate tensor yields an equation whose left-hand side vanishes if g is assumed to be Einstein. If M is assumed to be closed, then one can do an integration by parts, recalling the Bianchi identity, to see that the right-hand side vanishes as well whenever fg has constant scalar curvature. The consequent vanishing of the left-hand side proves the following fact, due to Obata (1971): any metric of constant scalar curvature in the conformal class of an Einstein metric on a closed manifold is itself Einstein. Obata then went on to prove that, except in the case of the standard sphere with its usual constant-sectional-curvature metric, the only constant-scalar-curvature metrics in the conformal class of an Einstein metric (on a closed manifold) are constant multiples of the given metric. The proof proceeds by showing that the gradient of the conformal factor is actually a conformal Killing field. If the conformal factor is not constant, following flow lines of this gradient field, starting at a minimum of the conformal factor, allows one to show that the manifold is conformally related to a round cylinder, and hence has vanishing Weyl curvature. The non-compact case A closely related question is the so-called "non-compact Yamabe problem", which asks: Is it true that on every smooth complete Riemannian manifold (M, g) which is not compact, there exists a metric that is conformal to g, has constant scalar curvature and is also complete? The answer is no, due to known counterexamples. Various additional criteria under which a solution to the Yamabe problem for a non-compact manifold can be shown to exist are known; however, obtaining a full understanding of when the problem can be solved in the non-compact case remains a topic of research. See also Yamabe flow Yamabe invariant References Research articles Textbooks Aubin, Thierry. Some nonlinear problems in Riemannian geometry. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 1998. xviii+395 pp.
Schoen, R.; Yau, S.-T. Lectures on differential geometry. Lecture notes prepared by Wei Yue Ding, Kung Ching Chang [Gong Qing Zhang], Jia Qing Zhong and Yi Chao Xu. Translated from the Chinese by Ding and S. Y. Cheng. With a preface translated from the Chinese by Kaising Tso. Conference Proceedings and Lecture Notes in Geometry and Topology, I. International Press, Cambridge, MA, 1994. v+235 pp. Struwe, Michael. Variational methods. Applications to nonlinear partial differential equations and Hamiltonian systems. Fourth edition. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics], 34. Springer-Verlag, Berlin, 2008. xx+302 pp. Riemannian geometry Mathematical problems
Yamabe problem
Mathematics
1,013
11,798,914
https://en.wikipedia.org/wiki/Phyllachora%20graminis
Phyllachora graminis is a plant pathogen infecting wheat. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Wheat diseases Phyllachorales Taxa named by Christiaan Hendrik Persoon Fungus species
Phyllachora graminis
Biology
57
42,828,447
https://en.wikipedia.org/wiki/Selig%20Hecht
Selig Hecht (February 8, 1892 – September 18, 1947) was an American physiologist who studied photochemistry in photoreceptor cells. Biography Hecht was born into a Jewish family in Glogau, then in the German Empire (now Głogów, Poland), the son of Mandel Hecht and Mirel Mresse. The family migrated to the United States in 1898, settling in New York City. In June 1917 Hecht received his Ph.D. and married Celia Huebschmann. Their daughter Maressa was born in 1924. He became professor of biophysics at Columbia University in 1928. Work Hecht began his study into light sensitivity with clams (Mya arenaria) and insects. His specialty was photochemistry, the kinetics of the reactions initiated by light in the receptors. He made contributions to the knowledge of dark adaptation, visual acuity, brightness discrimination, color vision, and the mechanism of the visual threshold. He spent time as a post-doctoral researcher with the group of Edward Charles Cyril Baly at the University of Liverpool, UK. Baly was a pioneer in the application of the technique of spectroscopy in chemistry, and Hecht took this further by applying it to biological molecules. Hecht's responsibility in showing the protein character of rhodopsin was recounted by historians of protein science: Identification of visual purple as a protein of high molecular weight ...[came] from the work of Selig Hecht at Columbia University in New York, begun in 1937. Ultracentrifugation was one of the methods he used for characterization and this produced an added dividend, demonstrating that the complex absorption of the 'pigment' (suggesting the possibility of many components) sedimented in toto with the protein. By this time the carotenoid prosthetic group had been discovered as the source of colour by George Wald and Hecht pointed out that this meant that the protein had to be a conjugated protein, with the chromophore firmly attached. According to biographer Pirenne, Hecht was a "brilliant lecturer and expositor."
Pirenne continues, The lack of synthesis discernible in present-day knowledge and teaching perturbed him, and he took an active interest in all the human implications of science. He dealt with persons and ideas on the basis of their intrinsic worth,... In 1941, The Optical Society of America awarded him the Frederic Ives Medal, the Society's highest honor. Explaining the atom When World War II ended with the use of atomic weapons which had been developed in secret by the Manhattan Project, Hecht was concerned that the American public was uninformed about the development of this new source of energy. He wrote a book, Explaining the Atom (1947), to educate the public. He wrote, So long as one supposes this business is mysterious and secret, one cannot have a just evaluation of our possessions and security. Only by understanding the basis and development of atomic energy can one judge the legislation and foreign policy that concern it. In a 1947 review in the New York Times, Stephen Wheeler wrote that it was "by all odds the best book on atomic energy so far to be published for the ordinary reader." Similarly, James J. Jelinek wrote that it was an "invaluable contribution to the layman." He credits Hecht with "conveying to the layman the intellectual drama" of the development. Jelinek asserts that the book is "profoundly provocative in its political and sociological implications." After Hecht died, a second edition was issued in 1959 by Eugene Rabinowitch. Both editions were recommended by George Gamow. Selected publications Hecht, S. (1937) "Rods, cones, and the chemical basis of vision", Physiological Reviews 17: 239–89. Hecht, S. & Pickels, E.G.
(1938) "The sedimentation constant of visual purple", Proceedings of the National Academy of Sciences of the USA 24: 172–6. References 1892 births 1947 deaths American physiologists Columbia University faculty Emigrants from the German Empire to the United States Jewish American scientists Jewish biologists Photochemists City College of New York alumni Harvard University alumni Creighton University faculty
Selig Hecht
Chemistry
881
3,220,879
https://en.wikipedia.org/wiki/Metal%20aromaticity
Metal aromaticity or metalloaromaticity is the concept of aromaticity, found in many organic compounds, extended to metals and metal-containing compounds. The first experimental evidence for the existence of aromaticity in metals was found in aluminium cluster compounds of the type MAl4−, where M stands for lithium, sodium or copper. These anions can be generated in a helium gas by laser vaporization of an aluminium / lithium carbonate composite or a copper or sodium / aluminium alloy, separated and selected by mass spectrometry and analyzed by photoelectron spectroscopy. The evidence for aromaticity in these compounds is based on several considerations. Computational chemistry shows that these aluminium clusters consist of a tetranuclear Al42− plane and an M+ counterion at the apex of a square pyramid. The Al42− unit is perfectly planar and is not perturbed by the presence of the counterion or even the presence of two counterions in the neutral compound M2Al4. In addition, its HOMO is calculated to be a doubly occupied delocalized pi system, making it obey Hückel's rule. Finally, a match exists between the calculated values and the experimental photoelectron values for the energy required to remove the first 4 valence electrons. The first fully metal aromatic compound was a cyclogallane with a Ga32− core, discovered by Gregory Robinson in 1995. D-orbital aromaticity is found in trinuclear tungsten and molybdenum metal clusters generated by laser vaporization of the pure metals in the presence of oxygen in a helium stream. In these clusters the three metal centers are bridged by oxygen and each metal has two terminal oxygen atoms. The first signal in the photoelectron spectrum corresponds to the removal of the valence electron with the lowest energy in the anion to the neutral compound. This energy turns out to be comparable to that of bulk tungsten trioxide and molybdenum trioxide.
This photoelectron signal is also broad, which suggests a large difference in conformation between the anion and the neutral species. Computational chemistry shows that the anions and dianions are ideal hexagons with identical metal-to-metal bond lengths. Tritantalum oxide clusters (Ta3O3−) also are observed to exhibit possible D-orbital aromaticity. The molecules discussed thus far only exist diluted in the gas phase. A study exploring the properties of a compound formed in water from sodium molybdate (Na2MoO4) and iminodiacetic acid also revealed evidence of aromaticity, but this compound has actually been isolated. X-ray crystallography showed that the sodium atoms are arranged in layers of hexagonal clusters akin to pentacenes. The sodium-to-sodium bond lengths are unusually short (327 pm versus 380 pm in elemental sodium) and, like benzene, the ring is planar. In this compound each sodium atom has a distorted octahedral molecular geometry with coordination to molybdenum atoms and water molecules. The experimental evidence is supported by computed NICS aromaticity values. See also References Cluster chemistry Chemical bonding
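The electron count invoked for the aluminium clusters is easy to verify: Hückel's rule asks for 4n + 2 delocalized pi electrons (n = 0, 1, 2, ...), and a doubly occupied pi HOMO gives 2 electrons, the n = 0 case. A minimal sketch of the counting rule (illustrative, not a quantum-chemical calculation):

```python
def satisfies_huckel(pi_electrons: int) -> bool:
    """Hückel's rule: 4n + 2 delocalized pi electrons (n = 0, 1, 2, ...)."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

print(satisfies_huckel(2))  # True  - the doubly occupied pi HOMO described above
print(satisfies_huckel(6))  # True  - benzene
print(satisfies_huckel(4))  # False - an antiaromatic electron count
```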
Metal aromaticity
Physics,Chemistry,Materials_science
627
8,593,160
https://en.wikipedia.org/wiki/Forisome
Forisomes are proteins occurring in the sieve tubes of Fabaceae. Their molecules are about 1–3 μm wide and 10–30 μm long. They expand and contract anisotropically in response to changes of electric field, pH, or concentration of Ca2+ ions. Unlike most other moving proteins, the change is not dependent on ATP. Forisomes function as valves in sieve tubes of the phloem system, by reversibly changing shape between low-volume ordered crystalloid spindles and high-volume disordered spherical conformations. The change from ordered to disordered conformation involves tripling of the protein's volume, loss of the birefringence present in the crystalline phase, 120% radial expansion and 30% longitudinal shrinkage. In Vicia it was shown that forisomes are associated with the endoplasmic reticulum at sieve plates. There is evidence that forisome behavior could depend on Ca2+ changes provoked by Ca2+-permeable ion channels, located on the endoplasmic reticulum and plasma membrane of sieve elements, which would thus be responsible for the shape changes. Forisomes have possible applications as biomimetic smart materials (e.g. valves in microdevices) or smart composite materials. References External links Forisome: A smart plant protein inside a phloem system Forisome based biomimetic smart materials Forisome Protein, a Key to Biomimetic Materials Motor proteins Smart materials Plant proteins
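The dimensional changes quoted above are mutually consistent, as a quick check with an idealized cylinder model shows (the cylinder is an assumption for illustration; the expansion and shrinkage factors are the figures from the text):

```python
# Idealized cylinder model of a forisome spindle.
radius_factor = 1.0 + 1.20   # 120% radial expansion
length_factor = 1.0 - 0.30   # 30% longitudinal shrinkage

# The volume of a cylinder scales as r^2 * L:
volume_factor = radius_factor ** 2 * length_factor
print(f"volume change: x{volume_factor:.2f}")  # about 3.4, i.e. roughly a tripling
```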
Forisome
Chemistry,Materials_science,Engineering
316
31,638,066
https://en.wikipedia.org/wiki/Media%20control%20symbols
In digital electronics, analogue electronics and entertainment, the user interface may include media controls, transport controls or player controls, to enact and change or adjust the process of video playback, audio playback, and the like. These controls are commonly depicted as widely known symbols found in a multitude of products, exemplifying what is known as dominant design. Symbols Media control symbols are commonly found on both software and physical media players, remote controls, and multimedia keyboards. Their application is described in ISO/IEC 18035. The main symbols date back to the 1960s, with the Pause symbol having reportedly been invented at Ampex during that decade for use on reel-to-reel audio recorder controls, due to the difficulty of translating the word "pause" into some languages used in foreign markets. The Pause symbol was designed as a combination of the existing square Stop symbol and the caesura, and was intended to evoke the concept of an interruption or "stutter stop". In popular culture Consumer products The Play symbol is arguably the most widely used of the media control symbols. In many ways, this symbol has become synonymous with music culture and more broadly the digital download era. As such, there are now a multitude of items such as T-shirts, posters, and tattoos that feature this symbol. Similar cultural references can be observed with the Power symbol, which is especially popular among video gamers and technology enthusiasts. Branding Media symbols can be found on an array of advertisements: from live music venues to streaming services. In 2012, Google rebranded its digital download store to Google Play, using the Play symbol in its logo. The Play symbol has also served as a logo for YouTube since 2017. Television station owners Morgan Murphy Media and TEGNA have begun to institute the Play symbol into the logos of their stations to further connect their websites to their over-the-air television presences.
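As a concrete reference, Unicode encodes most of these symbols in its Miscellaneous Technical block. The code points below are standard Unicode assignments (they are independent of ISO/IEC 18035, which describes the symbols' application rather than their encoding):

```python
import unicodedata

# Media control symbols in Unicode's Miscellaneous Technical block.
MEDIA_SYMBOLS = {
    0x23F5: "play",            # ⏵
    0x23F8: "pause",           # ⏸
    0x23F9: "stop",            # ⏹
    0x23FA: "record",          # ⏺
    0x23E9: "fast-forward",    # ⏩
    0x23EA: "rewind",          # ⏪
    0x23ED: "next track",      # ⏭
    0x23EE: "previous track",  # ⏮
}

for cp, meaning in MEDIA_SYMBOLS.items():
    print(f"U+{cp:04X} {chr(cp)}  {meaning:15s} {unicodedata.name(chr(cp))}")
```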
Use on appliances and other mechanical devices In recent years, there has been a proliferation of electronics that use media control symbols in order to represent the Run, Stop, and Pause functions. Likewise, user interface programming pertaining to these functions has also been influenced by that of media players. For example, some washers and dryers with an illuminated Play/pause button are programmed such that it stays lit when the appliance is running. A line of Philips pasta makers has the Play/pause button for controlling the pasta-making process. See also List of international common standards Power symbol Miscellaneous Technical References Audio electronics Symbols Video hardware
Media control symbols
Mathematics,Engineering
499
21,115,944
https://en.wikipedia.org/wiki/Trypanosomiasis%20vaccine
A Trypanosomiasis vaccine is a vaccine against trypanosomiasis. No effective vaccine currently exists, but development of a vaccine is the subject of current research. The Bill & Melinda Gates Foundation has been involved in funding research conducted by the Sabin Vaccine Institute and others. There are many obstacles to development of such a vaccine. One obstacle is variant surface glycoprotein which makes it difficult for the immune system to recognize the infectious organism. Also, Trypanosoma brucei has a direct inhibitory effect upon B cells. It has been suggested that these challenges could be overcome by a vaccine against the initial antigens, or generating an immune response against the cysteine protease (for example, cruzipain). An effective vaccine was achieved in 2021 using a mouse model of infection with Trypanosoma vivax. See also Trypanocidal agent References Vaccines
Trypanosomiasis vaccine
Biology
183
15,807,148
https://en.wikipedia.org/wiki/Chelates%20in%20animal%20nutrition
Chelates in animal feed is jargon for metalloorganic compounds added to animal feed. The compounds provide sources of various metals that improve the health or marketability of the animal. Typical metal salts are derived from cobalt, copper, iron, manganese, and zinc. The objective of supplementation with trace minerals is to avoid a variety of deficiency diseases. Trace minerals carry out key functions in relation to many metabolic processes, most notably as cofactors for enzymes and hormones, and are essential for optimum health, growth and productivity. For example, supplementary minerals help ensure good growth, bone development, feathering in birds, hoof, skin and hair quality in mammals, enzyme structure and functions, and appetite. Deficiency of trace minerals affects many metabolic processes and so may be manifested by different symptoms, such as poor growth and appetite, reproductive failures, impaired immune responses, and general ill-thrift. From the 1950s to the 1990s most trace mineral supplementation of animal diets was in the form of inorganic minerals, and these largely eradicated associated deficiency diseases in farm animals. Work on fertility and reproductive disease in dairy cattle highlights that organic forms of Zn are retained better than inorganic sources and so may provide greater benefit in disease prevention, notably against mastitis and lameness. Animals are thought to better absorb, digest, and use mineral chelates than inorganic minerals or simple salts. In theory lower concentrations of these minerals can be used in animal feeds. In addition, animals fed chelated sources of essential trace minerals excrete lower amounts in their faeces, and so there is less environmental contamination. History and terminology Since the 1950s, animal feeds have been supplemented with a variety of trace minerals such as copper (Cu), iron (Fe), iodine (I), manganese (Mn), molybdenum (Mo), selenium (Se), and zinc (Zn).
Initially, such supplementation was in the form of inorganic salts of these trace elements, e.g. copper(II) sulfate. Chelated minerals were first developed in the early 1970s, but saw more growth in the 1980s and 1990s. Trace mineral chelates have been shown in some cases to be more efficient than inorganic minerals in meeting the nutritional needs of farm animals. In other cases, however, chelates offer no advantage. Terminology "Essential metals" usually refers to ions that are components of enzymes that are required for growth. Only small amounts are typically required, but their deficiency leads to disease and death. Some trace elements are molybdenum (MoO42-), cobalt (Co2+), and copper (Cu2+). Illustrative biomolecules containing these elements are, respectively, xanthine oxidase, vitamin B12, and azurin. Some metals are more abundant in nature, such as zinc (as Zn2+), iron (as Fe2+ and Fe3+), and magnesium (as Mg2+). Some trace elements are not metals, such as selenium. "Mineral" is jargon for compounds that contain metal ions, more specifically "inorganic nutrients, usually required in small amts. from less than 1 to 2500 mg per day". "Chelate" is jargon for metal complexes of chelating agents. Chelates in this context are organic molecules, normally consisting of 2 organic parts with an essential trace mineral occupying a central position and held in place by covalent bonding. "Chelating agents" are ligands that bind metal ions through more than one bond. Most chelating agents are organic compounds, e.g., EDTA4-. Metal chelate formulations often contain 10-20% of the metal. A variety of chelating agents are used, such as peptides and amino acids derived from hydrolysed soy proteins, which form amino acid complexes. Research The utilisation of chelated copper, including copper-lysine formulations, is higher than that of inorganic copper sulfate when fed to rats in the presence and absence of elemental Zn or Fe.
The data suggest that, unlike inorganic Cu, organic Cu chelates exhibit absorption and excretion mechanisms that do not interfere with Fe. Copper chelate also achieved higher liver Zn, suggesting less interference at gut absorption sites in comparison with the other forms of Cu. The effects of organic zinc sources on performance, zinc status, carcass, meat, and claw quality in fattening bulls have been studied. One study compared a Zn chelate, a Zn polysaccharide complex and inorganic zinc oxide (ZnO) in bull beef cattle, and concluded that the organic forms resulted in some improvement in hoof claw quality. The bioavailability of Cu and Zn chelates in sheep has been compared to the inorganic sulfate forms, at "low" and "high" supplementation rates. Copper and Zn chelates at the lower rates caused significantly greater increases in blood plasma concentrations than the corresponding treatments with Zn sulfate (p<0.05) and Cu sulfate (p<0.01). In addition, zinc chelate supplementation resulted in significantly greater hoof and horn Zn content than did Zn sulfate (p<0.05). At the "low" supplementation rate, zinc chelate achieved better hoof quality than Zn sulfate (p<0.05). The data suggest that Cu and Zn chelates are more readily absorbed and more easily deposited in key tissues such as hooves, in comparison with inorganic Zn forms. In weaned piglets, various supplementation rates of organic Zn in the form of a chelate or as a polysaccharide complex have been evaluated and compared with zinc oxide (ZnO) at 2,000 ppm. Feeding lower concentrations of organic Zn greatly decreased the amount of Zn excreted in comparison with inorganic Zn, without loss of growth performance. Copper chelate in weaned pigs has been compared with inorganic Cu sulfate. Piglet performance was consistently better with organic Cu at 50 to 100 ppm, in comparison with inorganic Cu at 250 ppm.
In addition, organic Cu increased Cu absorption and retention, and decreased Cu excretion by 77% and 61%, respectively, compared with 250 ppm inorganic Cu. The effects of an Mg chelate in broiler chickens have been compared with magnesium oxide and an unsupplemented control group. Diets for fattening chickens are not normally supplemented with Mg, but this study indicated positive effects on performance and meat quality. During the first 3 weeks of life, the Mg chelate improved feed efficiency significantly in comparison with both the inorganic MgO and the negative control group (p<0.05). Thigh meat pH and oxidative deterioration during storage were also studied. The Mg chelate increased thigh meat pH in comparison with the negative control (p<0.05). Mg supplementation significantly reduced chemical indicators (TBARS) of oxidative deterioration in liver and thigh muscle (p<0.01), with Mg chelate significantly more efficient than MgO (p<0.01). The data suggest that organic Mg in the form of a chelate is capable of reducing oxidation, and so improving chicken meat quality. A Zn chelate supplement was compared with zinc sulfate in broiler chickens. Weight gain and feed intake increased quadratically (p<0.05) with increasing Zn concentrations from the chelate and linearly with Zn sulfate. The relative bioavailability of the Zn chelate was 183% and 157% of Zn sulfate for weight gain and tibia Zn, respectively. The authors concluded that the supplemental concentration of Zn required in corn-soy diets for broilers from 1–21 days of age would be 9.8 mg/kg diet as Zn chelate and 20.1 mg/kg diet as Zn sulfate, respectively. The effects of replacing inorganic minerals with organic minerals in broiler chickens have been studied. One group of chickens received inorganic sulfates of Cu (12 ppm), Fe (45 ppm), Mn (70 ppm) and Zn (37 ppm) and their performance was compared to a similar group supplemented with chelates of Cu (2.5 ppm), Fe, Mn, and Zn (all at 10 ppm).
There were no differences in performance between the birds fed the high inorganic minerals and the birds fed the low organic chelates. Faecal concentrations of Cu, Fe, Mn and Zn were 55%, 73%, 46% and 63%, respectively, of control birds fed inorganic minerals. A study compared inorganic and organic mineral supplementation in broiler chickens. Control birds were fed Cu, Fe, Mn, Se, and Zn in inorganic forms (15 ppm Cu from sulfate; 60 ppm Fe from sulfate etc.), and compared with three treatment groups supplemented with organic forms. Apart from improved feathering, most likely associated with the presence of organic Se, there were no significant performance differences between birds fed inorganic and organic minerals. The authors concluded that the use of organic trace minerals permits a reduction of at least 33% in supplement rates in comparison with inorganic minerals, without compromising performance. Regulation The European Union is concerned about possible detrimental effects of excess supplementation with trace minerals on the environment or human and animal health, and in 2003 legislated a reduction in permitted feed concentrations of several trace metals (Co, Cu, Fe, Mn and Zn). Further reading References SCAN (2003a). Opinion of the Scientific Committee for Animal Nutrition on the use of copper in feedingstuffs. SCAN (2003b). Opinion of the Scientific Committee for Animal Nutrition on the use of zinc in feedingstuffs. Commission Regulation (EC) No 1334/2003 of 25 July 2003 amending the conditions for authorisation of a number of additives in feedingstuffs belonging to the group of trace elements. Official Journal of the European Union, 26.7.2003. E. McCartney (2008). Trace minerals in poultry nutrition – sourcing safe minerals, organically? World Poultry. D. Wilde (2006). Influence of macro and micro minerals in the peri-parturient period on fertility in dairy cattle. Animal Reproduction. Animal nutrition Animal testing Animal feed
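As a quick arithmetic check on the broiler study that fed inorganic sulfates of Cu (12 ppm), Fe (45 ppm), Mn (70 ppm) and Zn (37 ppm) against chelates of Cu (2.5 ppm) and Fe, Mn, Zn (10 ppm each), the implied reductions in supplementation rate can be computed directly (the percentages below are derived here, not reported in the source):

```python
# Feed concentrations (ppm) from the broiler study described in the text.
inorganic = {"Cu": 12, "Fe": 45, "Mn": 70, "Zn": 37}   # sulfate forms
chelated  = {"Cu": 2.5, "Fe": 10, "Mn": 10, "Zn": 10}  # chelated forms

for metal in inorganic:
    reduction = 100 * (1 - chelated[metal] / inorganic[metal])
    print(f"{metal}: {inorganic[metal]} ppm -> {chelated[metal]} ppm "
          f"({reduction:.0f}% lower supplementation rate)")
```

Every reduction computed this way is well above the "at least 33%" figure reported in the other broiler study mentioned above.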
Chelates in animal nutrition
Chemistry,Biology
2,128
11,013,368
https://en.wikipedia.org/wiki/British%20Pharmaceutical%20Codex
The British Pharmaceutical Codex (BPC) was first published in 1907, to supplement the British Pharmacopoeia which although extensive, did not cover all the medicinal items that a pharmacist might require in daily work. Other books existed, such as Squire's, but the BPC was intended to be official, published by the Pharmaceutical Society of Great Britain (PSGB). It laid down standards for the composition of medicines and surgical dressings. Subsequent editions were published in 1911, 1923, 1934, 1949, 1954, 1959, 1963, 1968, and finally 1973. The 1934 edition was described by the British Medical Journal as "one of the most useful reference books available to the medical profession". In 1963 Edward G Feldmann, director of revision for the US National Formulary, described it as "a compilation of highly authoritative and useful therapeutic (actions and doses) information as well as a valuable compendium of recognised standards and specifications". In 1979 a new edition was published with a new title, The Pharmaceutical Codex. The Medicines Commission had recommended in 1972 that the British Pharmacopoeia should henceforth be the only compendium of official standards for medicines in the UK, and the BPC lost its status as an official book. The PSGB remained as the publishers. The current edition is the 12th, published in 1994. References Pharmacology literature 1907 non-fiction books British books Pharmacy in the United Kingdom
British Pharmaceutical Codex
Chemistry
294
34,522,140
https://en.wikipedia.org/wiki/Unicast%20flood
In computer networking, a unicast flood occurs when a switch receives a unicast frame and the switch does not know that the addressee is on any particular switch port. Since the switch has no information regarding which port, if any, the addressee might be reached through, it forwards the frame through all ports aside from the one through which the frame was received. Background Unicast refers to a one-to-one transmission of a frame from one node in a network to another. When a switch receives a unicast frame with a destination address not in the switch's forwarding table, the frame is treated like a broadcast frame and sent to all network segments to which it is attached, except the one from which it received the frame. Causes The learning process of transparent bridging requires that the switch receive a frame from a device before unicast frames can be forwarded to it. Before any such transmission is received, unicast flooding is used to ensure transmissions reach their intended destinations. This is normally a short-lived condition as receipt typically produces a response that completes the learning process. The process occurs when a device is initially connected to a network segment, or after its address and port identifier is purged from the forwarding information base. An entry is purged when the link goes down on the original port or when it expires due to inactivity (five minutes is the default on many switches). A time limit is necessary because a switch does not necessarily see any indication when a network node is moved or disconnected. When a bridge or switch has no room left in its forwarding information base and so cannot add an entry for a new node, it must forward any frame addressed to that node through all ports except the one on which the frame was received. This is a common problem on networks with many hosts.
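The learn-then-forward logic described above can be sketched as a toy model (illustrative only; real switches also age out entries, handle VLANs, and keep the table in hardware):

```python
class ToySwitch:
    """Minimal model of a transparently bridging (learning) switch."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.fib = {}  # forwarding information base: MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the set of ports the frame is forwarded out of."""
        # Learning: remember which port the source address was seen on.
        self.fib[src_mac] = in_port
        # Forwarding: known destination -> single port; unknown -> flood.
        if dst_mac in self.fib:
            return {self.fib[dst_mac]}
        return self.ports - {in_port}  # unicast flood

sw = ToySwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "A", "B"))  # B unknown: flood out {2, 3, 4}
print(sw.receive(2, "B", "A"))  # A was learned on port 1: forward to {1}
print(sw.receive(1, "A", "B"))  # B now learned on port 2: forward to {2}
```

The first frame floods; the reply completes the learning process, after which forwarding is point-to-point, matching the "short-lived condition" described above.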
Less common is the artificial flooding of address tables in a MAC flooding attack. Another common cause is a host with an ARP cache timeout longer than the timeout of the forwarding information base (FIB) in a switch: the switch forgets which port connects to the target before the host forgets the MAC address of the target. This can be prevented by configuring the switch with a FIB timeout longer than the ARP cache timeouts of nodes on its network. When a node needs to send a frame to a host after its corresponding ARP cache entry expires, it must first send an ARP broadcast frame, which the switch must forward through all ports, to discover the (current) MAC address of the host. Misconfigured network features may lead to unicast flooding as well. If there are two layer-2 paths from Host A to B and Host A uses path 1 to talk to Host B, but Host B uses path 2 to respond to Host A, then intermediate switches on path 1 will never learn the destination MAC address of Host B and intermediate switches on path 2 will never learn the destination MAC address of Host A. A final cause of unicast floods is topology changes. When a link state changes on a network port that participates in rapid spanning tree, the address cache on that switch will be flushed, causing all subsequent frames to be flooded out of all ports until the addresses are relearned by the switch. Remedies A feature blocking unicast floods is available on Cisco switches but is not enabled by default. After ensuring that timeouts and security features have been configured to maintain table entries on client access ports longer than typical host ARP cache timeouts, this command is used to quiet down the unicast floods on those ports: Switch(config-if)# switchport block unicast Other techniques involve isolating hosts at Layer 2. Ports configured as protected ports are forbidden to communicate with other protected ports.
Private VLANs implement port isolation such that members of the VLAN are only allowed to communicate via a designated uplink and are not allowed to talk to other members of the VLAN. Effects on Networks When a network is experiencing unicast flooding, network performance may be degraded. In one measurement of a bridge before and after the size of its address cache was adjusted, 80% of the frames were flooded out, never to be received by the destination address, while only 20% were valid traffic. In high-volume networks, the flooded traffic may cause ports to saturate, leading to packet loss and high latency. Another side effect of exhausted address tables is the compromise of data. The security considerations are discussed under MAC flooding, one of several causes of unicast floods. If an end user is running a packet sniffer, the flooded frames can be captured and viewed. See also Broadcast, unknown-unicast and multicast traffic References Internet architecture
Unicast flood
Technology
992
543,568
https://en.wikipedia.org/wiki/Lorentz%20covariance
In relativistic physics, Lorentz symmetry or Lorentz invariance, named after the Dutch physicist Hendrik Lorentz, is an equivalence of observation or observational symmetry due to special relativity implying that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame. It has also been described as "the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space". Lorentz covariance, a related concept, is a property of the underlying spacetime manifold. Lorentz covariance has two distinct, but closely related meanings: A physical quantity is said to be Lorentz covariant if it transforms under a given representation of the Lorentz group. According to the representation theory of the Lorentz group, these quantities are built out of scalars, four-vectors, four-tensors, and spinors. In particular, a Lorentz covariant scalar (e.g., the space-time interval) remains the same under Lorentz transformations and is said to be a Lorentz invariant (i.e., they transform under the trivial representation). An equation is said to be Lorentz covariant if it can be written in terms of Lorentz covariant quantities (confusingly, some use the term invariant here). The key property of such equations is that if they hold in one inertial frame, then they hold in any inertial frame; this follows from the result that if all the components of a tensor vanish in one frame, they vanish in every frame. This condition is a requirement according to the principle of relativity; i.e., all non-gravitational laws must make the same predictions for identical experiments taking place at the same spacetime event in two different inertial frames of reference. On manifolds, the words covariant and contravariant refer to how objects transform under general coordinate transformations. Both covariant and contravariant four-vectors can be Lorentz covariant quantities. 
Local Lorentz covariance, which follows from general relativity, refers to Lorentz covariance applying only locally in an infinitesimal region of spacetime at every point. There is a generalization of this concept to cover Poincaré covariance and Poincaré invariance. Examples In general, the (transformational) nature of a Lorentz tensor can be identified by its tensor order, which is the number of free indices it has. No indices implies it is a scalar, one implies that it is a vector, etc. Some tensors with a physical interpretation are listed below. The sign convention of the Minkowski metric is used throughout the article. Scalars Spacetime interval Proper time (for timelike intervals) Proper distance (for spacelike intervals) Mass Electromagnetism invariants D'Alembertian/wave operator Four-vectors 4-displacement 4-position 4-gradient which is the 4D partial derivative: 4-velocity where 4-momentum where and is the rest mass. 4-current where 4-potential Four-tensors Kronecker delta Minkowski metric (the metric of flat space according to general relativity) Electromagnetic field tensor (using a metric signature of + − − −) Dual electromagnetic field tensor Lorentz violating models In standard field theory, there are very strict and severe constraints on marginal and relevant Lorentz violating operators within both QED and the Standard Model. Irrelevant Lorentz violating operators may be suppressed by a high cutoff scale, but they typically induce marginal and relevant Lorentz violating operators via radiative corrections. So, we also have very strict and severe constraints on irrelevant Lorentz violating operators. Since some approaches to quantum gravity lead to violations of Lorentz invariance, these studies are part of phenomenological quantum gravity. Lorentz violations are allowed in string theory, supersymmetry and Hořava–Lifshitz gravity. 
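The first of the scalars listed above, the spacetime interval, can be checked numerically to be unchanged by a Lorentz boost. This sketch uses units with c = 1; the sample event coordinates and boost speed are arbitrary choices for the demonstration:

```python
# Sketch: verify numerically that the spacetime interval (a Lorentz scalar)
# is unchanged by a boost along x. Units with c = 1; the event coordinates
# and boost speed below are arbitrary sample values.
import math

def interval(x):
    t, x1, x2, x3 = x
    return t * t - x1 * x1 - x2 * x2 - x3 * x3   # metric signature (+ - - -)

def boost_x(x, v):
    gamma = 1.0 / math.sqrt(1.0 - v * v)         # Lorentz factor
    t, x1, x2, x3 = x
    return (gamma * (t - v * x1), gamma * (x1 - v * t), x2, x3)

event = (2.0, 1.0, 0.5, -0.3)
boosted = boost_x(event, v=0.6)
print(interval(event))    # 2.66
print(interval(boosted))  # 2.66 up to rounding: the interval is invariant
```

The same check fails for a quantity that is not a Lorentz scalar, such as the time coordinate alone, which is the sense in which only covariant combinations are frame-independent.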
Lorentz violating models typically fall into four classes: The laws of physics are exactly Lorentz covariant but this symmetry is spontaneously broken. In special relativistic theories, this leads to phonons, which are the Goldstone bosons. The phonons travel at less than the speed of light. Similar to the approximate Lorentz symmetry of phonons in a lattice (where the speed of sound plays the role of the critical speed), the Lorentz symmetry of special relativity (with the speed of light as the critical speed in vacuum) is only a low-energy limit of the laws of physics, which involve new phenomena at some fundamental scale. Bare conventional "elementary" particles are not point-like field-theoretical objects at very small distance scales, and a nonzero fundamental length must be taken into account. Lorentz symmetry violation is governed by an energy-dependent parameter which tends to zero as momentum decreases. Such patterns require the existence of a privileged local inertial frame (the "vacuum rest frame"). They can be tested, at least partially, by ultra-high energy cosmic ray experiments like the Pierre Auger Observatory. The laws of physics are symmetric under a deformation of the Lorentz or more generally, the Poincaré group, and this deformed symmetry is exact and unbroken. This deformed symmetry is also typically a quantum group symmetry, which is a generalization of a group symmetry. Deformed special relativity is an example of this class of models. The deformation is scale dependent, meaning that at length scales much larger than the Planck scale, the symmetry looks pretty much like the Poincaré group. Ultra-high energy cosmic ray experiments cannot test such models. Very special relativity forms a class of its own; if charge-parity (CP) is an exact symmetry, a subgroup of the Lorentz group is sufficient to give us all the standard predictions. This is, however, not the case. 
Models belonging to the first two classes can be consistent with experiment if Lorentz breaking happens at the Planck scale or beyond it, or even before it in suitable preonic models, and if Lorentz symmetry violation is governed by a suitable energy-dependent parameter. One then has a class of models which deviate from Poincaré symmetry near the Planck scale but still flow towards an exact Poincaré group at very large length scales. This is also true for the third class, which is furthermore protected from radiative corrections as one still has an exact (quantum) symmetry. Even though there is no evidence of the violation of Lorentz invariance, several experimental searches for such violations have been performed during recent years. A detailed summary of the results of these searches is given in the Data Tables for Lorentz and CPT Violation. Lorentz invariance is also violated in QFT assuming non-zero temperature. There is also growing evidence of Lorentz violation in Weyl semimetals and Dirac semimetals.

See also

4-vector
Antimatter tests of Lorentz violation
Fock–Lorentz symmetry
General covariance
Lorentz invariance in loop quantum gravity
Lorentz-violating electrodynamics
Lorentz-violating neutrino oscillations
Planck length
Symmetry in physics

Notes

References

Background information on Lorentz and CPT violation: http://www.physics.indiana.edu/~kostelec/faq.html

Special relativity Symmetry Hendrik Lorentz
Lorentz covariance
Physics,Mathematics
1,523
2,571,938
https://en.wikipedia.org/wiki/Jacob%27s%20staff
The term Jacob's staff refers to several instruments, also known as a cross-staff, ballastella, fore-staff, ballestilla, or balestilha. In its most basic form, a Jacob's staff is a stick or pole with length markings; most staffs are much more complicated than that, and usually contain a number of measurement and stabilization features. The two most frequent uses are: in astronomy and navigation for a simple device to measure angles, later replaced by the more precise sextants; in surveying (and scientific fields that use surveying techniques, such as geology and ecology) for a vertical rod that penetrates or sits on the ground and supports a compass or other instrument. The simplest use of a Jacob's staff is to make qualitative judgements of the height and angle of an object relative to the user of the staff.

In astronomy and navigation

In navigation the instrument is also called a cross-staff and was used to determine angles, for instance the angle between the horizon and Polaris or the sun to determine a vessel's latitude, or the angle between the top and bottom of an object to determine the distance to said object if its height is known, or the height of the object if its distance is known, or the horizontal angle between two visible locations to determine one's point on a map. The Jacob's staff, when used for astronomical observations, was also referred to as a radius astronomicus. With the demise of the cross-staff, in the modern era the name "Jacob's staff" is applied primarily to the device used to provide support for surveyor's instruments.

Etymology

The origin of the name of the instrument is not certain. Some refer to the Biblical patriarch Jacob, specifically in the Book of Genesis. It may also take its name from its resemblance to Orion, referred to by the name of Jacob on some medieval star charts. Another possible source is the Pilgrim's staff, the symbol of St James (Jacobus in Latin).
The name cross staff simply comes from its cruciform shape. History The original Jacob's staff was developed as a single pole device, in the 14th century, that was used in making astronomical measurements. It was first described by the French-Jewish mathematician Levi ben Gerson of Provence, in his "Book of the Wars of the Lord" (translated in Latin as well as Hebrew). He used a Hebrew name for the staff that translates to "Revealer of Profundities", while the term "Jacob's staff" was used by his Christian contemporaries. Its invention was likely due to fellow French-Jewish astronomer Jacob ben Makir, who also lived in Provence in the same period. Attribution to 15th century Austrian astronomer Georg Purbach is less likely, because Purbach was not born until 1423. (Such attributions may refer to a different instrument with the same name.) Its origins may be traced to the Chaldeans around 400 BCE. Although it has become quite accepted that ben Gerson first described Jacob's staff, the British Sinologist Joseph Needham theorizes that the Song dynasty Chinese scientist Shen Kuo (1031–1095), in his Dream Pool Essays of 1088, described a Jacob's staff. Shen was an antiquarian interested in ancient objects; after he unearthed an ancient crossbow-like device from a home's garden in Jiangsu, he realized it had a sight with a graduated scale that could be used to measure the heights of distant mountains, likening it to how mathematicians measure heights by using right-angle triangles. He wrote that when one viewed the whole breadth of a mountain with it, the distance on the instrument was long; when viewing a small part of the mountainside, the distance was short; this, he wrote, was due to the cross piece that had to be pushed further away from the eye, while the graduation started from the further end. Needham does not mention any practical application of this observation. 
During the European Renaissance, the Dutch mathematician and surveyor Adriaan Metius developed his own Jacob's staff; Dutch mathematician Gemma Frisius made improvements to this instrument. In the 15th century, the German mathematician Johannes Müller (called Regiomontanus) made the instrument popular in geodesic and astronomical measurements.

Construction

In the original form of the cross-staff, the pole or main staff was marked with graduations for length. The cross-piece (BC in the drawing to the right), also called the transom or transversal, slides up and down on the main staff. On older instruments, the ends of the transom were cut straight across. Newer instruments had brass fittings on the ends, with holes in the brass for observation. (In marine archaeology, these fittings are often the only components of a cross-staff that survive.) It was common to provide several transoms, each with a different range of angles it would measure; three transoms were common. In later instruments, separate transoms were switched in favour of just one with pegs to indicate the ends. These pegs were mounted in one of several pairs of holes symmetrically located on either side of the transom. This provided the same capability with fewer parts. The transom on Frisius' version had a sliding vane on the transom as an end point.

Usage

The user places one end of the main staff against their cheek, just below the eye. By sighting the horizon at the end of the lower part of the transom (or through the hole in the brass fitting) [B], then adjusting the cross arm on the main arm until the sun is at the other end of the transom [C], the altitude can be determined by reading the position of the cross arm on the scale on the main staff. This value was converted to an angular measurement by looking up the value in a table.

Cross-staff for navigation

The original version was not reported to be used at sea until the Age of Discoveries.
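The sighting procedure just described reduces to right-triangle geometry: with the eye at the end of the staff and the transom slid out to distance d, each transom arm of half-length h subtends atan(h/d), so the full angle between the horizon sight line and the sun sight line is 2·atan(h/d). A minimal sketch (the dimensions are invented, and historical users read the staff's scale and converted it with a table rather than computing an arctangent):

```python
# Sketch of the cross-staff sighting geometry (dimensions are invented; in
# practice the staff's scale reading was converted to an angle via a table).
import math

def altitude_deg(half_transom, d):
    """Angle subtended when the horizon and the sun are sighted at the two
    ends of a transom of half-length `half_transom`, slid to distance `d`
    along the staff from the observer's eye (same length units for both)."""
    return 2.0 * math.degrees(math.atan(half_transom / d))

print(round(altitude_deg(15.0, 60.0), 2))  # 28.07 degrees
print(round(altitude_deg(10.0, 10.0), 1))  # 90.0 (half-length equal to distance)
```

Sliding the transom further from the eye (larger d) shrinks the measured angle, which is why one transom could cover a whole range of altitudes.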
Its use was reported by João de Lisboa in his Treatise on the Nautical Needle of 1514. Johannes Werner suggested the cross-staff be used at sea in 1514 and improved instruments were introduced for use in navigation. John Dee introduced it to England in the 1550s. In the improved versions, the rod was graduated directly in degrees. This variant of the instrument is not correctly termed a Jacob's staff but is a cross-staff. The cross-staff was difficult to use. In order to get consistent results, the observer had to position the end of the pole precisely against his cheek. He had to observe the horizon and a star in two different directions while not moving the instrument when he shifted his gaze from one to the other. In addition, observations of the sun required the navigator to look directly at the sun. This could be an uncomfortable exercise and made it difficult to obtain an accurate altitude for the sun. Mariners took to mounting smoked-glass to the ends of the transoms to reduce the glare of the sun. As a navigational tool, this instrument was eventually replaced, first by the backstaff or quadrant, neither of which required the user to stare directly into the sun, and later by the octant and the sextant. Perhaps influenced by the backstaff, some navigators modified the cross-staff to operate more like the backstaff. Vanes were added to the ends of the longest cross-piece and another to the end of the main staff. The instrument was reversed so that the shadow of the upper vane on the cross piece fell on the vane at the end of the staff. The navigator held the instrument so that he would view the horizon lined up with the lower vane and the vane at the end of the staff. By aligning the horizon with the shadow of the sun on the vane at the end of the staff, the elevation of the sun could be determined. This actually increased the accuracy of the instrument, as the navigator no longer had to position the end of the staff precisely on his cheek.
Another variant of the cross-staff was a spiegelboog, invented in 1660 by the Dutchman, Joost van Breen. Ultimately, the cross-staff could not compete with the backstaff in many countries. In terms of handling, the backstaff was found to be easier to use. However, it has been shown by several authors that in terms of accuracy, the cross-staff was superior to the backstaff. Backstaves were no longer allowed on board Dutch East India Company vessels as of 1731, with octants not permitted until 1748.

In surveying

In surveying, the term Jacob staff refers to a monopod, a single straight rod or staff made of nonferrous material, pointed and metal-clad at the bottom for penetrating the ground. It also has a screw base and occasionally a ball joint on the mount, and is used for supporting a compass, transit, or other instrument. The term cross-staff may also have a different meaning in the history of surveying. While the astronomical cross-staff was used in surveying for measuring angles, two other devices referred to as a cross-staff were also employed.

Cross-head, cross-sight, surveyor's cross or cross - a drum or box shaped device mounted on a pole. It had two sets of mutually perpendicular sights. This device was used by surveyors to measure offsets. Sophisticated versions had a compass and spirit levels on the top. The French versions were frequently eight-sided rather than round.
Optical square - an improved version of the cross-head, the optical square used two silvered mirrors at 45° to each other. This permitted the surveyor to see along both axes of the instrument at once.

In the past, many surveyor's instruments were used on a Jacob's staff. These include:

Cross-head, cross-sight, surveyor's cross or cross
Graphometer
Circumferentor
Holland circle
Miner's dial
Optical square
Surveyor's sextant
Surveyor's target
Abney level

Some devices, such as the modern optical targets for laser-based surveying, are still in common use on a Jacob's staff.
In geology In geology, the Jacob's staff is mainly used to measure stratigraphic thicknesses in the field, especially when bedding is not visible or unclear (e.g., covered outcrop) and when due to the configuration of an outcrop, the apparent and real thicknesses of beds diverge therefore making the use of a tape measure difficult. There is a certain level of error to be expected when using this tool, due to the lack of an exact reference mean for measuring stratigraphic thickness. High-precision designs include a laser able to slide vertically along the staff and to rotate on a plane parallel to bedding. See also Backstaff Cross of St James Pilgrim's staff Tacheometry As a symbol in Scouting: 5th World Scout Jamboree References Further reading Levi ben Gerson and the Cross Staff Revisited, Bernard R Goldstein 14th-century introductions Angle measuring instruments History of astronomy Historical scientific instruments Navigational equipment Song dynasty Surveying instruments Celestial navigation
Jacob's staff
Astronomy
2,283
306,316
https://en.wikipedia.org/wiki/Thioridazine
Thioridazine (Mellaril or Melleril) is a first generation antipsychotic drug belonging to the phenothiazine drug group and was previously widely used in the treatment of schizophrenia and psychosis. The branded product was withdrawn worldwide in 2005 because it caused severe cardiac arrhythmias. However, generic versions are still available in the US. Indications Thioridazine was voluntarily discontinued by its manufacturer, Novartis, worldwide because it caused severe cardiac arrhythmias. However, generics remain on the market in some countries. Its primary use in medicine is for the treatment of schizophrenia. It was also tried with some success as a treatment for various psychiatric symptoms seen in people with dementia, but chronic use of thioridazine and other anti-psychotics in people with dementia is not recommended. Generic forms of thioridazine remain on the market in a few countries, usually with restrictions due to the risk of arrhythmias. For example, in the US, it is restricted to patients who have taken at least 2 other antipsychotics that either failed or caused serious side effects. Side effects Thioridazine prolongs the QTc interval in a dose-dependent manner. It produces significantly less extrapyramidal side effects than most first-generation antipsychotics, likely due to its potent anticholinergic effect. Its use, along with the use of other typical antipsychotics, has been associated with degenerative retinopathies (specifically retinitis pigmentosa). It has a higher propensity for causing anticholinergic side effects coupled with a lower propensity for causing extrapyramidal side effects and sedation than chlorpromazine, but also has a higher incidence of hypotension and cardiotoxicity. It is also known to possess a relatively high liability for causing orthostatic hypotension compared to other antipsychotics. Similarly to other first-generation antipsychotics it has a relatively high liability for causing prolactin elevation. 
It carries a moderate risk of causing weight gain. As with all antipsychotics, thioridazine has been linked to cases of tardive dyskinesia (an often permanent neurological disorder characterised by slow, repetitive, purposeless and involuntary movements, most often of the facial muscles, that is usually brought on by years of continued treatment with antipsychotics, especially the first-generation (or typical) antipsychotics such as thioridazine) and neuroleptic malignant syndrome (a potentially fatal complication of antipsychotic treatment). Blood dyscrasias such as agranulocytosis, leukopenia and neutropenia are possible with thioridazine treatment. Thioridazine is also associated with abnormal retinal pigmentation after many years of use. Thioridazine has been correlated to rare instances of clinically apparent acute cholestatic liver injury.

Pharmacology

Thioridazine has the following binding profile (table not reproduced here). Note: the binding affinities given are towards cloned human receptors unless otherwise specified. Acronyms used: HB – human brain receptor; RC – cloned rat receptor; ND – no data.

Metabolism

Thioridazine is a racemic compound with two enantiomers, both of which are metabolized, according to Eap et al., by CYP2D6 into (S)- and (R)-thioridazine-2-sulfoxide, better known as mesoridazine, and into (S)- and (R)-thioridazine-5-sulfoxide. Mesoridazine is in turn metabolized into sulforidazine. Thioridazine is an inhibitor of CYP1A2 and CYP3A4.

History

The manufacturer Novartis/Sandoz/Wander of the brands of thioridazine, Mellaril in the US and Canada and Melleril in Europe, discontinued the drug worldwide in June 2005.
Generic forms of thioridazine, however, remain on the market in a few countries, usually with restrictions; for example, in the US it is restricted to patients who have taken at least 2 other antipsychotics that either failed or caused serious side effects.

Antibiotic activity

Thioridazine is known to kill extensively drug-resistant tuberculosis and to make methicillin-resistant Staphylococcus aureus sensitive to β-lactam antibiotics. A possible mechanism of action for the drug's antibiotic activity is via the inhibition of bacterial secretion pumps. The β-lactam antibiotic resistance is due to the secretion of β-lactamase, a protein that destroys antibiotics. If the bacteria cannot secrete the β-lactamase, then the antibiotic will be effective. The drug has been successfully used in the treatment of granulomatous amoebic encephalitis in conjunction with more conventional amoebicidal medications.

Synthesis

Note: the same sidechain is used for mesoridazine and sulforidazine. The alkylation of 2-picoline [109-06-8] (1) with formaldehyde gives 2-pyridineethanol [103-74-2] (2). Forming the quat salt with methyl iodide [74-88-4] leads to 2-(2-hydroxyethyl)-1-methylpyridinium iodide [56622-15-2] (3). Catalytic hydrogenation in the presence of hydrochloric acid leads to 2-(2-chloroethyl)-1-methylpiperidine [50846-01-0] (4). Alkylation of 2-methylthiophenothiazine [7643-08-5] (5) in the presence of sodium hydride base completes the synthesis of thioridazine (6).

References

Further reading

External links

Antipsychotic Mellaril Removed from Market Schizophrenia Daily News Blog.

Alpha-1 blockers CYP2D6 inhibitors Drugs developed by Novartis HERG blocker M1 receptor antagonists M2 receptor antagonists M3 receptor antagonists M4 receptor antagonists M5 receptor antagonists Phenothiazines Piperidines Prodrugs Tertiary amines Thioethers Typical antipsychotics
Thioridazine
Chemistry
1,326
180,624
https://en.wikipedia.org/wiki/Vehicle%20dynamics
Vehicle dynamics is the study of vehicle motion, e.g., how a vehicle's forward movement changes in response to driver inputs, propulsion system outputs, ambient conditions, air/surface/water conditions, etc. Vehicle dynamics is a part of engineering primarily based on classical mechanics. It may be applied for motorized vehicles (such as automobiles), bicycles and motorcycles, aircraft, and watercraft. Factors affecting vehicle dynamics The aspects of a vehicle's design which affect the dynamics can be grouped into drivetrain and braking, suspension and steering, distribution of mass, aerodynamics and tires. Drivetrain and braking Automobile layout (i.e. location of engine and driven wheels) Powertrain Braking system Suspension and steering Some attributes relate to the geometry of the suspension, steering and chassis. These include: Ackermann steering geometry Axle track Camber angle Caster angle Ride height Roll center Scrub radius Steering ratio Toe Wheel alignment Wheelbase Distribution of mass Some attributes or aspects of vehicle dynamics are purely due to mass and its distribution. These include: Center of mass Moment of inertia Roll moment Sprung mass Unsprung mass Weight distribution Aerodynamics Some attributes or aspects of vehicle dynamics are purely aerodynamic. These include: Automobile drag coefficient Automotive aerodynamics Center of pressure Downforce Ground effect in cars Tires Some attributes or aspects of vehicle dynamics can be attributed directly to the tires. These include: Camber thrust Circle of forces Contact patch Cornering force Ground pressure Pacejka's Magic Formula Pneumatic trail Radial Force Variation Relaxation length Rolling resistance Self aligning torque Skid Slip angle Slip (vehicle dynamics) Spinout Steering ratio Tire load sensitivity Vehicle behaviours Some attributes or aspects of vehicle dynamics are purely dynamic. 
These include: Body flex Body roll Bump Steer Bundorf analysis Directional stability Critical speed Noise, vibration, and harshness Pitch Ride quality Roll Speed wobble Understeer, oversteer, lift-off oversteer, and fishtailing Weight transfer and load transfer Yaw Analysis and simulation The dynamic behavior of vehicles can be analysed in several different ways. This can be as straightforward as a simple spring mass system, through a three-degree of freedom (DoF) bicycle model, to a large degree of complexity using a multibody system simulation package such as MSC ADAMS or Modelica. As computers have gotten faster, and software user interfaces have improved, commercial packages such as CarSim have become widely used in industry for rapidly evaluating hundreds of test conditions much faster than real time. Vehicle models are often simulated with advanced controller designs provided as software in the loop (SIL) with controller design software such as Simulink, or with physical hardware in the loop (HIL). Vehicle motions are largely due to the shear forces generated between the tires and road, and therefore the tire model is an essential part of the math model. In current vehicle simulator models, the tire model is the weakest and most difficult part to simulate. The tire model must produce realistic shear forces during braking, acceleration, cornering, and combinations, on a range of surface conditions. Many models are in use. Most are semi-empirical, such as the Pacejka Magic Formula model. Racing car games or simulators are also a form of vehicle dynamics simulation. In early versions many simplifications were necessary in order to get real-time performance with reasonable graphics. However, improvements in computer speed have combined with interest in realistic physics, leading to driving simulators that are used for vehicle engineering using detailed models such as CarSim. 
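As a sketch of what a semi-empirical tire model looks like, here is the lateral-force form of Pacejka's Magic Formula. The B, C, D, E coefficients (stiffness, shape, peak, curvature) below are illustrative placeholders, not values fitted to any real tire:

```python
# Sketch of Pacejka's "Magic Formula" for normalized lateral tire force.
# B (stiffness), C (shape), D (peak), E (curvature) are illustrative
# placeholder coefficients, not values fitted to a real tire.
import math

def magic_formula(slip_angle, B=10.0, C=1.9, D=1.0, E=0.97):
    """Lateral force (normalized by the peak D) vs. slip angle in radians."""
    bx = B * slip_angle
    return D * math.sin(C * math.atan(bx - E * (bx - math.atan(bx))))

# Force builds roughly linearly at small slip angles, then saturates near
# the peak instead of growing without bound.
for deg in (1, 4, 8):
    print(deg, round(magic_formula(math.radians(deg)), 3))
```

This saturation is what a tire model must capture to produce realistic shear forces in the braking, acceleration, and cornering regimes mentioned above.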
It is important that the models should agree with real world test results, hence many of the following tests are correlated against results from instrumented test vehicles. Techniques include: Linear range constant radius understeer Fishhook Frequency response Lane change Moose test Sinusoidal steering Skidpad Swept path analysis See also Automotive suspension design Automobile handling Hunting oscillation Multi-axis shaker table Vehicular metrics 4-poster 7 post shaker References Further reading A new way of representing tyre data obtained from measurements in pure cornering and pure braking conditions. Mathematically oriented derivation of standard vehicle dynamics equations, and definitions of standard terms. Vehicle dynamics as developed by Maurice Olley from the 1930s onwards. First comprehensive analytical synthesis of vehicle dynamics. Latest and greatest, also the standard reference for automotive suspension engineers. Vehicle dynamics and chassis design from a race car perspective. Handling, Braking, and Ride of Road and Race Cars. Lecture Notes to the MOOC Vehicle Dynamics of iversity Automotive engineering Automotive technologies Dynamics (mechanics) Vehicle technology
Vehicle dynamics
Physics,Engineering
916
34,010,466
https://en.wikipedia.org/wiki/Biological%20applications%20of%20bifurcation%20theory
Biological applications of bifurcation theory provide a framework for understanding the behavior of biological networks modeled as dynamical systems. In the context of a biological system, bifurcation theory describes how small changes in an input parameter can cause a bifurcation or qualitative change in the behavior of the system. The ability to make dramatic change in system output is often essential to organism function, and bifurcations are therefore ubiquitous in biological networks such as the switches of the cell cycle. Biological networks and dynamical systems Biological networks originate from evolution and therefore have less standardized components and potentially more complex interactions than networks designed by humans, such as electrical networks. At the cellular level, components of a network can include a large variety of proteins, many of which differ between organisms. Network interactions occur when one or more proteins affect the function of another through transcription, translation, translocation, phosphorylation, or other mechanisms. These interactions either activate or inhibit the action of the target protein in some way. While humans build networks with a concern for simplicity and order, biological networks acquire redundancy and complexity over the course of evolution. Therefore, it can be impossible to predict the quantitative behavior of a biological network from knowledge of its organization. Similarly, it is impossible to describe its organization purely from its behavior, though behavior can indicate the presence of certain network motifs. However, with knowledge of network interactions and a set of parameters for the proteins and protein interactions (usually obtained through empirical research), it is often possible to construct a model of the network as a dynamical system. 
In general, for n proteins, the dynamical system takes the following form, where x_i is typically a protein concentration:

dx_i/dt = f_i(x_1, …, x_n)

These systems are often very difficult to solve, so modeling networks as linear dynamical systems is easier. Linear systems contain no products between xs and are always solvable. They have the following form for all i:

dx_i/dt = Σ_j β_ij x_j

Unfortunately, biological systems are often nonlinear and therefore need nonlinear models.

Input/output motifs

Despite the great potential complexity and diversity of biological networks, all first-order network behavior generalizes to one of four possible input-output motifs: hyperbolic or Michaelis–Menten, ultra-sensitive, bistable, and bistable irreversible (a bistability where negative and therefore biologically impossible input is needed to return from a state of high output). Examples of each in biological contexts can be found on their respective pages. Ultrasensitive, bistable, and irreversibly bistable networks all show qualitative change in network behavior around certain parameter values – these are their bifurcation points.

Basic bifurcations in the presence of error

Nonlinear dynamical systems can be most easily understood with a one-dimensional example system where the change in some quantity x (e.g. protein concentration) depends only on itself:

dx/dt = f(x)

Instead of solving the system analytically, which can be difficult or impossible for many functions, it is often quickest and most informative to take a geometric approach and draw a phase portrait. A phase portrait is a qualitative sketch of the differential equation's behavior that shows equilibrium solutions or fixed points and the vector field on the real line. Bifurcations describe changes in the stability or existence of fixed points as a control parameter in the system changes. As a very simple explanation of a bifurcation in a dynamical system, consider an object balanced on top of a vertical beam.
The mass of the object can be thought of as the control parameter, r, and the beam's deflection from the vertical axis is the dynamic variable, x. As r increases, x remains relatively stable. But when the mass reaches a certain point – the bifurcation point – the beam will suddenly buckle, in a direction dependent on minor imperfections in the setup. This is an example of a pitchfork bifurcation. Changes in the control parameter eventually changed the qualitative behavior of the system. Saddle-node bifurcation For a more rigorous example, consider the dynamical system shown in Figure 2, described by the following equation: dx/dt = r − x², where r is once again the control parameter (labeled ε in Figure 2). The system's fixed points are represented by where the phase portrait curve crosses the x-axis. The stability of a given fixed point can be determined by the direction of flow on the x-axis; for instance, in Figure 2, the green point is unstable (divergent flow), and the red one is stable (convergent flow). At first, when r is greater than 0, the system has one stable fixed point and one unstable fixed point. As r decreases, the fixed points move together, briefly collide into a semi-stable fixed point at r = 0, and then cease to exist when r < 0. In this case, because the behavior of the system changes significantly when the control parameter r is 0, 0 is a bifurcation point. By tracing the position of the fixed points in Figure 2 as r varies, one is able to generate the bifurcation diagram shown in Figure 3. Other types of bifurcations are also important in dynamical systems, but the saddle-node bifurcation tends to be the most important in biology. Real biological systems are subject to small stochastic variations that introduce error terms into the dynamical equations, and this usually leads to more complex bifurcations simplifying into separate saddle nodes and fixed points. Two such examples of "imperfect" bifurcations that can appear in biology are discussed below. 
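The saddle-node picture can be checked numerically. The sketch below assumes the normal form dx/dt = r − x², which matches the behavior described here (two fixed points for r > 0, a collision at r = 0, none for r < 0), and classifies stability from the sign of the derivative f′(x) = −2x:

```python
import math

def saddle_node_fixed_points(r):
    """Fixed points of dx/dt = r - x**2 and their stability.

    Returns a list of (x_star, label) tuples, where the label follows
    the sign of f'(x) = -2*x at each fixed point.
    """
    if r < 0:
        return []                      # no real fixed points
    if r == 0:
        return [(0.0, "semi-stable")]  # the two fixed points have collided
    x = math.sqrt(r)
    return [(-x, "unstable"), (x, "stable")]

# Sweep the control parameter through the bifurcation point at r = 0.
for r in (1.0, 0.25, 0.0, -1.0):
    print(r, saddle_node_fixed_points(r))
```

Sweeping r from positive to negative reproduces the bifurcation diagram qualitatively: the stable and unstable branches approach each other, merge, and vanish.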
Note that the saddle node itself in the presence of error simply translates in the (x, r) plane, with no change in qualitative behavior; this can be proven using the same analysis as presented below. Imperfect transcritical bifurcation A common simple bifurcation is the transcritical bifurcation, given by dx/dt = rx − x² and the bifurcation diagram in Figure 4 (black curves). The phase diagrams are shown in Figure 5. Tracking the x-intercepts in the phase diagram as r changes, there are two fixed-point trajectories which intersect at the origin; this is the bifurcation point (intuitively, where the number of x-intercepts in the phase portrait changes). The left fixed point is always unstable, and the right one stable. Now consider the addition of an error term h, where 0 < h << 1. That is, dx/dt = rx − x² − h. The error term translates all the phase portraits vertically, downward if h is positive. In the left half of Figure 6 (x < 0), the black, red, and green fixed points are semi-stable, unstable, and stable, respectively. This is mirrored by the magenta, black, and blue points on the right half (x > 0). Each of these halves thus behaves like a saddle-node bifurcation; in other words, the imperfect transcritical bifurcation can be approximated by two saddle-node bifurcations when close to the critical points, as evident in the red curves of Figure 4. Linear stability analysis Besides observing the flow in the phase diagrams, it is also possible to demonstrate the stability of various fixed points using linear stability analysis. First, find the fixed points in the phase portrait by setting the bifurcation equation to 0: rx − x² − h = 0. Using the quadratic formula to find the fixed points x*: x* = [r ± √(r² − 4h)] / 2 ≈ [r ± r(1 − 2h/r²)] / 2, where in the last step the approximation 4h << r² has been used, which is reasonable for studying fixed points well past the bifurcation point, such as the light blue and green curves in Figure 6. 
Simplifying further, x* ≈ r − h/r and x* ≈ h/r. Next, determine whether the phase portrait curve is increasing or decreasing at the fixed points, which can be assessed by plugging x* into the first derivative of the bifurcation equation, f′(x) = r − 2x. The results are complicated by the fact that r can be both positive and negative; nonetheless, the conclusions are the same as before regarding the stability of each fixed point. This comes as no surprise, since the first derivative contains the same information as the phase diagram flow analysis. The colors in the above solution correspond to the arrows in Figure 6. Imperfect pitchfork bifurcation The buckling beam example from earlier is an example of a pitchfork bifurcation (perhaps more appropriately dubbed a "trifurcation"). The "ideal" pitchfork is shown on the left of Figure 7, given by dx/dt = rx − x³, and r = 0 is where the bifurcation occurs, represented by the black dot at the origin of Figure 8. As r increases past 0, the black dot splits into three trajectories: the blue stable fixed point that moves right, the red stable point that moves left, and a third unstable point that stays at the origin. The blue and red are solid lines in Figure 7 (left), while the black unstable trajectory is the dotted portion along the positive x-axis. As before, consider an error term h, where 0 < h << 1, i.e. dx/dt = rx − x³ + h. Once again, the phase portraits are translated upward an infinitesimal amount, as shown in Figure 9. Tracking the x-intercepts in the phase diagram as r changes yields the fixed points, which recapitulate the qualitative result from Figure 7 (right). More specifically, the blue fixed point from Figure 9 corresponds to the upper trajectory in Figure 7 (right); the green fixed point is the dotted trajectory; and the red fixed point is the bottommost trajectory. Thus, in the imperfect case (h ≠ 0), the pitchfork bifurcation simplifies into a single stable fixed point coupled with a saddle-node bifurcation. 
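The approximation used in the linear stability analysis is easy to check numerically. The sketch below assumes the imperfect transcritical form dx/dt = rx − x² − h and compares the exact quadratic-formula fixed points with the approximations x* ≈ h/r and x* ≈ r − h/r, which hold when 4h << r²:

```python
import math

def transcritical_fixed_points(r, h):
    """Exact fixed points of dx/dt = r*x - x**2 - h (imperfect transcritical).

    Solves x**2 - r*x + h = 0; returns [] when the discriminant is negative
    (no real fixed points).
    """
    disc = r * r - 4.0 * h
    if disc < 0:
        return []
    s = math.sqrt(disc)
    return [(r - s) / 2.0, (r + s) / 2.0]

r, h = 2.0, 0.01            # 4h = 0.04 << r**2 = 4, so the approximation applies
lo, hi = transcritical_fixed_points(r, h)
print(lo, h / r)            # exact vs. approximate lower fixed point
print(hi, r - h / r)        # exact vs. approximate upper fixed point
```

For these parameter values the exact and approximate fixed points agree to roughly one part in 10^4, confirming that the truncation is harmless well past the bifurcation point.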
A linear stability analysis can also be performed here, except using the generalized solution for a cubic equation instead of a quadratic. The process is the same: 1) set the differential equation to zero and find the analytical form of the fixed points x*, 2) plug each x* into the first derivative f′(x), then 3) evaluate stability based on whether f′(x*) is positive or negative. Multistability Combined saddle-node bifurcations in a system can generate multistability. Bistability (a special case of multistability) is an important property in many biological systems, often the result of network architecture containing a mix of positive feedback interactions and ultrasensitive elements. Bistable systems are hysteretic, i.e. the state of the system depends on the history of inputs, which can be crucial for switch-like control of cellular processes. For instance, this is important in contexts where a cell decides whether to commit to a particular pathway; a non-hysteretic response might switch the system on and off rapidly when subject to random thermal fluctuations close to the activation threshold, which can be resource-inefficient. Specific examples in biology Networks with bifurcations in their dynamics control many important transitions in the cell cycle. The G1/S, G2/M, and metaphase–anaphase transitions all act as biochemical switches in the cell cycle. For instance, egg extracts of Xenopus laevis are driven in and out of mitosis irreversibly by positive feedback in the phosphorylation of Cdc2, a cyclin-dependent kinase. In population ecology, the dynamics of food web interaction networks can exhibit Hopf bifurcations. For instance, in an aquatic system consisting of a primary producer, a mineral resource, and an herbivore, researchers found that patterns of equilibrium, cycling, and extinction of populations could be qualitatively described with a simple nonlinear model with a Hopf bifurcation. Galactose utilization in budding yeast (S. 
cerevisiae) is measurable through GFP expression induced by the GAL promoter as a function of changing galactose concentrations. The system exhibits bistable switching between induced and non-induced states. Similarly, lactose utilization in E. coli as a function of thio-methylgalactoside (a lactose analogue) concentration measured by a GFP-expressing lac promoter exhibits bistability and hysteresis (Figure 10, left and right respectively). See also Biochemical switches in the cell cycle Dynamical systems Dynamical systems theory Bifurcation theory Cell cycle Theoretical biology Computational biology Systems biology Cellular model References Bifurcation theory
Biological applications of bifurcation theory
Mathematics
https://en.wikipedia.org/wiki/Sham%20peer%20review
Sham peer review or malicious peer review is a name given to the abuse of a medical peer review process to attack a doctor for personal or other non-medical reasons. The American Medical Association conducted an investigation of medical peer review in 2007 and concluded that while it is easy to allege misconduct and 15% of surveyed physicians indicated that they were aware of peer review misuse or abuse, cases of malicious peer review able to be proven through the legal system are rare. Legal basis Those who maintain that sham peer review is a pervasive problem suggest that the Healthcare Quality Improvement Act (HCQIA) of 1986 allows sham reviews by granting significant immunity from liability to doctors and others who participate in peer reviews. This immunity extends to investigative activities as well as to any associated peer review hearing, whether or not it leads to a disciplinary (or other) action. The definition of a peer review body can be broad, including not only individuals but also (for example, in Oregon), "tissue committees, governing bodies or committees including medical staff committees of a [licensed] health care facility...or any other medical group in connection with bona fide medical research, quality assurance, utilization review, credentialing, education, training, supervision or discipline of physicians or other health care providers." The California legislature framed its statutes so as to allow "aggrieved physicians the opportunity to prove that the peer review to which they were subject was in fact carried out for improper purposes, i.e., for purposes unrelated to assuring quality care or patient safety". These statutes allow that a peer review can be found in court to have been improper due to bad faith or malice, in which case the peer reviewers' immunities from civil liability "fall by the wayside". 
Those who practice sham peer review could draw out the process by legal maneuvering, and the fairness of a peer review that has been unduly delayed has been called into question. Many medical staff bylaws specify guidelines for the timeliness of peer review, in compliance with JCAHO standards. Medical peer review process The medical peer review system is a quasi-judicial one. It is modeled in some ways on the grand jury / petit jury system. After a complainant asks for an investigation, a review body is assembled for fact-finding. This fact-finding body, called an ad hoc committee, is appointed by the medical Chief of Staff and is composed of other physician staff members chosen at the Chief of Staff's discretion. This ad hoc committee then conducts an investigation in the manner it feels is appropriate. This may include a review of the literature or an outside expert. Thus, there is no standard for impartiality and specifically no standard for due process in the "peer-review 'process'." Physicians who are indicted (and sanctioned) have the right to request a hearing. At the hearing, counsel is allowed. A second independent panel of physicians is chosen as the petit jury, and a hearing officer is chosen. The accused physician has the option to demonstrate conflicts of interest and attempt to disqualify jurors based on reasonable suspicions of bias or conflicts of interest in a voir dire process. Although some medical staff bodies utilize the hospital attorney and accept hospital funds to try peer review cases, the California Medical Association discourages this practice. California has enacted legislation formally requiring the separation of the hospital and medical staff. Alleged cases Some physicians allege that sham peer review is often conducted in retaliation for whistleblowing, although one study in 2007 suggested that such events were rare. Khajavi v. 
Feather River Anesthesiology Medical Group Those who disagree with the AMA point to the case of Nosrat Khajavi. In 1996, Khajavi, an anesthesiologist in Yuba City, California, disagreed with a surgeon over the appropriateness of cataract surgery for a patient and refused to attend during the procedure. Khajavi was subsequently terminated from his anesthesia group. He sued for wrongful termination under California Business & Professions' Code Section 2053, and the suit was allowed by the California Court of Appeals. In 2000, the court held that Khajavi was not protected from termination on the basis of advocating for what he felt was medically appropriate care. The court did not rule on the merits of the dispute. Mileikowsky v. Tenet A doctor was allegedly subject to multiple hearings for the same charges, and his rights to an expedited hearing were allegedly denied while a suspension was in place. On May 15, 2001, the California Medical Association filed an amicus curiae brief to emphasize legal protections meant to prevent physicians being arbitrarily excluded from access to healthcare facilities based on mechanisms such as summary suspension without a speedy hearing. This case was decided on April 18, 2005. The court ruled that the hearing officer in the case could indeed terminate the physician's peer review hearing based on grounds that the physician refused to cooperate on procedural and other matters necessary for the good conduct of the proceedings. Thus, the physician lost his membership and privileges at the hospital. Ironically, the same physician was brought into a peer review hearing at another facility a short time later. The hearing officer in that case also terminated the proceedings, this time due to the physician's failure to turn over certain evidence for use in the hearing. 
The physician challenged the termination through the court system arguing, contrary to the Tenet appellate court ruling, that California's peer review statutes never intended the hearing officer in peer review hearings to have such powers of termination. The California Supreme Court reviewed the case and agreed in April 2009. The High Court ruled, among other things, that peer review hearing officers must defer the question of termination to the panel of physicians who sit in judgment of each peer review hearing. Roland Chalifoux Roland Chalifoux, member of an advocacy organisation called the Semmelweis Society, had his medical license revoked in Texas in 2004 after numerous incidents including the death of a patient. The Texas State Board of Medical Examiners stated that Chalifoux's practices "constitute such a deviation from the standard of care that revocation of his license is the only sanction that will adequately protect the public". Chalifoux subsequently secured permission to practice in West Virginia, and alleges that the Texas board's actions constitute sham peer review. Charles Williams, MD Six years after Charles Williams, MD, an anesthesiologist, was summarily suspended by University Medical Center of Southern Nevada, a federal jury in Las Vegas awarded Dr. Williams $8.8 million as compensation for the due process violations he experienced in his sham peer review. Before the trial, which began May 16, U.S. District Judge Philip Pro made a finding that Ellerton and UMC's medical staff had violated Williams' due process rights. That left only the question of damages for the jury. This case appears to be the highest jury verdict in the nation for sham peer review that has not been overturned. Richard Chudacoff, MD On May 28, 2008, without any notice or opportunity to be heard, the Medical Staff of UMC suspended Dr. Chudacoff's clinical privileges. As a result of this, UMC filed a report against Dr. 
Chudacoff with the National Practitioner Data Bank claiming that Dr. Chudacoff was a risk to patient safety and had inadequate skills. This led to the virtual destruction of Dr. Chudacoff's career. Dr. Chudacoff sued. U.S. District Court Judge Edward Reed opined that, in Nevada, a physician's hospital privileges are a constitutionally protected property right. The Ninth Circuit Court of Appeals then affirmed that Dr. Chudacoff's due process rights were violated by UMC. As well, the Medical Executive members lost their immunity under the HCQIA for failure to follow their bylaws. The case was settled out of court in favor of Dr. Chudacoff, under the cloak of confidentiality. Development of the Patient Safety Organization (PSO) The Patient Safety and Quality Improvement Act of 2005 (Public Law 109-41) allows for the creation of Patient Safety Organizations, quality of care committees that can act in parallel with peer review boards. PSOs were authorized to gather information to be analyzed by hospital administrators, nurses, and physicians as a tool for systems failure analysis. They may be used by any healthcare entity except insurance companies, but must be registered with the AHRQ wing of the US Department of Health and Human Services. In PSOs, root cause analysis and "near misses" are evaluated in an attempt to avert major errors. Participants in PSOs are immune from prosecution in civil, criminal, and administrative hearings. See also False accusations Subpoena duces tecum References Further reading Abuse Medical sociology Human resource management Deception Peer review Workplace harassment and bullying Criticism of academia
Sham peer review
Biology
https://en.wikipedia.org/wiki/Security%20Technical%20Implementation%20Guide
A Security Technical Implementation Guide or STIG is a configuration standard consisting of cybersecurity requirements for a specific product. The use of STIGs enables a methodology for securing protocols within networks, servers, computers, and logical designs to enhance overall security. These guides, when implemented, enhance security for software, hardware, and physical and logical architectures to further reduce vulnerabilities. Examples where STIGs would be of benefit are the configuration of a desktop computer or an enterprise server. Most operating systems are not inherently secure, which leaves them open to criminals such as identity thieves and computer hackers. A STIG describes how to minimize network-based attacks and prevent system access when the attacker is interfacing with the system, either physically at the machine or over a network. STIGs also describe maintenance processes such as software updates and vulnerability patching. Advanced STIGs might cover the design of a corporate network, covering configurations of routers, databases, firewalls, domain name servers and switches. See also CIA triad Information Assurance Security Content Automation Protocol References External links NIST Security Configuration Checklists Repository Security Technical Implementation Guides and Supporting Documents in the Public Area Online STIG search Configuration management Security compliance
Security Technical Implementation Guide
Engineering
https://en.wikipedia.org/wiki/Krein%E2%80%93Milman%20theorem
In the mathematical theory of functional analysis, the Krein–Milman theorem is a proposition about compact convex sets in locally convex topological vector spaces (TVSs). This theorem generalizes to infinite-dimensional spaces and to arbitrary compact convex sets the following basic observation: a convex (i.e. "filled") triangle, including its perimeter and the area "inside of it", is equal to the convex hull of its three vertices, where these vertices are exactly the extreme points of this shape. This observation also holds for any other convex polygon in the plane. Statement and definitions Preliminaries and definitions Throughout, X will be a real or complex vector space. For any elements x and y in a vector space, the set [x, y] := {tx + (1 − t)y : 0 ≤ t ≤ 1} is called the closed line segment or closed interval between x and y. The open line segment or open interval between x and y is (x, x) := ∅ when x = y, while it is (x, y) := {tx + (1 − t)y : 0 < t < 1} when x ≠ y; it satisfies (x, y) = [x, y] ∖ {x, y} and [x, y] = (x, y) ∪ {x, y}. The points x and y are called the endpoints of these intervals. An interval is said to be non-degenerate or proper if its endpoints are distinct. The intervals [x, x] = {x} and [x, y] always contain their endpoints, while (x, x) = ∅ and (x, y) never contain either of their endpoints. If x and y are points in the real line ℝ, then the above definition of [x, y] is the same as its usual definition as a closed interval. For any p, x, y ∈ X, the point p is said to (strictly) lie between x and y if p belongs to the open line segment (x, y). If K is a subset of X and p ∈ K, then p is called an extreme point of K if it does not lie between any two distinct points of K. That is, if there do not exist x, y ∈ K and 0 < t < 1 such that x ≠ y and p = tx + (1 − t)y. In this article, the set of all extreme points of K will be denoted by extreme(K). For example, the vertices of any convex polygon in the plane are the extreme points of that polygon. The extreme points of the closed unit disk in ℝ² form the unit circle. 
Every open interval and degenerate closed interval [x, x] in ℝ has no extreme points, while the extreme points of a non-degenerate closed interval [x, y] are x and y. A set S is called convex if for any two points x, y ∈ S, it contains the line segment [x, y]. The smallest convex set containing S is called the convex hull of S and it is denoted by co S. The closed convex hull of a set S, denoted by co̅ S, is the smallest closed and convex set containing S. It is also equal to the intersection of all closed convex subsets that contain S, and to the closure of the convex hull of S; that is, co̅ S = cl(co S), where the right hand side denotes the closure of co S while the left hand side is notation. For example, the convex hull of any set of three distinct points forms either a closed line segment (if they are collinear) or else a solid (that is, "filled") triangle, including its perimeter. And in the plane ℝ², the unit circle is not convex but the closed unit disk is convex; furthermore, this disk is equal to the convex hull of the circle. The separable Hilbert space ℓ² of square-summable sequences with the usual norm has a compact subset S whose convex hull co S is not closed and thus also not compact. However, like in all complete Hausdorff locally convex spaces, the closed convex hull co̅ S of this compact subset will be compact. But if a Hausdorff locally convex space is not complete, then it is in general not guaranteed that co̅ S will be compact whenever S is; an example can even be found in a (non-complete) pre-Hilbert vector subspace of ℓ². Every compact subset is totally bounded (also called "precompact"), and the closed convex hull of a totally bounded subset of a Hausdorff locally convex space is guaranteed to be totally bounded. Statement In the case where the compact set is also convex, the above theorem has as a corollary the first part of the next theorem, which is also often called the Krein–Milman theorem. 
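In its most commonly cited form, the theorem reads as follows:

```latex
\textbf{Krein--Milman theorem.}
Let $X$ be a Hausdorff locally convex topological vector space and let
$K \subseteq X$ be a compact convex subset. Then $K$ is the closed
convex hull of its extreme points:
\[
    K \;=\; \overline{\operatorname{co}}\bigl(\operatorname{extreme}(K)\bigr).
\]
```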
The convex hull of the extreme points of K forms a convex subset of K, so the main burden of the proof is to show that there are enough extreme points so that their convex hull covers all of K. For this reason, the following corollary to the above theorem is also often called the Krein–Milman theorem. To visualize this theorem and its conclusion, consider the particular case where K is a convex polygon. In this case, the corners of the polygon (which are its extreme points) are all that is needed to recover the polygon shape. The statement of the theorem is false if the polygon is not convex, as then there are many ways of drawing a polygon having given points as corners. The requirement that the convex set be compact can be weakened to give the following strengthened generalization of the theorem. The property above is sometimes called quasicompactness or convex compactness. Compactness implies convex compactness because a topological space is compact if and only if every family of closed subsets having the finite intersection property (FIP) has non-empty intersection (that is, its kernel is not empty). The definition of convex compactness is similar to this characterization of compact spaces in terms of the FIP, except that it only involves those closed subsets that are also convex (rather than all closed subsets). More general settings The assumption of local convexity for the ambient space is necessary, because Roberts (1977) constructed a counter-example for the non-locally convex space Lᵖ[0, 1] with 0 < p < 1. Linearity is also needed, because the statement fails for weakly compact convex sets in CAT(0) spaces. However, it has been proved that the Krein–Milman theorem does hold for compact CAT(0) spaces. Related results Under the previous assumptions on K, if T is a subset of K and the closed convex hull of T is all of K, then every extreme point of K belongs to the closure of T. This result is known as Milman's (partial) converse to the Krein–Milman theorem. 
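The polygon picture can be made concrete computationally: for a finite set of points in the plane, the extreme points of the convex hull are exactly the hull vertices. The sketch below uses a standard monotone-chain construction (the function and sample points are illustrative, not part of the article's sources) to recover the corners of a square even when interior points are added:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull for 2D points.

    Returns the hull vertices in counter-clockwise order; for a finite
    planar point set, these are exactly the extreme points of its
    convex hull (collinear boundary points are discarded).
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive for a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# A square with points sprinkled inside: only the four corners survive.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
interior = [(0.5, 0.5), (0.25, 0.75), (0.6, 0.1)]
print(sorted(convex_hull(square + interior)))
```

The interior points vanish from the output: the corners alone recover the polygon, which is the finite-dimensional content of the theorem.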
The Choquet–Bishop–de Leeuw theorem states that every point in K is the barycenter of a probability measure supported on the set of extreme points of K. Relation to the axiom of choice Under the Zermelo–Fraenkel set theory (ZF) axiomatic framework, the axiom of choice (AC) suffices to prove all versions of the Krein–Milman theorem given above, including statement KM and its generalization SKM. The axiom of choice also implies, but is not equivalent to, the Boolean prime ideal theorem (BPI), which is equivalent to the Banach–Alaoglu theorem. Conversely, the Krein–Milman theorem KM together with the Boolean prime ideal theorem (BPI) imply the axiom of choice. In summary, AC holds if and only if both KM and BPI hold. It follows that under ZF, the axiom of choice is equivalent to the following statement: The closed unit ball of the continuous dual space of any real normed space has an extreme point. Furthermore, SKM together with the Hahn–Banach theorem for real vector spaces (HB) are also equivalent to the axiom of choice. It is known that BPI implies HB, but that it is not equivalent to it (said differently, BPI is strictly stronger than HB). History The original statement proved by Krein and Milman (1940) was somewhat less general than the form stated here. Earlier, Minkowski proved that if X is 3-dimensional then K equals the convex hull of the set of its extreme points. This assertion was expanded to the case of any finite dimension by Steinitz. The Krein–Milman theorem generalizes this to arbitrary locally convex X; however, to generalize from finite to infinite dimensional spaces, it is necessary to use the closure. See also Citations Bibliography N. K. Nikol'skij (Ed.). Functional Analysis I. Springer-Verlag, 1992. H. L. Royden, Real Analysis. Prentice-Hall, Englewood Cliffs, New Jersey, 1988. Convex hulls Oriented matroids Theorems involving convexity Theorems in convex geometry Theorems in discrete geometry Theorems in functional analysis Topological vector spaces
Krein–Milman theorem
Mathematics
https://en.wikipedia.org/wiki/Llin%C3%A1s%27s%20law
Llinás's law, or the law of no interchangeability of neurons, is a statement in neuroscience made by Rodolfo Llinás in 1989, during his Luigi Galvani Award Lecture at the Fidia Research Foundation Neuroscience Award Lectures. A neuron of a given kind (e.g. a thalamic cell) cannot be functionally replaced by one of another type (e.g. an inferior olivary cell) even if their synaptic connectivity and the type of neurotransmitter outputs are identical. (The difference is that the intrinsic electrophysiological properties of thalamic cells are extraordinarily different from those of inferior olivary neurons.) The statement of this law is a consequence of an article written by Rodolfo Llinás himself in 1988 and published in Science with the title "The Intrinsic Electrophysiological Properties of Mammalian Neurons: Insights into Central Nervous System Function", which is considered a watershed due to its more than 2,000 citations in the scientific literature, marking a major shift in viewpoint in neuroscience regarding neuronal function. Until then, the prevailing belief in neuroscience was that the connections and the neurotransmitters released by neurons were enough to determine their function. Research by Llinás and colleagues during the 1980s with vertebrates revealed that this previously held dogma was wrong. References Biology laws Colombian inventions Neuroscience
Llinás's law
Biology
https://en.wikipedia.org/wiki/Out%20of%20Phase%20Stereo
Out of Phase Stereo (OOPS) is an audio technique which manipulates the phase of a stereo audio track to isolate or remove certain components of the stereo mix. It works on the principle of phase cancellation, in which two identical but inverted waveforms summed together cancel each other out. Process When a sine wave is mixed with one of identical frequency but opposite amplitude (i.e. of inverse polarity), the combined result is silence. A two-channel stereo recording contains a number of waveforms; sounds that are panned to the extreme left or right will contain the greatest difference in amplitude between the two channels, while those towards the centre will contain the smallest. A mix of the left channel with the polar inverse of the right channel will reduce centre-panned sounds towards silence, while preserving those towards the extremities. In practice, the OOPS technique can be performed by inverting the polarity of one speaker or signal lead. It can also be performed using digital audio software by inverting one of the channels of a stereo audio waveform, and then summing both channels together to create a single mono channel. Applications in music This technique has previously been used to eliminate vocals in a stereo track (as vocals tend to be panned centre) to create crude karaoke tracks, or to generate surround channels from a stereo source, such as in Dolby Pro Logic. It has also been used in the recording process to include tracks that were only audible once an OOPS technique was applied. This feature can be observed in several of the Beatles' stereo albums. Australian band Cinema Prague recorded a single track Meldatype that contained two songs played simultaneously, one of which was only audible after an OOPS technique was applied. It consisted of two mono tracks: a loud and distorted electric guitar playing chords repetitively, as well as a quiet vocal track. 
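In digital audio software, the procedure described under Process amounts to a per-sample subtraction of the two channels. The sketch below uses toy signal values (the function name and samples are illustrative assumptions) to show a centre-panned "vocal" cancelling out of a stereo mix while a hard-left "guitar" survives:

```python
def oops_mono(left, right):
    """Sum the left channel with the polarity-inverted right channel.

    Centre-panned material (identical in both channels) cancels out;
    material unique to one side survives. A toy sketch of OOPS.
    """
    return [l - r for l, r in zip(left, right)]

# Toy 'mix': a centre-panned vocal plus a guitar panned hard left.
vocal = [0.5, -0.5, 0.25, -0.25]      # identical in both channels
guitar = [0.1, 0.2, 0.3, 0.4]         # left channel only
left = [v + g for v, g in zip(vocal, guitar)]
right = vocal[:]                      # the right channel carries only the vocal

print(oops_mono(left, right))         # the guitar remains (up to float rounding)
```

Inverting which channel is subtracted only flips the polarity of the surviving material; the cancellation of centre-panned content is the same either way.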
The guitar had one of the channels inverted, while the vocal track was identical in both channels. During normal playback, the guitar would be heard throughout the entire track. When the channels were summed to mono, however, the regular and inverted guitar tracks would cancel out, revealing the vocal track to the listener. References Interference Stereophonic techniques Wave mechanics
Out of Phase Stereo
Physics
453
67,944,695
https://en.wikipedia.org/wiki/Biaxial%20tensile%20testing
In materials science and solid mechanics, biaxial tensile testing is a versatile technique to address the mechanical characterization of planar materials. It is a generalized form of tensile testing in which the material sample is simultaneously stressed along two perpendicular axes. Typical materials tested in biaxial configuration include metal sheets, silicone elastomers, composites, thin films, textiles and biological soft tissues. Purposes of biaxial tensile testing A biaxial tensile test generally allows the assessment of the mechanical properties and a complete characterization of incompressible isotropic materials, which can be obtained with fewer specimens than uniaxial tensile tests require. Biaxial tensile testing is particularly suitable for understanding the mechanical properties of biomaterials, due to their directionally oriented microstructures. If the testing aims at the material characterization of the post-elastic behaviour, the uniaxial results become inadequate, and a biaxial test is required in order to examine the plastic behaviour. In addition to this, using uniaxial test results to predict rupture under biaxial stress states seems to be inadequate. Even if a biaxial tensile test is performed in a planar configuration, it may be equivalent to the stress state applied on three-dimensional geometries, such as cylinders with an inner pressure and an axial stretching. The relationship between the inner pressure and the circumferential stress is given by the Mariotte formula: σ = PD / (2t), where σ is the circumferential stress, P the inner pressure, D the inner diameter and t the wall thickness of the tube. Equipment Typically, a biaxial tensile machine is equipped with motor stages, two load cells and a gripping system. Motor stages Through the movement of the motor stages a certain displacement is applied on the material sample. 
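The Mariotte formula given above is straightforward to apply in code. The sketch below (function name and the tube dimensions are illustrative assumptions) computes the circumferential stress of a thin-walled pressurized tube:

```python
def circumferential_stress(p, d, t):
    """Mariotte (thin-walled hoop stress) formula: sigma = P * D / (2 * t).

    p : inner pressure, d : inner diameter, t : wall thickness.
    Consistent units are assumed (e.g. Pa and m, giving Pa).
    """
    return p * d / (2.0 * t)

# Hypothetical tube: 0.5 MPa inner pressure, 20 mm bore, 1 mm wall.
print(circumferential_stress(0.5e6, 0.020, 0.001))  # ~5.0e6 Pa
```

Note that the circumferential stress is an order of magnitude larger than the applied pressure here, which is why thin-walled tubes under modest pressure can reproduce substantial membrane stress states.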
If a single motor stage is used, the displacement is the same in the two directions and only the equi-biaxial state can be applied. On the other hand, by using four independent motor stages, any load condition is allowed; this feature makes the biaxial tensile test superior to other tests that may apply a biaxial tensile state, such as the hydraulic bulge, semispherical bulge, stack compression or flat punch tests. Using four independent motor stages also makes it possible to keep the sample centred for the whole duration of the test; this feature is particularly useful for coupling an image analysis with the mechanical test. The most common way to obtain the displacement and strain fields is Digital Image Correlation (DIC), a contactless technique that does not affect the mechanical results. Load cells Two load cells are placed along the two orthogonal load directions to measure the normal reaction forces exerted by the specimen. The dimensions of the sample have to be in accordance with the resolution and the full scale of the load cells. A biaxial tensile test can be performed either in a load-controlled or in a displacement-controlled condition, in accordance with the settings of the biaxial tensile machine. In the former configuration a constant loading rate is applied and the displacements are measured, whereas in the latter a constant displacement rate is applied and the forces are measured. For elastic materials the load history is not relevant, whereas for viscoelastic materials it is not negligible; furthermore, for this class of materials the loading rate also plays a role. Gripping system The gripping system transfers the load from the motor stages to the specimen. Although the use of biaxial tensile testing is growing, there is still a lack of robust standardized protocols concerning the gripping system.
Since it plays a fundamental role in the application and distribution of the load, the gripping system has to be carefully designed in order to satisfy the Saint-Venant principle. Some different gripping systems are described below. Clamps Clamps are the most commonly used gripping system for biaxial tensile tests, since they provide a fairly uniformly distributed load at the junction with the sample. To increase the uniformity of stress in the region of the sample close to the clamps, notches with circular tips are cut into the arms of the sample. The main problem with clamps is the low friction at the interface with the sample; indeed, if the friction between the inner surface of the clamps and the sample is too low, relative motion between the two can occur and alter the results of the test. Sutures Small holes are made in the surface of the sample to connect it to the motor stages through wires with a stiffness much higher than that of the sample. Typically, sutures are used with square samples. In contrast to clamps, sutures allow the rotation of the sample around the axis perpendicular to its plane, so they do not transmit shear stresses to the sample. The load transmission is very local, so the load distribution is not uniform. A template is needed to apply the sutures in the same position on different samples, to ensure repeatability among tests. Rakes This system is similar to the suture gripping system, but stiffer. Rakes transfer a limited amount of shear stress, so they are less useful than sutures in the presence of large shear strains. Although the load is transmitted in a discontinuous way, the load distribution is more uniform than with sutures. Specimen shape The success of a biaxial tensile test is strictly related to the shape of the specimen. The two most used geometries are the square and cruciform shapes.
When dealing with fibrous materials or fibre-reinforced composites, the fibres should be aligned with the load directions for both classes of specimens, in order to minimize the shear stresses and to avoid sample rotation. Square samples Square, or more generally rectangular, specimens are easy to obtain, and their dimensions and aspect ratio depend on the material availability. Large specimens are needed to make the effects of the gripping system negligible in the core of the sample; however, this solution is very material consuming, so small specimens are often required. Since the gripping system is then very close to the core of the specimen, the strain distribution is not homogeneous. Cruciform samples A proper cruciform sample should fulfil the following requirements: maximization of the biaxially loaded area in the centre of the sample, where the strain field is uniform; minimization of the shear strain in the centre of the sample; minimization of regions of stress concentration, even outside the area of interest; failure in the biaxially loaded area; repeatable results. It is important to note that in this kind of sample the stretch is larger in the outer region than in the centre, where the strain is uniform. Method Uniaxial stress tests are typically used to measure the mechanical properties of materials, yet many materials behave differently under different loading stress states; biaxial tensile tests have therefore become a valuable complementary measurement. The Small Punch Test (SPT) and bulge testing are two methods that apply a biaxial tensile state. Small Punch Test (SPT) The Small Punch Test (SPT) was first developed in the 1980s as a minimally invasive in-situ technique to investigate the local degradation and embrittlement of nuclear materials. The SPT is a miniaturized test method that requires only a small-volume specimen.
Because small volumes do not severely affect or damage an in-service component, the SPT is a good method to determine the mechanical properties of unirradiated and irradiated materials or to analyze small regions of structural components. In the test, the disc-shaped specimen is clamped between two dies. The punch is then pushed with a constant displacement rate through the specimen. A flat punch, or a concave tip pushing a ball, is typically used. After the test, characteristic parameters such as force-displacement curves are used to estimate the yield strength and ultimate tensile strength. By considering curves obtained at various temperatures from SPT tensile/fracture data, the ductile-to-brittle transition temperature (DBTT) can be calculated. One thing to note is that the specimen used in the SPT should be very flat, to reduce the stress error caused by an undefined contact situation. Hydraulic Bulge Test (HBT) The Hydraulic Bulge Test (HBT) is a method of biaxial tensile testing. It is used to determine mechanical properties such as Young's modulus, yield strength, ultimate tensile strength, and strain-hardening properties of sheet materials like thin films. The HBT can better describe the plastic properties of a sheet at large strains, since the strains in press forming are normally larger than the uniform strain. However, the geometries of formed parts are not symmetric; therefore, the true stress and strain measured by the HBT will be higher than those measured by a tensile test. In the HBT, rupture discs and high-pressure hydraulic oil are used to deform the specimen, which also avoids influencing factors such as the friction present in the small punch test. There are, however, constraints on the test conditions: the temperature is limited by the solidification and vaporization of the hydraulic oil. High temperatures lead to loading failure, while low temperatures result in failure of the seals, and the leaking vapor may be dangerous.
In the HBT, a circular sample is normally stripped from the substrate on which it has been prepared and clamped around its periphery over a hole at the end of a cylinder. It experiences pressure on one side from hydraulic oil, then bulges and expands into a cavity with increasing pressure. The flow stress is calculated from the dome height of the bulging blank and the pressure; strain is measured by Digital Image Correlation (DIC). With the specimen thickness and clamp size taken into account, the true stress and strain can be calculated. Other liquids may also be used as the hydraulic fluid in the HBT. Xiang et al. (2005) developed an HBT for sub-micron thin films by using standard photolithographic microfabrication techniques to etch away a small channel behind the film of interest, then pressurizing the channel with water to bulge the thin films. The validity of this method was confirmed using finite element analysis (FEA). Gas Bulge Test (GBT) Gas bulge tests (GBT) operate similarly to the HBT. Instead of a hydraulic oil, high-pressure gas is used to back-pressure a thin plate specimen. Since gas has a much lower density than liquid, the maximum safe pressure output from the GBT is considerably lower than that of hydraulic systems. Therefore, elevated-temperature GBT is often used to increase the ductility of the specimen, enabling plastic deformation at lower pressures. Unlike the HBT, elevated temperatures are possible for the GBT: the operating temperatures of biaxial bulge testing are limited by phase transitions of the pressurized fluid, and gases therefore have an extremely wide range of operating temperatures. The GBT is suitable for studying fatigue, low- and high-temperature mechanical properties (given sufficient ductility at low temperatures), and thermal cycling. Additionally, holding pressure at a high temperature allows for testing time-dependent mechanical properties such as creep.
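The flow-stress calculation described for the bulge tests above can be sketched numerically, assuming the bulged blank is approximated as a spherical cap and the equi-biaxial membrane relation σ = pR/(2t) holds at the apex; the function names and numbers are illustrative, not from the source:

```python
def dome_radius(a, h):
    """Radius of curvature of a spherical-cap bulge, from die radius a
    and dome height h: R = (a^2 + h^2) / (2h)."""
    return (a**2 + h**2) / (2.0 * h)

def bulge_stress(p, a, h, t):
    """Equi-biaxial membrane stress at the dome apex: sigma = p * R / (2 * t)."""
    return p * dome_radius(a, h) / (2.0 * t)

# Illustrative values: 5 MPa pressure, 25 mm die radius, 10 mm dome height, 1 mm thickness.
print(bulge_stress(5e6, 0.025, 0.010, 0.001))  # about 90.6 MPa
```

In practice the current thickness t at the apex thins as the dome grows, which is why the article notes that thickness must be accounted for (e.g. via DIC or an analytical thinning model) before true stress is reported.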
High-temperature DIC may be used to measure biaxial stress and strain during the GBT. Alternatively, a laser interferometer may be used to find the displacement near the apex of the dome, and many models are available for calculating both the radius of curvature and the radial strain of bulged specimens. True stress is best approximated by the Young-Laplace equation. Results are comparable to the biaxial testing standard ISO 16808. Clamping of elevated-temperature gas bulge specimens requires clamping materials with an operating temperature in excess of the test temperature. This is possible using high-temperature mechanical fasteners, or by directly bonding materials via traditional welding, friction stir welding (FSW), or diffusion bonding. GBT example studies Frary et al. (2002) used the GBT to demonstrate superplastic deformation of commercially pure (CP) titanium and Ti64 by thermally cycling through the material's α/β transformation temperature. Huang et al. (2019) measured coefficients of thermal expansion through the GBT, and thermally cycled NiTi shape memory alloys to measure stress evolution. The ability to perform the GBT in parallel on an array of specimens enables high-throughput screening of mechanical properties and facilitates rapid materials design. Ding et al. (2014) conducted parallel measurements of viscosity across a large composition space of bulk metallic glass. Instead of using a direct pressure hookup, tungstic acid was placed into the cavities behind the specimen plate and decomposed to produce gas upon heating to ~100 °C. Analytical solution A biaxial tensile state can be derived starting from the most general constitutive law for isotropic materials in the large-strain regime: S = 2[(W₁ + I₁W₂)I − W₂C + I₃W₃C⁻¹], where S is the second Piola-Kirchhoff stress tensor, I the identity matrix, C the right Cauchy-Green tensor, and W₁, W₂ and W₃ the derivatives of the strain energy function per unit of volume in the undeformed configuration with respect to the three invariants I₁, I₂ and I₃ of C.
For an incompressible material, the previous equation becomes: S = −pC⁻¹ + 2[(W₁ + I₁W₂)I − W₂C], where p is of hydrostatic nature and plays the role of a Lagrange multiplier. It is worth noting that p is not the hydrostatic pressure and must be determined independently of the constitutive model of the material. A well-posed problem requires specifying the out-of-plane stress S₃₃; for a biaxial state of a membrane S₃₃ = 0, whereby the p term can be obtained as p = 2C₃₃[(W₁ + I₁W₂) − W₂C₃₃], where C₃₃ is the third component of the diagonal of C. According to the definition, the three non-zero components of the deformation gradient tensor F are F₁₁ = λ₁, F₂₂ = λ₂ and F₃₃ = 1/(λ₁λ₂). Consequently, the components of C can be calculated with the formula C = FᵀF, and they are C₁₁ = λ₁², C₂₂ = λ₂² and C₃₃ = 1/(λ₁²λ₂²). According to this stress state, the two non-zero components of the second Piola-Kirchhoff stress tensor are: S₁₁ = −p/λ₁² + 2[(W₁ + I₁W₂) − W₂λ₁²] and S₂₂ = −p/λ₂² + 2[(W₁ + I₁W₂) − W₂λ₂²]. By using the relationship σ = FSFᵀ (with J = 1) between the second Piola-Kirchhoff and the Cauchy stress tensor, σ₁₁ and σ₂₂ can be calculated: σ₁₁ = 2(λ₁² − 1/(λ₁²λ₂²))(W₁ + λ₂²W₂) and σ₂₂ = 2(λ₂² − 1/(λ₁²λ₂²))(W₁ + λ₁²W₂). Equi-biaxial configuration The simplest biaxial configuration is the equi-biaxial configuration, in which the two directions of load are subjected to the same stretch at the same rate. In an incompressible isotropic material under this stress state, the non-zero components of the deformation gradient tensor F are F₁₁ = F₂₂ = λ and F₃₃ = λ⁻². According to the definition of C, its non-zero components are C₁₁ = C₂₂ = λ² and C₃₃ = λ⁻⁴. The Cauchy stress in the two directions is: σ₁₁ = σ₂₂ = 2(λ² − λ⁻⁴)(W₁ + λ²W₂). Strip biaxial configuration A strip biaxial test is a configuration in which the stretch in one direction is constrained, namely a zero displacement is applied in that direction. The components of the C tensor become C₁₁ = λ², C₂₂ = 1 and C₃₃ = λ⁻².
It is worth noting that even though there is no displacement along direction 2, the stress in that direction is different from zero and depends on the stretch applied in the orthogonal direction, as stated in the following equations. The Cauchy stress in the two directions is: σ₁₁ = 2(λ² − λ⁻²)(W₁ + W₂) and σ₂₂ = 2(1 − λ⁻²)(W₁ + λ²W₂). The strip biaxial test has been used in different applications, such as the prediction of the behaviour of orthotropic materials under a uniaxial tensile stress, delamination problems, and failure analysis. FEM analysis Finite element methods (FEM) are sometimes used to obtain the material parameters. The procedure consists of reproducing the experimental test and obtaining the same stress-stretch behaviour; to do so, an iterative procedure is needed to calibrate the constitutive parameters. The cracking behavior of a cruciform specimen under mixed-mode loading can also be determined using FEA. The Franc2d program is used to calculate the stress intensity factor (SIF) for such specimens using the linear elastic fracture mechanics approach. This kind of approach has been demonstrated to be effective in obtaining the stress-stretch relationship for a wide class of hyperelastic material models (Ogden, Neo-Hooke, Yeoh, and Mooney-Rivlin). Standards ISO 16842:2014 Metallic materials – sheet and strip – biaxial tensile testing method using a cruciform test piece. ISO 16808:2014 Metallic materials – sheet and strip – determination of biaxial stress-strain curve by means of bulge test with optical measuring systems. ASTM D5617 – 04(2015) – Standard Test Method for Multi-Axial Tension Test for Geosynthetics. DIN EN 17117 – a German standard describing methods for determining the tensile stiffness properties of biaxially oriented coated fabrics under biaxial stress states. See also Tensile testing Mechanical properties References Materials testing Continuum mechanics Solid mechanics
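The equi-biaxial and strip-biaxial Cauchy stress expressions derived above can be evaluated once a strain energy function is chosen. The sketch below assumes a Mooney-Rivlin material, for which W₁ and W₂ reduce to the constants C10 and C01; the numerical values are illustrative, not from the source:

```python
def equibiaxial_stress(lam, W1, W2):
    """Equi-biaxial Cauchy stress for an incompressible isotropic membrane
    (plane stress): sigma = 2*(lam**2 - lam**-4)*(W1 + lam**2 * W2)."""
    return 2.0 * (lam**2 - lam**-4) * (W1 + lam**2 * W2)

def strip_biaxial_stresses(lam, W1, W2):
    """Strip-biaxial Cauchy stresses (direction 2 held at zero displacement):
    sigma_11 = 2*(lam**2 - lam**-2)*(W1 + W2)
    sigma_22 = 2*(1 - lam**-2)*(W1 + lam**2 * W2)."""
    s11 = 2.0 * (lam**2 - lam**-2) * (W1 + W2)
    s22 = 2.0 * (1.0 - lam**-2) * (W1 + lam**2 * W2)
    return s11, s22

# Illustrative Mooney-Rivlin constants (MPa): W1 = C10 = 0.2, W2 = C01 = 0.05.
print(equibiaxial_stress(1.5, 0.2, 0.05))
print(strip_biaxial_stresses(1.5, 0.2, 0.05))
```

Note that in the strip configuration the constrained direction carries a non-zero stress σ₂₂, exactly as stated in the text, even though its stretch is held at 1.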
Biaxial tensile testing
Physics,Materials_science,Engineering
3,400
20,685,304
https://en.wikipedia.org/wiki/Enolase%20deficiency
Enolase deficiency is a rare genetic disorder of glucose metabolism. Partial deficiencies have been observed in several Caucasian families. The deficiency is transmitted through an autosomal dominant inheritance pattern. The gene for enolase 1 has been localized to chromosome 1 in humans. Enolase deficiency, like other glycolytic enzyme deficiencies, usually manifests in red blood cells, as they rely entirely on anaerobic glycolysis. Enolase deficiency is associated with a spherocytic phenotype and can result in hemolytic anemia, which is responsible for the clinical signs of the disorder. Symptoms and signs Symptoms of enolase deficiency include exercise-induced myalgia and generalized muscle weakness and fatigability, both with onset in adulthood. Symptoms also include muscle pain without cramps and a decreased ability to sustain long-term exercise. Causes Enolase deficiency has a genetic cause. The individual in the first known case of this deficiency was heterozygous for the gene for β-enolase and carried two missense mutations, one inherited from each parent. His muscle cells synthesized two forms of β-enolase, each carrying a different mutation. These mutations changed glycine at position 374 to glutamate (G374E) and glycine at position 156 to aspartate (G156D). Pathophysiology The enolase enzyme catalyzes the conversion of 2-phosphoglycerate to phosphoenolpyruvate; this is the ninth step in glycolysis. Enolase is a dimeric protein formed from combinations of three subunits, α, β, and γ, encoded by different genes. The αα homodimer provides all enolase activity in the early stages of embryo development and in some adult tissues. In tissues that need large amounts of energy, other forms of enolase are present: αγ and γγ in the brain, and αβ and ββ in striated muscle. At all stages of development, β-enolase expression is found only in striated muscle. In adult humans, the ββ homodimer accounts for more than 90% of total enolase activity in muscle.
Two mutations in the ENO3 gene, the gene encoding β-enolase, are responsible for the deficiency; both mutations changed highly conserved amino acid residues. One of the changes was of a glycine residue at position 374 to glutamate; this amino acid change is located in close proximity to a His residue of human enolase, which is an important part of the β-enolase catalytic site. The glycine at position 156 changed to aspartate, which may have altered the secondary structure of the enzyme. These mutations may impair activity by significantly reducing the steady-state level of the protein, rather than by producing a non-functional mutant protein. Mutations of the β-enolase dimer complexes might result in incorrect folding and increased susceptibility to protein degradation, thus causing the deficiency. Similar mutations in yeast showed destabilization of the protein and decreased substrate affinity. Destabilization of the protein results in partial dissociation; some researchers propose that in muscle cells this dissociation may be perceived as an abnormality leading to degradation of the mutated enolase. Diagnosis Treatment References External links Inborn errors of carbohydrate metabolism
Enolase deficiency
Chemistry
719
26,481,124
https://en.wikipedia.org/wiki/Vabicaserin
Vabicaserin (codenamed SCA-136) was a novel antipsychotic and anorectic under development by Wyeth. As of 2010 it is no longer in clinical trials for the treatment of psychosis. It was also under investigation as an antidepressant but this indication appears to have been dropped as well. Vabicaserin acts as a selective 5-HT2C receptor full agonist (Ki = 3 nM; EC50 = 8 nM; IA = 100% (relative to 5-HT)) and 5-HT2B receptor antagonist (IC50 = 29 nM). It is also a very weak antagonist at the 5-HT2A receptor (IC50 = 1,650 nM), though this action is not clinically significant. By activating 5-HT2C receptors, vabicaserin inhibits dopamine release in the mesolimbic pathway, likely underlying its efficacy in alleviating positive symptoms of schizophrenia, and increases acetylcholine and glutamate levels in the prefrontal cortex, suggesting benefits against cognitive symptoms as well. See also Bexicaserin BMB-101 Lorcaserin WAY-163909 References 5-HT2B agonists 5-HT2C agonists Abandoned drugs Antipsychotics Cyclopentanes Pyridobenzodiazepines Quinolines
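Affinity and potency values like the Ki and IC50 figures above are often compared on a negative log (molar) scale, which makes the roughly three-orders-of-magnitude selectivity of vabicaserin for 5-HT2C over 5-HT2A easy to see. A small sketch of the conversion (the function name is illustrative):

```python
import math

def neg_log_molar(conc_nM):
    """Convert an affinity or potency given in nM to its negative log10 molar
    value (pKi for a Ki, pIC50 for an IC50)."""
    return -math.log10(conc_nM * 1e-9)

print(round(neg_log_molar(3), 2))     # 5-HT2C Ki of 3 nM    -> pKi  ~ 8.52
print(round(neg_log_molar(1650), 2))  # 5-HT2A IC50 of 1650 nM -> pIC50 ~ 5.78
```

The nearly 3-unit gap on the log scale corresponds to the ~550-fold concentration difference between the two receptor interactions.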
Vabicaserin
Chemistry
298
64,485,992
https://en.wikipedia.org/wiki/Vladimir%20Varyukhin
Vladimir Alekseevich Varyukhin (December 14, 1921 — July 8, 2007) was a Soviet and Ukrainian scientist, Professor, Doctor of Technical Sciences, Honored Scientist of the Ukrainian SSR, Major-General, founder of the theory of multichannel analysis, and creator of the scientific school on digital antenna arrays (DAAs). Scientific results In 1962, he founded the well-known scientific school on multichannel analysis and digital antenna arrays. The theory of multichannel analysis that he developed considers methods for determining the angular coordinates of sources as functions of the angular distances between the sources and the phase and energy relations between signals, and gives the basis for the functional schemes of units realizing the theoretical conclusions. The determination of the parameters of sources is carried out by the direct solution of systems of high-order transcendental equations describing the response function of a multichannel analyzer. In this case, super-Rayleigh resolution of the signal sources is ensured. The difficulties arising in the solution of the transcendental systems of high-order equations were overcome by V. A. Varyukhin by means of the “separation” of the unknowns, in which the determination of angular coordinates is reduced to the solution of algebraic equations, and the complex amplitudes are found by the solution of linear systems of equations of the N-th order. The Interdepartmental scientific-technical meeting organized in 1977 by the Scientific Council of the USSR Academy of Sciences on the problem “Statistical radiophysics” (Chairman — Academician Yuri Kobzarev) officially dated the start of the studies performed under the guidance of V. A. Varyukhin in the field of digital antenna arrays to 1962 and recognized the priority of Varyukhin's scientific school in the development and practical realization of the theory of multichannel analysis. In the subsequent years, V. A.
Varyukhin supervised a number of theoretical works realized in several experimental radar stations with DAAs, which were successfully tested on testing sites. Life data Participant in World War II 27.09.1939 — 08.03.1942 — radio operator of the 2nd Red-Banner Army of the Far-East front; 08.03.1942 — 05.08.1942 — commander of a radio station of the 927th separate signal battalion of the 96th infantry division; 05.08.1942 — 11.01.1943 — commander of a radio platoon, Stalingrad (Don) front; 11.01.1943 — 28.04.1943 — wounded, medical treatment at evacuation hospital No. 3755; 28.04.1943 — January 1944 — commander of a radio platoon, Ural military district; January 1944 — 12.04.1944 — commander of a radio platoon, one of the Ukrainian fronts; 12.04.1944 — 10.07.1945 — commander of a radio platoon (Soviet-American Operation Frantic); After World War II 10.07.1945 — 23.05.1951 — student at the S. M. Budennyi Military Electrotechnical Academy of Communication (Leningrad); 23.05.1951 — 18.11.1961 — lecturer at the Military Command Academy of Communication and then at the S. M. Budennyi Military Electrotechnical Academy of Communication (Leningrad); 27.11.1956 — defended the dissertation for a Candidate degree (Techn. Sci.) at the Council of the S. M. Budennyi Military Red-Banner Engineering Academy of Communication. From 18.11.1961 he served at the Kiev Higher Air Defense Engineering School and then was the head of a chair at the A. M. Vasilevskii Military Academy of Air Defense of the Land Forces until 1981. A significant stage in the recognition of Varyukhin's scientific results was the defense of his dissertation for a Doctoral degree (Techn. Sci.) in 1968. The distinctive feature of the theoretical direction developed by him is the maximal automation of the process of estimating the coordinates and parameters of signals under conditions of their superresolution. Professor — since 1972; Honored Scientist of the UkrSSR — 1979.
Since 1996 — work at the Academy of Armed Forces of Ukraine (Kyiv). Selected publications V. A. Varyuhin, S. A. Kas'yanyuk, “On a certain method for solving nonlinear systems of a special type”, Zh. Vychisl. Mat. Mat. Fiz., 6:2 (1966), 347–352; U.S.S.R. Comput. Math. Math. Phys., 6:2 (1966), 214–221 V. A. Varyuhin, S. A. Kas'yanyuk, V. G. Finogenova, “A problem of constrained extremum for a class of functions representable by a Stieltyes integral”, Izv. Vyssh. Uchebn. Zaved. Matematika, 1966, 6 (55), 40–49. V. A. Varyuhin, S. A. Kas'yanyuk, “The effects of the fluctuations of the terms of a positive sequence on its canonical principal representations”, Zh. Vychisl. Mat. Mat. Fiz., 8:1 (1968), 169–173; U.S.S.R. Comput. Math. Math. Phys., 8:1 (1968), 230–236 V. A. Varyuhin, S. A. Kas'yanyuk, “Iteration methods for sharpening the roots of equations”, Zh. Vychisl. Mat. Mat. Fiz., 9:3 (1969), 684–687; U.S.S.R. Comput. Math. Math. Phys., 9:3 (1969), 247–252 V. A. Varyuhin, S. A. Kas'yanyuk, “The iteration methods of the solution of nonlinear systems”, Zh. Vychisl. Mat. Mat. Fiz., 10:6 (1970), 1533–1536; U.S.S.R. Comput. Math. Math. Phys., 10:6 (1970), 234–239 V. A. Varyuhin, S. A. Kas'yanyuk, “A class of iterative procedures for the solution of systems of nonlinear equations”, Zh. Vychisl. Mat. Mat. Fiz., 17:5 (1977), 1123–1131; U.S.S.R. Comput. Math. Math. Phys., 17:5 (1977), 17–23 V.A. Varyukhin, V.I. Pokrovskii, and V. F. Sakhno, “On the exactness of measurement of angular coordinates with antenna arrays,” Radiotekhn. Élektr., 1982.– Vol. 27, No. 11. – Pp. 2258 - 2260 V. A. Varyuhin, V. I. Pokrovskii, V. F. Sakhno, “Modified likelihood function in the problem of the source angular coordinate determination using an antenna array”, Dokl. Akad. Nauk SSSR, No.270:5 (1983), 1092–1094 V.A. Varyukhin, V.I. Pokrovskii, and V.F. Sakhno, “On the exactness of measurements of angular coordinates of several sources with the help of an antenna array,” Radiotekhn. 
Élektr., 1984.– Vol. 29, No. 4. – Pp. 660 – 665. R.S. Sudakov and V.A. Varyukhin, “Linear combinations of inverse matrices and the method of least squares,” in: Stochastic Models of Systems [in Russian], AS UkrSSR, Military Air Defense Academy, Kiev, 1986. V.A. Varyukhin, V.I. Pokrovskii, and V.F. Sakhno, “Quasisolutions of overdetermined incompatible systems,” in: Stochastic Models of Systems [in Russian], AS UkrSSR, Military Air Defense Academy, Kiev, 1986. V.A. Varyukhin, V.I. Pokrovskii, and V.F. Sakhno, “A modified likelihood function in the problems of multichannel analysis,” in: Stochastic Models of Systems [in Russian], AS UkrSSR, Military Air Defense Academy, Kiev, 1986 V.A. Varyukhin, Fundamental Theory of Multichannel Analysis (VA PVO SV, Kyiv, 1993) [in Russian]. Selected Awards Order of the Red Star Order of the Red Star Order of the Patriotic War, 1st class Order of the Patriotic War, 2nd class Medal "For Battle Merit" Medal "For the Victory over Germany in the Great Patriotic War 1941–1945" Order "For Service to the Homeland in the Armed Forces of the USSR", 3rd class Gallery See also Digital antenna array References External links Varyuhin, V A. People on the Math-Net.Ru Varyukhin V.A. Mendeley Profile Varyukhin V.A. in the Encyclopedia of Modern Ukraine (in Ukrainian) Archive TsAMO about Varyukhin V.A. 1921 births 2007 deaths People from Vinnytsia Oblast Soviet military personnel of World War II from Ukraine Soviet scientists Soviet systems scientists Systems engineers Soviet mathematicians Soviet computer scientists Soviet inventors Soviet Army officers Soviet colonels Soviet Air Defence Force officers Soviet military engineers Ukrainian generals Major generals of Ukraine Electronics engineers Radar signal processing Recipients of the Order of the Red Star Recipients of the Order "For Service to the Homeland in the Armed Forces of the USSR", 3rd class
Vladimir Varyukhin
Engineering
2,073
11,789,067
https://en.wikipedia.org/wiki/Compatible%20ink
Compatible ink (or compatible toner) is manufactured by third-party manufacturers and is designed to work in designated printers without infringing on the patents of printer manufacturers. Compatible inks and toners may come in a variety of packaging, including sealed plastic wraps or taped plastic wraps. Regardless of packaging, compatible products are generally priced lower than original equipment manufacturer (OEM) brand inks and toners. While there has been considerable debate and litigation involving the ink and toner patents of printer manufacturers, third-party manufacturers continue to thrive. Manufacturers of compatible ink and toner products currently control about 25% of the ink and toner market, worth well over $8 billion annually. Types Compatible ink is manufactured for several types of machines, including fax machines, laser printers, inkjet printers, multifunction printers, and copiers. Aside from compatible products, three other sources of consumables are also available to supply these machines: OEM brand ink and toner, remanufactured toner and ink cartridges, and refilled ink and toner cartridges. Compatible ink manufacturers differentiate their product by using all new parts, whereas other ink replacements recycle used OEM parts. Compatible ink and toner products tend to offer greater value than original, genuine OEM ink and toner cartridges: ink and toner manufactured by a third party is classified as compatible when it consists of new parts made for another manufacturer's printer, reducing cost for the end user. Comparison of performance, quality and reliability The performance of a printer cartridge needs to be measured by parameters such as: the mechanism of printing (toner or ink-jet), which affects the resolution and print rate; print quality; the percentage of useful pages (to the standard required, e.g. for business use) printed by the cartridge; page yield (number of pages printed per cartridge); printer compatibility; etc.
A comparison between OEM and compatible cartridges for a specific printer needs to take the above parameters into account. For example, a remanufactured cartridge may be cheaper to purchase, but may not print as many useful pages. Reliability and consistency associated with an OEM cartridge may be more important than price, for example, when printing output for important business. One independent test in 2004 using a compatible ink for one type of printer showed little or no difference in quality between the compatible and OEM products. Compatible ink cartridges vary from supplier to supplier, owing to the type of ink used in the printer, the chip (or absence of a chip) on the cartridge, and the actual manufacture of the cartridge itself. Prices, quality and comparisons with original OEM cartridges can also vary by manufacturer and printer. Some compatible cartridges will work perfectly in some printers. See also Vendor lock-in Life-cycle assessment References Inks Printing materials Competition (economics)
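The comparison parameters discussed above (purchase price, page yield, and fraction of useful pages) can be combined into a single effective cost per page. The figures in the sketch below are hypothetical, chosen only to illustrate the trade-off:

```python
def cost_per_page(cartridge_price, page_yield, useful_fraction=1.0):
    """Effective cost per useful page:
    price / (rated page yield * fraction of pages that are actually usable)."""
    return cartridge_price / (page_yield * useful_fraction)

# Hypothetical figures: an OEM cartridge vs. a cheaper compatible one
# with a slightly lower yield and 95% useful pages.
oem = cost_per_page(45.00, 1500)            # 0.03 per page
compatible = cost_per_page(15.00, 1200, 0.95)
print(round(oem, 4), round(compatible, 4))
```

On these numbers the compatible cartridge still comes out cheaper per useful page, but the gap narrows as yield or print quality drops, which is exactly why yield and quality belong in the comparison alongside price.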
Compatible ink
Physics
580
63,824,616
https://en.wikipedia.org/wiki/NGC%202950
NGC 2950 is a lenticular galaxy in the northern constellation of Ursa Major, about 50 million light years from the Milky Way and receding with a heliocentric radial velocity of 1,329 km/s. It was discovered in 1790 by the Anglo-German astronomer William Herschel. NGC 2950 is a field galaxy: it is not part of a galaxy cluster or galaxy group, and thus is gravitationally isolated. Nine certain and four possible dwarf galaxies have been identified around NGC 2950. The morphological classification of this galaxy is RSB0(r), indicating a barred lenticular galaxy (SB0) with outer (R) and inner (r) ring structures. It hosts two nested stellar bars; the rotation frequency of the secondary bar is higher than that of the primary one. Double bars of this type are relatively common, having been found in ~30% of barred lenticulars. The inner bar appears to be counter-rotating relative to the outer bar, with the two passing cleanly through each other. The stellar mass of the galaxy is while the halo mass is . References External links Lenticular galaxies Field galaxies Ursa Major 2950 27765 5176
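As a rough illustration of how a recession velocity maps to a distance, the sketch below applies Hubble's law with an assumed H0 = 70 km/s/Mpc. The result differs from the ~50 million light-year figure quoted above, which is expected: peculiar velocities and redshift-independent distance estimates mean the pure Hubble-flow distance is only an approximation:

```python
MLY_PER_MPC = 3.262  # one megaparsec expressed in millions of light-years

def hubble_distance_mly(v_kms, H0=70.0):
    """Hubble-law distance d = v / H0 (in Mpc), converted to millions of
    light-years. H0 in km/s/Mpc; the default value is an assumption."""
    return v_kms / H0 * MLY_PER_MPC

print(round(hubble_distance_mly(1329)))  # ~62 Mly at the assumed H0
```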
NGC 2950
Astronomy
242
19,822,852
https://en.wikipedia.org/wiki/Residential%20cluster%20development
A residential cluster development, or open space development, is the grouping of residential properties on a development site so that the extra land can be used as open space, recreation or agriculture. It is increasingly popular in subdivision development because it allows the developer to spend much less on land while obtaining much the same price per unit as for detached houses. The shared garden areas can be a source of conflict, however. Claimed advantages include more green/public space, a closer community, and optimal stormwater management. Cluster development often encounters planning objections. According to William H. Whyte, the author of “Cluster Development”, there are two types of cluster development: townhouse development and super development. Examples of townhouse development include Morrell Park, Philadelphia, Pennsylvania; Hartshone in Richmond; and Dudley Square in Shreveport. Examples of super development include Reston, Virginia; Crofton, Maryland; and Americana Fairfax in Virginia. Background In many ways, cluster development has been practiced since the earliest communities, from the medieval village to the New England town. However, it was formalized as a modern concept only with the onset of suburban sprawl and the ubiquity of detached house developments. The idea of a cluster development was created as an alternative to the conventional subdivision. The first conscious application of cluster development was in Radburn, New Jersey, in 1928. It was based on English planning and Ebenezer Howard's Garden City movement but used principles of cluster development. Following Radburn, many other towns in New Jersey applied those principles to their planning, notably the village green in Hillsborough, New Jersey, and Brunswick Hill in South Brunswick. In the rest of the country the use of cluster development grew principally in Maryland and Virginia, notably in Reston and Americana Fairfax.
Currently cluster development is applied all over the United States. There is a particularly strong push for it in the Midwestern States that have had significant problems with urban sprawl, such as Minnesota, Illinois, Ohio, and Wisconsin. Cluster development, also known as conservation development, is a site planning approach that is an alternative to conventional subdivision development. It is a practice of low-impact development that groups residential properties in a proposed subdivision closer together in order to utilize the rest of the land for open space, recreation or agriculture. Cluster development differs from a planned unit development (PUD), which contains a mix of residential, commercial, industrial, or other uses; cluster development primarily focuses on residential areas. Purpose The purpose of cluster development is to: promote integrated site design that is considerate of the natural features and topography; protect environmentally sensitive areas of the development site, as well as permanently preserve important natural features, prime agricultural land, and open space; minimize non-point source pollution by reducing the area of impervious surfaces on site; encourage cost savings on infrastructure and maintenance through practices such as decreasing the area that needs to be paved and the distance that utilities need to be run; and, above all, create more area for open space, recreation and social interaction. Benefits Because more of the ground cover is porous and there are fewer impervious surfaces such as asphalt and concrete, the risk of flooding and erosion from stormwater is reduced. Economic benefits of cluster development can include less infrastructure to build: fewer roads, sewers, and utility lines.
The higher density of the clusters of housing also tends to mean more efficiency for services such as public transit, and can also promote increased bicycle usage and walking. The extra open space made available by this type of development leaves room for parks, trails, and community-supported agriculture. Issues Following World War II, migration from the cities to suburbs became a dominant trend in America. People were acquiring any land they could find; as a result developers attempted to squeeze as many lots as they could on development sites. Communities then developed zoning regulations to limit the number of units and density allowed on a site. Though this zoning protected land for communities and to an extent preserved land from development, it was what ultimately led to the suburban sprawl as we know it today. It is this zoning that cluster development attempts to amend, and is the primary issue it faces. Most municipalities have established zoning which restricts developers, planning boards and communities to using only conventional subdivision development. Thus, the practice of traditional development is difficult to change because of the set standard, familiarity with the procedure, and the fear of undertaking something new. In response to this, groups such as the American Planning Association (APA) have developed a model ordinance that provides the framework for cluster development. This ordinance is not difficult to implement administratively, but politically, it is problematic because of conservative resistance. People's perception of personal space has a large part to do with this resistance. In many cases people chose to live in suburbs with the intention of having a large lot property; therefore, it is hard to convince those individuals to live closer together. Convincing people to accept small lot sizes and higher density living remains one of the biggest obstacles to cluster development.
This obstacle can be mostly overcome with proper site design, which grants homes unobstructed views and effective private space. Educating people about the benefits of better community and open space can also serve to change perceptions. An additional obstacle to cluster development is the difficulty of creating small lot sizes when no municipal sewer system is in place. When septic systems are used, enough land needs to be available on each building lot for a leach field (sometimes land is required for two leach fields, the additional land set aside as a back-up). The amount of land needed is in proportion to the size of the septic system and the soil conditions, which must allow for the percolation of wastewater safely into the ground. In areas where well water is used, additional lot area may be required to sufficiently separate the well from the leach field. This can lead to minimum lot sizes of or more, making cluster development difficult. However, installing a package wastewater treatment plant (WWTP) for the development (which acts as a small cluster plant, eliminating the need for individual septic tanks) or using biofilters with each septic system makes smaller lot sizes acceptable. In addition, providing a package WWTP reduces the diameter and depth of collection sewer lines for the cluster, thus reducing the overall cost of infrastructure. The final primary issue with cluster development is dealing with open, recreational, and agricultural space. These areas serve as benefits in many respects but also raise issues that must be dealt with. The maintenance of open and recreational space requires the formation of homeowners' associations, which collect fees for taxes, insurance and general upkeep. This would not be necessary under a typical subdivision, but people would have their own maintenance expenses.
As to agriculture: people enjoy living next to it until there is a need to apply fertilizer or pesticides. This fact cannot be avoided, but through proper use of cluster development, there can be wider gaps and barriers between agricultural land and residential properties, which would limit exposure to unwanted byproducts. Application The model ordinance for cluster development is section 4.7 in the Smart Growth Codes, issued by the American Planning Association. Along with introducing the concept of residential cluster development, the ordinance outlines the process of application, site planning and implementation. The primary requisites for application of cluster development are that all principal or accessory uses are allowed and that multifamily dwellings, duplexes, and townhouses are permitted. Maximum lot coverage, floor area ratios, building height, and parking requirements likewise apply to the entire site as opposed to the individual lot. Provisions of a cluster development require that the site is at least 2 to and there is no minimum to lot dimensions; furthermore each house can be no more than from the street with a yard that is at least . There also needs to be the ability to place more than one principal building on each lot, and lastly no less than 25% of the site is used for open space. Included in the application, the site plan is required to consist of the street and sidewalk layout, the maximum number and type of dwelling units proposed, and how much area they will occupy, with calculations; as well as the area of parking, open space, and other accessories. To calculate the permitted number of dwellings, one must measure the gross area of the site in acres and tenths of an acre, then subtract the gross area of the public and private streets and publicly dedicated improvements; the remainder will be the buildable area.
Then divide the net buildable area by the smallest minimum lot size; round this number down, and the result is the maximum number of units. Design features There are various distinct design features in cluster development, notably: the consideration of natural features/topography, smaller lot size, the use of cul-de-sacs, and the use of certain waste/storm water management techniques. Along with site design, waste/storm water management design features are a principal aspect of cluster development. By maximizing overland water flow and strategically using landforms and plants to slow, hold, and treat runoff, most stormwater can be handled. There are also many options for dealing with wastewater. Techniques such as community drain fields, irrigation systems, and package plants can dramatically reduce the cost of infrastructure and improve the environment. References See also Urban planning
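The dwelling-unit calculation described above can be sketched in a few lines. This is a hypothetical illustration only; the function name and the example figures are invented, and real ordinances may add further adjustments.

```python
# Hypothetical sketch of the APA model-ordinance dwelling-unit calculation:
# subtract street/dedication area from the gross site area, divide the
# remaining buildable area by the smallest minimum lot size, round down.
import math

def max_dwelling_units(gross_site_acres, street_and_dedication_acres,
                       smallest_min_lot_acres):
    """Return the maximum number of dwelling units permitted on the site."""
    buildable_acres = gross_site_acres - street_and_dedication_acres
    # Round down to the nearest whole number of units.
    return math.floor(buildable_acres / smallest_min_lot_acres)

# Example (invented numbers): a 10.0-acre site with 1.5 acres of streets
# and a 0.25-acre minimum lot size yields floor(8.5 / 0.25) = 34 units.
print(max_dwelling_units(10.0, 1.5, 0.25))  # 34
```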
Residential cluster development
Engineering
1,913
18,829,483
https://en.wikipedia.org/wiki/Formal%20scheme
In mathematics, specifically in algebraic geometry, a formal scheme is a type of space which includes data about its surroundings. Unlike an ordinary scheme, a formal scheme includes infinitesimal data that, in effect, points in a direction off of the scheme. For this reason, formal schemes frequently appear in topics such as deformation theory. But the concept is also used to prove a theorem such as the theorem on formal functions, which is used to deduce theorems of interest for usual schemes. A locally Noetherian scheme is a locally Noetherian formal scheme in the canonical way: the formal completion along itself. In other words, the category of locally Noetherian formal schemes contains all locally Noetherian schemes. Formal schemes were motivated by and generalize Zariski's theory of formal holomorphic functions. Algebraic geometry based on formal schemes is called formal algebraic geometry. Definition Formal schemes are usually defined only in the Noetherian case. While there have been several definitions of non-Noetherian formal schemes, these encounter technical problems. Consequently, we will only define locally noetherian formal schemes. All rings will be assumed to be commutative and with unit. Let A be a (Noetherian) topological ring, that is, a ring A which is a topological space such that the operations of addition and multiplication are continuous. A is linearly topologized if zero has a base consisting of ideals. An ideal of definition I for a linearly topologized ring is an open ideal such that for every open neighborhood V of 0, there exists a positive integer n such that Iⁿ ⊆ V. A linearly topologized ring is preadmissible if it admits an ideal of definition, and it is admissible if it is also complete. (In the terminology of Bourbaki, this is "complete and separated".) Assume that A is admissible, and let I be an ideal of definition. A prime ideal of A is open if and only if it contains I.
The set of open prime ideals of A, or equivalently the set of prime ideals of A/I, is the underlying topological space of the formal spectrum of A, denoted Spf A. Spf A has a structure sheaf which is defined using the structure sheaf of the spectrum of a ring. Let {I_λ} be a neighborhood basis for zero consisting of ideals of definition. All the spectra Spec A/I_λ have the same underlying topological space but a different structure sheaf. The structure sheaf of Spf A is the projective limit lim← O_{Spec A/I_λ}. It can be shown that if f ∈ A and D_f is the set of all open prime ideals of A not containing f, then O_{Spf A}(D_f) = Â_f, where Â_f is the completion of the localization A_f. Finally, a locally noetherian formal scheme is a topologically ringed space (X, O_X) (that is, a ringed space whose sheaf of rings is a sheaf of topological rings) such that each point of X admits an open neighborhood isomorphic (as topologically ringed spaces) to the formal spectrum of a noetherian ring. Morphisms between formal schemes A morphism f : X → Y of locally noetherian formal schemes is a morphism of them as locally ringed spaces such that the induced map O_Y(U) → (f_* O_X)(U) is a continuous homomorphism of topological rings for any affine open subset U. f is said to be adic, or X is said to be a Y-adic formal scheme, if there exists an ideal of definition J of Y such that the ideal f*(J)·O_X is an ideal of definition for X. If f is adic, then this property holds for any ideal of definition. Examples For any ideal I and ring A we can define the I-adic topology on A, defined by its basis consisting of sets of the form a + Iⁿ. This is preadmissible, and admissible if A is I-adically complete. In this case Spf A is the topological space Spec A/I with sheaf of rings lim← A/Iⁿ instead of A/I. A = k[[t]] and I = (t). Then A/I = k, so the space Spf A is a single point (t) on which its structure sheaf takes value k[[t]]. Compare this to Spec A/I, whose structure sheaf takes value k at this point: this is an example of the idea that Spf A is a 'formal thickening' of A about I. The formal completion of a closed subscheme.
Consider the closed subscheme X of the affine plane over k, defined by the ideal I = (y^2 − x^3). Note that A0 = k[x,y] is not I-adically complete; write A for its I-adic completion. In this case, Spf A = X as spaces and its structure sheaf is the inverse limit of the structure sheaves of the schemes Spec A0/Iⁿ. Its global sections are A, as opposed to X, whose global sections are A/I. See also formal holomorphic function Deformation theory Schlessinger's theorem References External links formal completion Algebraic geometry Scheme theory
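The inverse system A/Iⁿ behind a formal spectrum can be made concrete in the simplest example A = k[[t]], I = (t). The following toy sketch (invented representation, not any standard library API; integer coefficients stand in for a field) models an element of A/Iⁿ as a list of n coefficients; the transition maps of the system forget higher coefficients, and a compatible family of truncations is exactly an element of the completion lim← A/Iⁿ.

```python
# Toy model of the inverse system A/I^n for A = k[[t]], I = (t).
# An element of A/I^n is a coefficient list [a0, ..., a_{n-1}].

def truncate(coeffs, n):
    """Image under the transition map A/I^m -> A/I^n (for n <= len(coeffs))."""
    return coeffs[:n]

def mul_mod_tn(a, b, n):
    """Multiply two truncated power series modulo t^n."""
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                out[i + j] += ai * bj
    return out

# 1 - t is a unit in k[[t]]: its inverse 1 + t + t^2 + ... is not a
# polynomial, but modulo t^n it is the finite geometric sum below.
n = 5
one_minus_t = [1, -1, 0, 0, 0]
geometric = [1, 1, 1, 1, 1]
print(mul_mod_tn(one_minus_t, geometric, n))  # [1, 0, 0, 0, 0]

# Compatibility with the transition maps of the inverse system:
print(truncate(geometric, 3))  # [1, 1, 1]
```

The fact that 1 − t becomes invertible after completion, while it is not invertible in k[x]-style polynomial rings, is one concrete way the 'formal thickening' differs from the ordinary scheme.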
Formal scheme
Mathematics
1,005
31,806,890
https://en.wikipedia.org/wiki/Snub%2024-cell%20honeycomb
In four-dimensional Euclidean geometry, the snub 24-cell honeycomb, or snub icositetrachoric honeycomb, is a uniform space-filling tessellation (or honeycomb) by snub 24-cells, 16-cells, and 5-cells. It was discovered by Thorold Gosset in his 1900 paper on semiregular polytopes. It is not semiregular by Gosset's definition of regular facets, but all of its cells (ridges) are regular, either tetrahedra or icosahedra. It can be seen as an alternation of a truncated 24-cell honeycomb, and can be represented by Schläfli symbol s{3,4,3,3}, s{3^(1,1,1,1)}, and 3 other snub constructions. It is defined by an irregular decachoron vertex figure (10-celled 4-polytope), faceted by four snub 24-cells, one 16-cell, and five 5-cells. The vertex figure can be seen topologically as a modified tetrahedral prism, where one of the tetrahedra is subdivided at mid-edges into a central octahedron and four corner tetrahedra. Then the four side-facets of the prism, the triangular prisms, become tridiminished icosahedra. Symmetry constructions There are five different symmetry constructions of this tessellation. Each symmetry can be represented by different arrangements of colored snub 24-cell, 16-cell, and 5-cell facets. In all cases, four snub 24-cells, five 5-cells, and one 16-cell meet at each vertex, but the vertex figures have different symmetry generators. See also Regular and uniform honeycombs in 4-space: Tesseractic honeycomb 16-cell honeycomb 24-cell honeycomb Truncated 24-cell honeycomb 5-cell honeycomb Truncated 5-cell honeycomb Omnitruncated 5-cell honeycomb References T. Gosset: On the Regular and Semi-Regular Figures in Space of n Dimensions, Messenger of Mathematics, Macmillan, 1900 Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 24) H.S.M.
Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 133 , o4s3s3s4o, s3s3s *b3s4o, s3s3s *b3s *b3s, o3o3o4s3s, s3s3s4o3o - sadit - O133 5-polytopes Honeycombs (geometry)
Snub 24-cell honeycomb
Physics,Chemistry,Materials_science
701
15,385,424
https://en.wikipedia.org/wiki/Gate%20%28cytometry%29
A gate in cytometry is a set of value limits (boundaries) that serve to isolate a specific group of cytometric events from a large set. Gates can be defined by discrimination analysis, or can simply be drawn around a given set of data points on a printout and then converted to a computer-useful form. Gates can be implemented with a physical blinder. Gates may be used either to selectively gather data or to segregate data for analysis. Division Gates are divided mathematically into inclusive gates and exclusive gates. Inclusive gates select data that falls within the limits set, while exclusive gates select data that falls outside the limits. Live gate A live gate is a term used for a process that prevents the acquisition by the computer of non-selected data from the flow cytometer. External links "General Flow Cytometry Glossary and Cell Cycle Analysis Terminology" The Janis V. Giorgi Flow Cytometry Laboratory, UCLA Osborne, G. W. (2000) "Regions and Gates" Flow Cytometry Software Workshop: 2000, page 3 Flow cytometry Technology systems
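The idea of a gate as a set of value limits, and the inclusive/exclusive distinction, can be sketched as follows. This is a hypothetical illustration: the events, channel names ("FSC", "SSC"), and limits are invented, and real cytometry software records many more parameters per event.

```python
# Sketch of inclusive vs. exclusive rectangular gates on cytometric events.
# Each event is a dict of channel -> measured value; a gate is a dict of
# channel -> (lo, hi) value limits.

def in_gate(event, limits):
    """True if every gated channel falls within its [lo, hi] limits."""
    return all(lo <= event[ch] <= hi for ch, (lo, hi) in limits.items())

def apply_gate(events, limits, inclusive=True):
    """Inclusive gates keep events inside the limits;
    exclusive gates keep the events outside them."""
    if inclusive:
        return [e for e in events if in_gate(e, limits)]
    return [e for e in events if not in_gate(e, limits)]

events = [
    {"FSC": 120, "SSC": 40},   # small, low-granularity event
    {"FSC": 480, "SSC": 220},  # larger cell
    {"FSC": 900, "SSC": 700},  # e.g. debris/aggregate outside the gate
]
gate = {"FSC": (100, 500), "SSC": (20, 300)}

print(len(apply_gate(events, gate, inclusive=True)))   # 2
print(len(apply_gate(events, gate, inclusive=False)))  # 1
```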
Gate (cytometry)
Chemistry,Technology,Engineering,Biology
231
1,075,379
https://en.wikipedia.org/wiki/Vitali%E2%80%93Hahn%E2%80%93Saks%20theorem
In mathematics, the Vitali–Hahn–Saks theorem, introduced by Giuseppe Vitali, Hans Hahn, and Stanisław Saks, proves that under some conditions a sequence of measures converging point-wise does so uniformly and the limit is also a measure. Statement of the theorem Let (S, Σ, m) be a measure space with m(S) < ∞, and let (λ_n) be a sequence of complex measures. Assume that each λ_n is absolutely continuous with respect to m, and that for all B ∈ Σ the finite limits λ(B) = lim_n λ_n(B) exist. Then the absolute continuity of the λ_n with respect to m is uniform in n; that is, m(B) → 0 implies λ_n(B) → 0 uniformly in n. Also λ is countably additive on Σ. Preliminaries Given a measure space (S, Σ, m), a distance can be constructed on Σ_0, the set of measurable sets B ∈ Σ with m(B) < ∞. This is done by defining d(B_1, B_2) = m(B_1 Δ B_2), where B_1 Δ B_2 = (B_1 ∖ B_2) ∪ (B_2 ∖ B_1) is the symmetric difference of the sets. This gives rise to a metric space Σ~_0 by identifying two sets B_1, B_2 ∈ Σ_0 when m(B_1 Δ B_2) = 0. Thus a point B¯ of Σ~_0 with representative B is the set of all B_1 ∈ Σ_0 such that m(B Δ B_1) = 0. Proposition: Σ~_0 with the metric defined above is a complete metric space. Proof: Let χ_B denote the characteristic function of a set B. Then d(B_1, B_2) = ∫_S |χ_{B_1} − χ_{B_2}| dm. This means that the metric space Σ~_0 can be identified with a subset of the Banach space L^1(S, Σ, m). Let (B_n) be a Cauchy sequence in Σ~_0. Then we can choose a sub-sequence (B_{n'}) such that χ(x) = lim_{n'} χ_{B_{n'}}(x) exists almost everywhere and ∫_S |χ − χ_{B_{n'}}| dm → 0. It follows that χ = χ_{B_∞} for some B_∞ ∈ Σ_0 (furthermore χ(x) = 1 if and only if χ_{B_{n'}}(x) = 1 for n' large enough, so B_∞ may be taken to be the limit inferior of the sequence (B_{n'})), and hence d(B_∞, B_n) → 0. Therefore, Σ~_0 is complete. Proof of Vitali-Hahn-Saks theorem Each λ_n defines a function λ¯_n on Σ~_0 by taking λ¯_n(B¯) = λ_n(B). This function is well defined, that is, independent of the representative B of the class B¯, due to the absolute continuity of λ_n with respect to m. Moreover λ¯_n is continuous. For every ε > 0 the set F_{k,ε} = {B¯ ∈ Σ~_0 : sup_{n≥1} |λ¯_k(B¯) − λ¯_{k+n}(B¯)| ≤ ε} is closed in Σ~_0, and by the hypothesis of point-wise convergence we have that Σ~_0 = ∪_{k≥1} F_{k,ε}. By the Baire category theorem at least one F_{k_0,ε} must contain a non-empty open set of Σ~_0. This means that there are B¯_0 ∈ Σ~_0 and a δ > 0 such that d(B, B_0) < δ implies sup_{n≥1} |λ¯_{k_0}(B¯) − λ¯_{k_0+n}(B¯)| ≤ ε. On the other hand, any B ∈ Σ with m(B) ≤ δ can be represented as B = B_1 ∖ B_2 with d(B_1, B_0) ≤ δ and d(B_2, B_0) ≤ δ. This can be done, for example, by taking B_1 = B ∪ B_0 and B_2 = B_0 ∖ B.
Thus, if m(B) ≤ δ and k ≥ k_0, then |λ_k(B)| ≤ |λ_{k_0}(B)| + |λ_{k_0}(B_1) − λ_k(B_1)| + |λ_{k_0}(B_2) − λ_k(B_2)| ≤ |λ_{k_0}(B)| + 2ε. Therefore, by the absolute continuity of λ_{k_0} with respect to m, and since ε is arbitrary, we get that m(B) → 0 implies λ_n(B) → 0 uniformly in n. In particular, m(B) → 0 implies λ(B) → 0. By the additivity of the limit it follows that λ is finitely-additive. Then, since lim_{m(B)→0} λ(B) = 0, it follows that λ is actually countably additive. References Theorems in measure theory
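The central estimate of the proof can be displayed explicitly. Writing B = B_1 ∖ B_2 with B_2 ⊆ B_1 as above, so that λ_k(B) = λ_k(B_1) − λ_k(B_2), one has for m(B) ≤ δ and k ≥ k_0:

```latex
\begin{align*}
|\lambda_k(B)|
  &\le |\lambda_{k_0}(B)| + |\lambda_k(B) - \lambda_{k_0}(B)| \\
  &\le |\lambda_{k_0}(B)| + |\lambda_k(B_1) - \lambda_{k_0}(B_1)|
      + |\lambda_k(B_2) - \lambda_{k_0}(B_2)| \\
  &\le |\lambda_{k_0}(B)| + 2\varepsilon .
\end{align*}
```

The finitely many measures λ_1, …, λ_{k_0} are each absolutely continuous with respect to m, which yields the uniformity over all n.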
Vitali–Hahn–Saks theorem
Mathematics
434
20,513,034
https://en.wikipedia.org/wiki/Polyhedron%20%28journal%29
Polyhedron is a peer-reviewed scientific journal covering the field of inorganic chemistry. It was established in 1955 as the Journal of Inorganic and Nuclear Chemistry and is published by Elsevier. Abstracting and indexing Polyhedron is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.052. References External links Inorganic chemistry journals Elsevier academic journals Academic journals established in 1955 English-language journals
Polyhedron (journal)
Chemistry
91
13,285,816
https://en.wikipedia.org/wiki/Potassium%20canrenoate
Potassium canrenoate (INN, JAN) or canrenoate potassium (USAN) (brand names Venactone, Soldactone), also known as aldadiene kalium, the potassium salt of canrenoic acid, is an aldosterone antagonist of the spirolactone group. Like spironolactone, it is a prodrug, and is metabolized to active canrenone in the body. Potassium canrenoate is notable in that it is the only clinically used antimineralocorticoid which is available for parenteral administration (specifically intravenous) as opposed to oral administration. In the UK, it is unlicensed and only used for short term diuresis in oedema or heart failure in neonates or children under specialist initiation and monitoring. See also Canrenoic acid Canrenone References 11β-Hydroxylase inhibitors Antimineralocorticoids CYP17A1 inhibitors Pregnanes Potassium compounds Prodrugs Progestogens Spirolactones Steroidal antiandrogens
Potassium canrenoate
Chemistry
229
1,789,793
https://en.wikipedia.org/wiki/Shir%C5%8D%20Ishii
Surgeon General Shirō Ishii (1892–1959) was a Japanese microbiologist and army medical officer who served as the director of Unit 731, a biological warfare unit of the Imperial Japanese Army. Ishii led the development and application of biological weapons at Unit 731 in Manchukuo during the Second Sino-Japanese War from 1937 to 1945, including the bubonic plague attacks at the Chinese cities of Changde and Ningbo, and planned the Operation Cherry Blossoms at Night biological attack against the United States. Emperor Showa rewarded him with a special service medal for his work. Ishii and his colleagues also engaged in human experimentation, resulting in the deaths of over 10,000 subjects, most of them civilians or prisoners of war. Biography Early years Shirō Ishii was born in Shibayama in Chiba Prefecture, Japan, the fourth son of Katsuya Ishii, a wealthy landowner and sake maker. The Ishii family was the community's largest landholder and exercised a feudal dominance over the local village and surrounding hamlets. Ishii attended the Chiba Middle School (now Chiba Prefectural Chiba High School) in Chiba City and the Fourth Higher School (now Kanazawa University), a higher school in Kanazawa, Ishikawa Prefecture. Some of his classmates regarded him as brash, abrasive and arrogant. His daughter Harumi felt that Shiro had been "unjustly condemned", saying "my father was a very warm-hearted person...he was so bright that people sometimes could not catch up with the speed of his thinking and that made him irritated, and he shouted at them." In 1916, Ishii enrolled at the Faculty of Medicine of Kyoto Imperial University. He graduated in 1920, and married the daughter of Araki Torasaburō, the university's president, in the same year. In 1921, Ishii was commissioned into the Imperial Japanese Army as a military surgeon with the rank of Army Surgeon, First Class (surgeon lieutenant).
In 1922, Ishii was assigned to the 1st Army Hospital and Army Medical School in Tokyo, where his work impressed his superiors enough to enable him to return to Kyoto Imperial University to pursue post-graduate medical schooling in 1924. During his studies, Ishii would often grow bacteria "pets" in multiple petri dishes, and his odd practice of raising bacteria as companions rather than as research subjects made him notable to the staff of the university. He did not get along well with his classmates; they would become infuriated as a result of his "pushy behaviour" and "indifference". One of his mentors, Professor Ren Kimura, recalled that Ishii had an odd habit of doing his laboratory work in the middle of the night, using laboratory equipment that had been carefully cleaned by his classmates earlier. His classmates would "really be mad when they came in and found the laboratory equipment dirty the next morning". In 1925, Ishii was promoted to Army Surgeon, Second Class (surgeon captain). Biological warfare project By 1927, Ishii was advocating for the creation of a Japanese bio-weapons program, and in 1928 began a two-year tour of the West, where he did extensive research on the effects of biological warfare and chemical warfare developments from World War I onwards. Ishii's travels were highly successful and helped win him the patronage of Sadao Araki, the Japanese Minister of the Army. Ishii also received the backing of Araki's ideological rival in the army, Major-General Tetsuzan Nagata, who was later considered Ishii's "most active supporter" at the Khabarovsk War Crime Trials. In January 1931, Ishii received promotion to Senior Army Surgeon, Third Class (surgeon major). According to Ishii's followers, Ishii was extremely loyal to the Emperor and had an "enthusiastic personality" and "daring and carefree attitude", with eccentric work habits such as working late at night in the lab after hanging out with friends at town. 
He was also known for his heavy drinking, womanizing and embezzling habits, which were tolerated by his colleagues. Ishii was described as a vehement nationalist, and this helped him gain access to the people who could provide him funds. In 1935, Ishii was promoted to Senior Army Surgeon, Second Class (surgeon lieutenant-colonel). On August 1, 1936, Ishii would be given formal control over Unit 731 and its research facilities. A former member of Unit 731 recalled in 1998 that when he first met Ishii in Tokyo, he was surprised at his commander's appearance: "Ishii was slovenly dressed. His uniform was covered with food stains and ashes from numerous cigarettes. His officer's sword was poorly fastened and dragged on the floor". However, in Manchuria, Ishii would transform into a different character: "he was dressed immaculately. His uniform was spotless, and his sword was tied correctly". As the leader of Unit 731, Ishii conducted a variety of experiments, including vivisections, testing biological weapons on Chinese villages, poisoning by toxins and gases and forcing inmates to inflict syphilis on each other. Ishii also reportedly showed Hideki Tojo, who would later become Prime Minister in 1941, films of the experiments over several years. Tojo considered them "unpleasant" and eventually stopped watching them. Further promotions for Ishii would follow: he was promoted to Senior Army Surgeon, First Class (surgeon colonel) in 1938, Assistant Surgeon General (surgeon Major General) in March 1941, and Surgeon General (surgeon Lieutenant General) in March 1945. Emperor Showa rewarded him with a special service medal. Towards the end of the war, Ishii would develop a plan to spread plague fleas along the populated west coast of the US, known as Operation Cherry Blossoms at Night. This was targeted for September 22 but the plan was not realized due to the surrender of Japan on August 15, 1945. 
Ishii and the Japanese government attempted to cover up the facilities and experiments, but ultimately failed with their secret university lab in Tokyo and their main lab in Harbin, China. The Japanese Army's Unit 731 War Crimes Exhibition Hall (731罪证陈列馆) in Harbin stands to this day as a museum to the unit and the atrocities they committed. Estimates for the number of people killed by Japanese biological warfare range as high as 300,000. Ishii was later granted immunity in the International Military Tribunal for the Far East by the United States government in exchange for information and research for the U.S. biological warfare program. War crimes immunity Ishii was arrested by United States authorities during the Occupation of Japan at the end of World War II and, along with other leaders, was supposed to be thoroughly interrogated by Soviet authorities. Instead, Ishii and his team managed to negotiate and receive immunity in 1946 from Japanese war crimes prosecution before the Tokyo tribunal in exchange for their full disclosure. Although the Soviet authorities wished the prosecutions to take place, the United States objected after the reports of a team of military microbiologists headed by Lieutenant Colonel Murray Sanders stated that the information was "absolutely invaluable”; it "could never have been obtained in the United States because of scruples attached to experiments on humans" and "the information was obtained fairly cheaply." On May 6, 1947, Douglas MacArthur wrote to Washington that "additional data, possibly some statements from Ishii probably can be obtained by informing Japanese involved that information will be retained in intelligence channels and will not be employed as 'War Crimes' evidence." Ishii's immunity deal was concluded in 1948 and he was never prosecuted for any war crimes or crimes against humanity. After being granted immunity, Ishii was hired by the U.S. 
government to lecture American officers at Fort Detrick on the uses of bioweapons and the findings made by Unit 731. During the Korean War, Ishii reportedly traveled to Korea to take part in the U.S. Army's alleged biological warfare activities. On 22 February 1952, Ishii was explicitly named in a statement made by North Korean Foreign Minister Pak Hon-yong, claiming that he, along with other "Japanese bacteriological war criminals", had been involved in "systematically spreading large quantities of bacteria-carrying insects by aircraft in order to disseminate contagious diseases over our frontline positions and our rear". However, whether the U.S. Army actually used biological weapons against Chinese or North Korean forces, or whether such allegations were mere propaganda, is disputed by historians. After returning to Japan, Ishii opened a clinic, performing examinations and treatments for free. He kept a diary, but it did not make reference to any of his wartime activities with Unit 731. Death In his last years, Ishii could not speak clearly; he was uncomfortable and on pain medication, speaking in a harsh voice. He died on October 9, 1959, from laryngeal cancer at the age of 67 at a hospital in Shinjuku, Tokyo. Ishii's funeral was chaired by Masaji Kitano, his second-in-command at Unit 731. According to his daughter, Ishii became a Roman Catholic shortly before his death. Ishii's daughter, Harumi Ishii, recalled in an interview that shortly before his death, Ishii's medical condition worsened: On screen Ishii was portrayed by Min Ji-hwan in the MBC TV series Eyes of Dawn, and portrayed by Gang Wang in the 1988 film Men Behind The Sun. See also Josef Mengele Operation Paperclip Khabarovsk War Crime Trials Nobusuke Kishi Sources Citations References Barenblatt, Daniel. A Plague Upon Humanity: the Secret Genocide of Axis Japan's Germ Warfare Operation, HarperCollins, 2004. Gold, Hal. Unit 731 Testimony, Charles E Tuttle Co., 1996. 
Williams, Peter and Wallace, David. Unit 731: Japan's Secret Biological Warfare in World War II, Free Press, 1989. Harris, Sheldon H. Factories of Death: Japanese Biological Warfare 1932–45 and the American Cover-Up, Routledge, 1994. Endicott, Stephen and Hagerman, Edward. The United States and Biological Warfare: Secrets from the Early Cold War and Korea, Indiana University Press, 1999. Handelman, Stephen and Alibek, Ken. Biohazard: The Chilling True Story of the Largest Covert Biological Weapons Program in the World – Told from Inside by the Man Who Ran It, Random House, 1999. Harris, Robert and Paxman, Jeremy. A Higher Form of Killing: The Secret History of Chemical and Biological Warfare, Random House, 2002. Barnaby, Wendy. The Plague Makers: The Secret World of Biological Warfare, Frog Ltd, 1999. Yang Yan-Jun and Tam Yue-Him. Unit 731: Laboratory of the Devil, Auschwitz of the East: Japanese Biological Warfare in China 1933-45. Fonthill Media, 2018. 1892 births 1959 deaths Imperial Japanese Army generals of World War II Japanese military doctors Japanese Christians Japanese human subject research Japanese biological weapons program Japanese war criminals Japanese Roman Catholics Members of the Kwantung Army Military personnel from Chiba Prefecture Converts to Christianity Kyoto University alumni Deaths from esophageal cancer in Japan Combat medics People related to biological warfare
Shirō Ishii
Biology
2,355
8,958,365
https://en.wikipedia.org/wiki/Dacrymyces%20spathularia
Dacrymyces spathularia is a species of fungus in the family Dacrymycetaceae. Basidiocarps (fruit bodies) are gelatinous, frequently spathulate (spoon-shaped), and grow on wood, mainly in the tropics and subtropics. The fungus is edible and is commercially cultivated for use as an additive in the food industry. Taxonomy The species was first described as Merulius spathularius by German-American mycologist Lewis David de Schweinitz based on a collection from North Carolina in the United States. It was moved to the newly created genus Dacryopinax by American mycologist G. W. Martin in 1948 in recognition of its fruit bodies' frequently spathulate shape. Microscopically, however, the species is not typical of the genus and this has been confirmed by recent molecular research, based on cladistic analysis of DNA sequences. Dacryopinax spathularia is not closely related to the type species (Dacryopinax elegans) and belongs elsewhere. It has been placed in a widely defined Dacrymyces, but this latter genus still awaits a comprehensive revision. Description The fruit bodies of Dacrymyces spathularia are gregarious, often clustered, and have a distinct stipe (stem) and fertile head that is flattened and fan-like (spathulate) or less commonly palmate. They are tough-gelatinous to cartilaginous and yellow to orange, usually tall and between 0.3–1.2 cm wide. Microscopically, the species has cylindrical basidiospores that become septate at maturity, measuring 7–11.5 by 3.5–4.5 μm. Similar species It resembles Guepiniopsis buccina and young specimens of Dacrymyces chrysospermus. Habitat and distribution Dacrymyces spathularia grows on both rotting coniferous and broadleaf wood; it has even been reported to grow on polyester rugs. It is widely distributed in Asia, Africa, Australia and the Pacific, North and South America, but is not known from Europe. Uses Dacryopinax spathularia is edible. 
The species is commercially cultivated to produce long-chain glycolipids used as a natural preservative in soft drinks. The process involves fermentation of Dacryopinax spathularia, using glucose as a carbon source, in aerobic submerged culture.

In China, fruit bodies are called guìhuā'ěr (桂花耳, literally "sweet osmanthus ear", referring to their resemblance to osmanthus flowers). They are sometimes included in a vegetarian dish called Buddha's delight.
https://en.wikipedia.org/wiki/Tcr-seq
TCR-Seq (T-cell Receptor Sequencing) is a method used to identify and track specific T cells and their clones. TCR-Seq exploits the unique nature of a T-cell receptor (TCR) as a ready-made molecular barcode. The technology can be applied both in single-cell sequencing and in high-throughput screens.

Background

T-cell Receptor (TCR)

T cells are a part of the adaptive immune system and play a critical role in protecting the body from foreign pathogens. T-cell receptors (TCRs) are a group of membrane proteins found on the surface of T cells that bind foreign antigens. TCRs interact with major histocompatibility complexes (MHC) on cell surfaces to recognize antigens. They are heterodimers, made up predominantly of α and β chains (or, more rarely, δ and γ chains), and consist of a variable region and a constant region. Variable regions are produced through a process called VDJ recombination, which gives each chain a unique amino acid sequence. The result is that each TCR is unique and recognizes a specific antigen.

Complementarity Determining Regions (CDRs)

Complementarity determining regions (CDRs) are part of the TCR and play an essential role in TCR-MHC interactions. CDR1 and CDR2 are encoded by V genes, while CDR3 is made from the region between V and J genes or between D and J genes (termed "VDJ genes" when referred to together). CDR3 is the most variable of the CDRs and is in direct contact with the antigen. As such, CDR3 is used as the "barcode region" to identify unique T cell populations: it is highly unlikely for two T cells to have the same CDR3 sequence unless they came from the same parental T cell.

Clonality

VDJ recombination produces such a vast number of unique TCRs that many receptors never encounter the antigen they are best suited for.
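The scale of this diversity can be illustrated with back-of-the-envelope arithmetic. The segment counts below are approximate figures for the human β-chain locus, used purely for illustration (exact counts vary by annotation), and the true diversity is orders of magnitude higher once junctional insertions and deletions are included:

```python
# Approximate functional gene-segment counts for the human TCR beta locus.
# Illustrative figures only; exact counts differ between annotations.
TRBV, TRBD, TRBJ = 48, 2, 13

# Combinations available from segment choice alone, before the junctional
# insertions and deletions that contribute most of the real-world diversity.
beta_combinations = TRBV * TRBD * TRBJ
print(beta_combinations)  # 1248
```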
When a foreign antigen is present in the body, the few T cells that recognize that antigen are positively selected, so that the body has an adequate number of T cells to mount an effective immune response. The selected T cells rapidly divide and differentiate into effector T cells through a process called clonal expansion, which retains the TCR sequence (including the CDR3 sequence) that originally recognized the antigen. TCR-Seq uses the unique nature of the TCR, in particular CDR3, as a molecular barcode to track T cells through processes such as differentiation and proliferation, which can serve a wide variety of purposes.

Methods

Bulk vs Single-Cell Sequencing

TCR sequencing can be performed on pooled cell populations ("bulk sequencing") or on single cells ("single-cell sequencing"). Bulk sequencing is useful for exploring entire TCR repertoires (all the TCRs within an individual or a sample) and for comparing repertoires between individuals. This method can sequence millions of cells in a single experiment. One major disadvantage, however, is that bulk sequencing cannot determine which TCR chains pair together, only each chain's frequency within the repertoire. Because so many TCRs are sampled, lower-abundance TCRs may also go undetected. Single-cell sequencing can determine TCR chain pairs, making it more useful for identifying specific TCRs. Its major disadvantages are high cost, a limited capacity of a few thousand cells, and the need for live cells, which may be more challenging to obtain.

Target Sequences

Any TCR chain can be sequenced, although the α and β chains are more commonly chosen due to their abundance in the T cell population. The β chain is of particular interest due to its higher diversity and specificity compared to other chains: the presence of a D gene component in the β chain, which is absent from the α chain, allows more diverse combinations.
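In analysis, sequenced reads are typically collapsed into clonotypes by their CDR3 sequence, the barcode described above, and reported as frequencies within the repertoire. A minimal sketch (the CDR3 sequences and function name are hypothetical, for illustration only):

```python
from collections import Counter

def clonotype_frequencies(cdr3_reads):
    """Collapse TCR-seq reads into clonotypes keyed by CDR3 amino acid
    sequence and return each clonotype's frequency in the repertoire."""
    counts = Counter(cdr3_reads)
    total = sum(counts.values())
    return {cdr3: n / total for cdr3, n in counts.items()}

# Toy reads: an expanded clone ("CASSLGQGAEAFF") dominates the sample.
reads = ["CASSLGQGAEAFF"] * 6 + ["CASSPDRGRYEQYF"] * 3 + ["CASRGDTEAFF"]
freqs = clonotype_frequencies(reads)
print(freqs["CASSLGQGAEAFF"])  # 0.6
```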
β chains are also unique to each T cell, so they can be used to identify distinct T cell populations within a sample. To perform TCR sequencing, polymerase chain reaction (PCR) amplification is performed on the CDR3 region as a measure of unique T cells within a population. The CDR3 region is chosen over CDR1 and CDR2 because it is directly responsible for antigen interactions and is generally unique to TCRs from the same lineage, which allows identification of distinct populations.

Library Preparation

The goal of this step is to generate a library of transcripts to be sequenced. There are three major ways of generating a library for TCR sequencing.

Multiplex DNA

Multiplex PCR can be employed on either genomic DNA (gDNA) or RNA that has been converted to double-stranded complementary DNA (cDNA). Primer pools containing primer pairs targeting J and V alleles are used to amplify the CDR3 region of the TCR transcript. The transcript goes through two or more rounds of PCR to amplify the region of interest, and adaptors are then ligated onto either end of the resulting product. This method is among the most widely used for generating TCR-seq libraries, as the primer pool can capture a great deal of TCR diversity. However, because it is near-impossible to optimize PCR conditions for every primer in the pool, this approach can result in amplification bias: CDR3 regions whose primers bind poorly may be under-amplified, so the abundance of amplified segments may not correspond to the actual abundance within the sample.

Target Enrichment In-Solution

This method can use gDNA or RNA converted to cDNA. The starting material is first processed to generate DNA or cDNA transcripts with indexed adaptors on the 5' and 3' ends. These transcripts are then incubated with RNA baits designed to bind regions of interest, generally the CDR3 region. The baits, which are normally bound to magnetic beads, can be isolated using a magnet.
This allows isolation of transcripts of the CDR3 region, which can then be amplified using PCR. Target enrichment using RNA baits requires fewer PCR amplification steps, which may decrease amplification bias. However, the efficiency of the magnetic capture may affect the diversity of the amplified transcripts.

5'-RACE

Rapid Amplification of cDNA Ends (RACE) is a method that uses RNA transcripts to generate the library. Although RACE can be applied at either the 3' or the 5' end, the 5' end is more commonly used for TCR-seq. The method revolves around the addition of a common 5' adaptor sequence to the transcript, which can be done in a few different ways. One approach adds the adaptor during reverse transcription: while the cDNA strand is generated from the RNA template, a forward primer adds a sequence complementary to the 5' adaptor, leading to template switching. This allows a 5' adaptor to be incorporated into the cDNA when the complementary sequence is generated. Primers can then be designed to amplify the entire region from the adaptor to the constant region, and adaptor ligation can be performed in a second PCR reaction. Because all the different transcripts now share an identical adaptor, they can be amplified using a single primer pair. This decreases amplification bias and improves the ability to detect rarer TCR populations with greater certainty. However, because TCR transcription levels differ between cells, this method cannot provide an accurate measurement of the number of different T cell types in the sample from RNA transcript levels alone.

Sequencing

Following generation of the library, the products can be sequenced, generally via Next Generation Sequencing (NGS).
Machines capable of longer reads that maintain read quality at the 3' end are important, as the CDR3 region lies at the 3' end of an approximately 500 base pair transcript. The error rate of NGS presents a challenge for analysis of TCR repertoires. Small variations in the TCR can change its specificity towards antigens, and as such may be of interest to researchers. However, sequencing errors can introduce a minor change that is misinterpreted as a distinct, low-frequency TCR population, which is a problem when analyzing changes in TCR repertoires. Efforts have been made to establish thresholds for removing low-abundance reads from analysis, as well as to develop algorithms that correct these errors.

Applications

Generally, the data collected from TCR-seq are used to compare TCR repertoires, either within the same patient at different timepoints or between different patients. Recent studies have examined the characteristics of a healthy repertoire and found a high degree of variation in TCR β chain levels and types, though a subset is shared across different individuals. However, this diversity has yet to be shown to correlate strongly with any conditions of interest, such as rates of infection or chance of cancer relapse, suggesting further research is necessary.

Infectious Diseases

Clonal expansion of T cells allows the immune system to deal with a variety of infectious diseases with high specificity. Understanding the changes that occur in the T cell repertoire following infection can therefore aid early diagnosis, disease monitoring, and therapeutic development. Acquired Immunodeficiency Syndrome (AIDS) is a devastating disease caused by Human Immunodeficiency Virus (HIV) infection, which results in the death of CD4+ T cells and in dysfunctional CD8+ T cells. Recent studies have suggested that increased TCR diversity may decrease HIV diversity and limit disease progression.
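The abundance-threshold filtering and repertoire-diversity comparisons described above can be sketched as follows. The threshold value, CDR3 sequences, and function names are illustrative assumptions, not a published pipeline; Shannon entropy is just one common diversity summary:

```python
import math
from collections import Counter

def filter_low_abundance(counts, min_count=2):
    """Drop clonotypes below a read-count threshold, a simple first-pass
    guard against sequencing errors appearing as rare clones."""
    return {c: n for c, n in counts.items() if n >= min_count}

def shannon_diversity(counts):
    """Shannon entropy of clonotype frequencies, often used to
    compare repertoire diversity between samples or timepoints."""
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values())

counts = Counter({"CASSLGQGAEAFF": 50, "CASSPDRGRYEQYF": 30,
                  "CASRGDTEAFF": 19, "CASSLGQGAEAFG": 1})  # singleton: likely error
kept = filter_low_abundance(counts)
print("CASSLGQGAEAFG" in kept)  # False: the likely error read is removed
print(round(shannon_diversity(kept), 3))
```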
Sequencing of the TCR would also increase understanding of the progression of AIDS and help predict morbidity. Additionally, sequencing the TCR repertoires of individuals with natural resistance to HIV infection could aid development of a vaccine to limit further spread of the disease.

Cancer

Cancer is the uncontrolled proliferation of malignant cells, which can spread throughout the body. It is caused by mutations within the cancer cell, which often lead to expression of mutant proteins termed neoantigens. Identification of these neoantigens has great therapeutic benefit, as they can be exploited to target cancer cells without harming normal cells. Because CD8+ T cells can recognize some neoantigens through their TCR, sequencing of TCR repertoires can help identify potential cancer biomarkers. In addition to biomarker identification, sequencing of the TCR repertoire can also track changes in cancer progression, assess responses to immunotherapy, and evaluate the tumour microenvironment for conditions that may make it permissive to cancer growth.

See also

NOMe-seq
PLAC-Seq