id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
14,719,160 | https://en.wikipedia.org/wiki/Primer-E%20Primer | Plymouth Routines In Multivariate Ecological Research (PRIMER) is a statistical package that is a collection of specialist univariate, multivariate, and graphical routines for analyzing species sampling data for community ecology. Types of data analyzed are typically species abundance, biomass, presence/absence, and percent area cover, among others. It is primarily used in the scientific community for ecological and environmental studies.
Multivariate routines include:
grouping (CLUSTER)
sorting (MDS)
principal component identification (PCA)
hypothesis testing (ANOSIM)
sample discrimination (SIMPER)
trend correlation (BEST)
comparisons (RELATE)
calculating diversity, dominance, and distribution
Permutational multivariate analysis of variance (PERMANOVA)
Routines can be resource-intensive due to their non-parametric, permutation-based nature. The package is programmed in the VB.NET environment.
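Several of these routines, ANOSIM among them, rest on rank-based permutation tests. The following is an illustrative sketch (not PRIMER's actual code) of an ANOSIM-style test: the R statistic compares mean between-group and within-group ranks of the pairwise dissimilarities, and significance comes from permuting group labels.

```python
import numpy as np
from itertools import combinations

def anosim(dist, labels, n_perm=999, seed=0):
    """ANOSIM-style permutation test (sketch).

    R = (mean between-group rank - mean within-group rank) / (M/2),
    where M = n(n-1)/2 pairwise dissimilarities.  Ties in the
    dissimilarities are broken arbitrarily in this sketch.
    """
    labels = np.asarray(labels)
    pairs = list(combinations(range(len(labels)), 2))
    d = np.array([dist[i][j] for i, j in pairs])
    ranks = d.argsort().argsort() + 1.0   # rank 1 = smallest dissimilarity
    m = len(pairs)

    def r_stat(lab):
        within = np.array([lab[i] == lab[j] for i, j in pairs])
        return (ranks[~within].mean() - ranks[within].mean()) / (m / 2.0)

    r_obs = r_stat(labels)
    rng = np.random.default_rng(seed)
    hits = sum(r_stat(rng.permutation(labels)) >= r_obs for _ in range(n_perm))
    return r_obs, (hits + 1) / (n_perm + 1)
```

For two well-separated groups the statistic approaches R = 1, while R near 0 indicates no group structure; small sample sizes limit how small the permutation p-value can be.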
References
See also
Comparison of statistical packages
List of statistical packages
Statistical software
Ecology | Primer-E Primer | [
"Mathematics",
"Biology"
] | 190 | [
"Statistical software",
"Ecology",
"Mathematical software"
] |
14,719,430 | https://en.wikipedia.org/wiki/Iobenguane | Iobenguane, or MIBG, is an aralkylguanidine analog of the adrenergic neurotransmitter norepinephrine (noradrenaline), typically used as a radiopharmaceutical. It acts as a blocking agent for adrenergic neurons. When radiolabeled, it can be used in nuclear medicine for diagnostic and therapeutic techniques as well as in neuroendocrine chemotherapy treatments.
It localizes to adrenergic tissue and thus can be used to identify the location of tumors such as pheochromocytomas and neuroblastomas. With iodine-131 it can also be used to treat tumor cells that take up and metabolize norepinephrine.
Usage and mechanism
MIBG is absorbed by and accumulated in granules of adrenal medullary chromaffin cells, as well as in pre-synaptic adrenergic neuron granules. The process by which this occurs is closely related to the mechanism employed by norepinephrine and its transporter in vivo. The norepinephrine transporter (NET) provides norepinephrine uptake at the synaptic terminals and adrenal chromaffin cells. By binding to NET, MIBG finds use in both imaging and therapy.
Metabolites and excretion
Less than 10% of the administered MIBG gets metabolized into m-iodohippuric acid (MIHA), and the mechanism for how this metabolite is produced is unknown.
Diagnostic imaging
MIBG concentrates in endocrine tumors, most commonly neuroblastoma, paraganglioma, and pheochromocytoma. It also accumulates in norepinephrine transporters in adrenergic nerves in the heart, lungs, adrenal medulla, salivary glands, liver, and spleen, as well as in tumors that originate in the neural crest. When labelled with iodine-123 it serves as a whole-body, non-invasive scintigraphic screen for germ-line, somatic, benign, and malignant neoplasms originating in the adrenal glands. It can detect both intra- and extra-adrenal disease. The imaging is highly sensitive and specific.
Iobenguane concentrates in presynaptic terminals of the heart and other autonomically innervated organs, enabling its possible non-invasive use as an in vivo probe to study these systems.
Alternatives to imaging with 123I-MIBG, for certain indications and under clinical and research use, include the positron-emitting isotope iodine-124, and other radiopharmaceuticals such as 68Ga-DOTA and 18F-FDOPA for positron emission tomography (PET). 123I-MIBG imaging on a gamma camera can offer significantly higher cost-effectiveness and availability compared to PET imaging, and is particularly effective where 131I-MIBG therapy is subsequently planned, due to their directly comparable uptake.
Side effects
Side effects post imaging are rare but can include tachycardia, pallor, vomiting, and abdominal pain.
Radionuclide therapy
MIBG can be radiolabelled with the beta emitting radionuclide 131I for the treatment of certain pheochromocytomas, paragangliomas, carcinoid tumors, neuroblastomas, and medullary thyroid cancer.
Thyroid precautions
Thyroid blockade with (nonradioactive) potassium iodide is indicated for nuclear medicine scintigraphy with iobenguane/mIBG. This competitively inhibits radioiodine uptake, preventing excessive radioiodine levels in the thyroid and minimizing risk of thyroid ablation (in treatment with 131I). The minimal risk of thyroid cancer is also reduced as a result.
The dosing regime for the FDA-approved commercial 123I-MIBG product Adreview is potassium iodide or Lugol's solution containing 100 mg iodide, weight adjusted for children and given an hour before injection. EANM guidelines, endorsed by the SNMMI, suggest a variety of regimes in clinical use, for both children and adults.
Product labeling for diagnostic 131I iobenguane recommends giving potassium iodide one day before injection and continuing 5 to 7 days following. 131I iobenguane used for therapeutic purposes requires a different pre-medication duration, beginning 24–48 hours before iobenguane injection and continuing 10–15 days after injection.
Clinical trials
Iobenguane I 131 for cancers
Iobenguane I 131, marketed under the trade name Azedra, underwent a clinical trial as a treatment for malignant, recurrent, or unresectable pheochromocytoma and paraganglioma, and was approved by the FDA on July 30, 2018. The drug was developed by Progenics Pharmaceuticals.
References
External links
Adrenergic receptor antagonists
Diagnostic endocrinology
Guanidines
3-Iodophenyl compounds
Radiopharmaceuticals | Iobenguane | [
"Chemistry"
] | 1,079 | [
"Medicinal radiochemistry",
"Guanidines",
"Functional groups",
"Radiopharmaceuticals",
"Chemicals in medicine"
] |
14,719,595 | https://en.wikipedia.org/wiki/Robert%20S.%20Williamson | Robert Stockton Williamson (January 21, 1825 – November 10, 1882) was an American soldier and engineer, noted for conducting surveys for the transcontinental railroad in California and Oregon. Inducted into the Army Corps of Engineers in 1861, he had a distinguished record serving in the American Civil War, winning two brevet promotions. When the US Army Corps of Engineers established its San Francisco District office in 1866, he was appointed as the first commander of the office. Formally promoted to the rank of lieutenant colonel in 1869, he retired in 1871, because of health problems, and died in San Francisco in 1882.
Early life and career
Williamson was born in Oxford, New York and lived in Elizabeth, New Jersey. He was named after Commodore Robert F. Stockton, a family friend. He joined the Navy in 1843 as a master's mate under Stockton on the USS Princeton, the first screw-driven steam ship in the Navy. Williamson was detached from the ship 10 days before one of its guns exploded, killing several people.
It was through Stockton's influence that Williamson was appointed to the United States Military Academy. He graduated fifth in his class in 1848 and was appointed a second lieutenant in the Corps of Topographical Engineers. He was assigned to conduct surveys for proposed routes for the transcontinental railroad in California and Oregon, leading surveys of the Sierra Nevada above the Feather River alongside William Horace Warner. In 1853, War Secretary Jefferson Davis chose Williamson to lead surveys of California's southern Sierra and the mountains near Los Angeles for the Pacific Railroad. His work was published in volume 5 of the War Department's Reports of Explorations and Surveys. Williamson was then assigned to the staff of the commanding general of the Department of the Pacific, and was the engineer in charge of the military roads in southern Oregon.
Civil War
After the outbreak of the American Civil War, Williamson was commissioned with the rank of Captain into the 1st Battalion of Engineers, and was the Chief Topographical Engineer in North Carolina. He was brevetted Major on March 14, 1862, for service at the Battle of New Bern, and brevetted a Lieutenant Colonel at the Battle of Fort Macon on April 26, 1862.
He was then assigned as Chief Topographical Engineer for the Army of the Potomac. Williamson returned to California as the Chief Topographical Engineer of the Department of the Pacific. He was formally promoted to the rank of Major on May 7, 1863.
In 1863, Williamson transferred to the Corps of Engineers and served as lighthouse engineer for the Pacific Coast. He also worked on defenses and harbors along the coast.
Postbellum
In 1866, Major Williamson was appointed Commander and Officer-in-Charge when the U.S. Army Corps of Engineers established its San Francisco District Office. This office was then mainly responsible for engineering related to rivers and harbors along the entire Pacific coast, from Canada to Mexico, including Hawaii. He remained in this position until 1871.
He was formally promoted to Lieutenant Colonel on February 2, 1869, just before submitting his survey on improvements to San Pedro Bay, California. This proposed construction of a jetty, the first federal harbor works at the site of the future Port of Los Angeles. The project would enhance shipping and also help entice the Southern Pacific Railroad to build to the harbor rather than to San Diego.
In 1870, he was elected as a member to the American Philosophical Society.
He retired from the Army as a lieutenant colonel in 1871, due to illness. Williamson had suffered from bad health for the last 20 years of his life and died of tuberculosis in San Francisco, California. He was buried at the Masonic Cemetery in San Francisco.
Legacy
In California, Mount Williamson is named for him.
Williamson Mountain and the Williamson River in Oregon are named in his honor.
A western North American woodpecker, the Williamson's sapsucker, and the mountain whitefish, Prosopium williamsoni, are named after him.
Williamson Valley (Arizona) is named after him.
Notes
References
External links
Report Upon the Removal of Blossom Rock San Francisco Harbor, California. Williamson, R. S. and W. H. Heuer. 1870.
1825 births
1882 deaths
Engineers from Elizabeth, New Jersey
United States Army Corps of Topographical Engineers
United States Military Academy alumni
United States Army officers
19th-century American explorers
Union army officers
People from Oxford, New York
Engineers from New York (state)
Burials at Masonic Cemetery (San Francisco)
Military personnel from New Jersey | Robert S. Williamson | [
"Engineering"
] | 898 | [
"United States Army Corps of Topographical Engineers",
"Civil engineering organizations"
] |
14,719,776 | https://en.wikipedia.org/wiki/Copper%20sweetening | Copper sweetening is a petroleum refining process using a slurry of clay and cupric chloride to oxidize mercaptans. The resulting disulfides are less odorous and usually very viscous, and are usually removed from the lower-boiling fractions and left in the heavy fuel oil fraction.
Copper sweetening introduces trace amounts of copper into the resulting products, which tends to have detrimental effects, as it leads to the formation of gummy residues. Other sources of copper include contact with refinery parts made of copper and copper alloys. Copper is one of the most active instability promoters, and concentrations as low as 0.1 ppm can have a marked negative effect. To combat these effects, metal deactivators are added to some fuels.
References
See also
Sour crude oil
Sweet crude oil
Oil refining
Chemical processes | Copper sweetening | [
"Chemistry"
] | 165 | [
"Petroleum stubs",
"Petroleum technology",
"Chemical processes",
"Petroleum",
"Oil refining",
"nan",
"Chemical process engineering",
"Chemical process stubs"
] |
3,106,440 | https://en.wikipedia.org/wiki/Rankine%20vortex | The Rankine vortex is a simple mathematical model of a vortex in a viscous fluid. It is named after its discoverer, William John Macquorn Rankine.
The vortices observed in nature are usually modelled with an irrotational (potential or free) vortex. However, in a potential vortex the velocity becomes infinite at the vortex center. In reality, very close to the origin the motion resembles a solid-body rotation. The Rankine vortex model assumes a solid-body rotation inside a cylinder of radius a and a potential vortex outside the cylinder. The radius a is referred to as the vortex-core radius. The azimuthal velocity of the Rankine vortex, expressed in the cylindrical-coordinate system (r, θ, z), is given by

$$u_\theta(r) = \begin{cases} \dfrac{\Gamma r}{2\pi a^2}, & r \le a, \\[4pt] \dfrac{\Gamma}{2\pi r}, & r > a, \end{cases} \qquad u_r = u_z = 0,$$

where Γ is the circulation strength of the Rankine vortex. Since solid-body rotation is characterized by an azimuthal velocity Ωr, where Ω is the constant angular velocity, one can also use the parameter Ω = Γ/(2πa²) to characterize the vortex.

The vorticity field associated with the Rankine vortex is

$$\omega_z(r) = \begin{cases} 2\Omega = \dfrac{\Gamma}{\pi a^2}, & r \le a, \\[4pt] 0, & r > a. \end{cases}$$

At all points inside the core of the Rankine vortex, the vorticity is uniform at twice the angular velocity of the core, whereas the vorticity is zero at all points outside the core because the flow there is irrotational.
In reality, vortex cores are not always circular; and vorticity is not exactly uniform throughout the vortex core.
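As a numerical sketch, the piecewise azimuthal velocity of the model (solid-body rotation inside the core, 1/r decay outside) can be evaluated as follows; the parameter names `gamma` and `a` stand for the circulation and core radius:

```python
import numpy as np

def rankine_velocity(r, gamma=1.0, a=1.0):
    """Azimuthal velocity u_theta of a Rankine vortex.

    Solid-body rotation (u ~ r) for r <= a, potential-vortex decay
    (u ~ 1/r) for r > a; the profile is continuous and peaks at r = a.
    """
    r = np.asarray(r, dtype=float)
    inside = gamma * r / (2.0 * np.pi * a**2)
    # guard against division by zero when np.where evaluates both branches
    outside = gamma / (2.0 * np.pi * np.maximum(r, 1e-300))
    return np.where(r <= a, inside, outside)
```

Evaluating at the core radius gives the peak velocity Γ/(2πa), and the two branches match there, which is why the Rankine profile is continuous.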
See also
Kaufmann (Scully) vortex – an alternative mathematical simplification for a vortex, with a smoother transition.
Lamb–Oseen vortex – the exact solution for a free vortex decaying due to viscosity.
Burgers vortex
References
External links
Streamlines vs. Trajectories in a Translating Rankine Vortex: an example of a Rankine vortex imposed on a constant velocity field, with animation.
Equations of fluid dynamics
Vortices | Rankine vortex | [
"Physics",
"Chemistry",
"Mathematics"
] | 369 | [
"Equations of fluid dynamics",
"Equations of physics",
"Vortices",
"Fluid dynamics",
"Dynamical systems"
] |
3,106,703 | https://en.wikipedia.org/wiki/RE%20%28complexity%29 | In computability theory and computational complexity theory, RE (recursively enumerable) is the class of decision problems for which a 'yes' answer can be verified by a Turing machine in a finite amount of time. Informally, it means that if the answer to a problem instance is 'yes', then there is some procedure that takes finite time to determine this, and this procedure never falsely reports 'yes' when the true answer is 'no'. However, when the true answer is 'no', the procedure is not required to halt; it may go into an "infinite loop" for some 'no' cases. Such a procedure is sometimes called a semi-algorithm, to distinguish it from an algorithm, defined as a complete solution to a decision problem.
Similarly, co-RE is the set of all languages that are complements of a language in RE. In a sense, co-RE contains languages of which membership can be disproved in a finite amount of time, but proving membership might take forever.
Equivalent definition
Equivalently, RE is the class of decision problems for which a Turing machine can list all the 'yes' instances, one by one (this is what 'enumerable' means).
Each member of RE is a recursively enumerable set and therefore a Diophantine set.
To show the equivalence, note that if there is a machine E that enumerates all accepted inputs, then another machine that takes a string s as input can run E and accept if s is ever enumerated. Conversely, if a machine M accepts exactly the inputs in a language, then another machine can enumerate all strings in the language by interleaving simulations of M on every input and outputting the strings that are accepted (there is an order of execution that eventually reaches every execution step, because there are countably many ordered pairs of inputs and steps).
Relations to other classes
The set of recursive languages (R) is a subset of both RE and co-RE. In fact, it is the intersection of those two classes, because we can decide any problem for which there exists a recogniser and also a co-recogniser by simply interleaving them until one obtains a result. Therefore:
R = RE ∩ co-RE.
Conversely, the set of languages that are neither RE nor co-RE is known as NRNC. These are the set of languages for which neither membership nor non-membership can be proven in a finite amount of time, and contain all other languages that are not in either RE or co-RE. That is:
NRNC = ALL − (RE ∪ co-RE), where ALL is the class of all languages.
Not only are these problems undecidable, but neither they nor their complement are recursively enumerable.
In January 2020, a preprint announced a proof that RE is equivalent to the class MIP* (the class where a classical verifier interacts with multiple all-powerful quantum provers who share entanglement); a revised, but not yet fully reviewed, proof was published in Communications of the ACM in November 2021. The proof implies that the Connes embedding problem and Tsirelson's problem are false.
RE-complete
RE-complete is the set of decision problems that are complete for RE. In a sense, these are the "hardest" recursively enumerable problems. Generally, no constraint is placed on the reductions used except that they must be many-one reductions.
Examples of RE-complete problems:
Halting problem: Whether a program given a finite input finishes running or will run forever.
By Rice's theorem, deciding membership in any nontrivial subset of the set of recursive functions is RE-hard. It will be complete whenever the set is recursively enumerable.
John Myhill proved that all creative sets are RE-complete.
The uniform word problem for groups or semigroups. (Indeed, the word problem for some individual groups is RE-complete.)
Deciding membership in a general unrestricted formal grammar. (Again, certain individual grammars have RE-complete membership problems.)
The validity problem for first-order logic.
Post correspondence problem: Given a list of pairs of strings, determine if there is a selection from these pairs (allowing repeats) such that the concatenation of the first items (of the pairs) is equal to the concatenation of the second items.
Determining if a Diophantine equation has any integer solutions.
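The Diophantine example above also illustrates what a semi-algorithm looks like in practice: search exhaustively for a solution, halting only if one exists. The sketch below takes a polynomial as a Python callable `p` in `nvars` integer variables (an illustrative encoding, not a formal Turing-machine construction).

```python
from itertools import count, product

def diophantine_semidecider(p, nvars):
    """Semi-decide whether p(x1, ..., xn) = 0 has an integer solution.

    Searches boxes [-bound, bound]^nvars of growing size.  Halts and
    returns a solution tuple iff one exists; otherwise loops forever,
    which is why the problem is in RE but (by the MRDP theorem) not in R.
    """
    for bound in count(0):
        rng = range(-bound, bound + 1)
        for xs in product(rng, repeat=nvars):
            if p(*xs) == 0:
                return xs
```

For an unsolvable equation such as x² + 1 = 0 over the integers, the call simply never returns; the 'no' answer is never reported, exactly as the definition of RE permits.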
co-RE-complete
co-RE-complete is the set of decision problems that are complete for co-RE. In a sense, these are the complements of the hardest recursively enumerable problems.
Examples of co-RE-complete problems:
The domino problem for Wang tiles.
The satisfiability problem for first-order logic.
See also
Knuth–Bendix completion algorithm
List of undecidable problems
Polymorphic recursion
Risch algorithm
Semidecidability
References
Complexity classes
Undecidable problems | RE (complexity) | [
"Mathematics"
] | 1,005 | [
"Mathematical problems",
"Undecidable problems",
"Computational problems"
] |
3,106,763 | https://en.wikipedia.org/wiki/R%20%28complexity%29 | In computational complexity theory, R is the class of decision problems solvable by a Turing machine, which is the set of all recursive languages (also called decidable languages).
Equivalent formulations
R is equivalent to the set of all total computable functions in the sense that:
a decision problem is in R if and only if its indicator function is computable,
a total function is computable if and only if its graph is in R.
Relationship with other classes
Since we can decide any problem for which there exists a recogniser and also a co-recogniser by simply interleaving them until one obtains a result, the class is equal to RE ∩ co-RE.
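The interleaving argument can be sketched directly. Here `accepts_within(s, n)` and `rejects_within(s, n)` are hypothetical step-bounded simulators of the recognizer and co-recognizer; because every string is eventually accepted by exactly one of the two machines, the loop always halts, yielding a total decider.

```python
def decider(accepts_within, rejects_within, s):
    """Decide membership by interleaving a recognizer and a co-recognizer.

    Runs both machines with an ever-growing step budget n; whichever
    halts first determines the answer, so the procedure is total.
    """
    n = 1
    while True:
        if accepts_within(s, n):
            return True
        if rejects_within(s, n):
            return False
        n += 1
```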
References
External links
Complexity classes
Computability theory | R (complexity) | [
"Mathematics"
] | 218 | [
"Computability theory",
"Mathematical logic"
] |
3,106,825 | https://en.wikipedia.org/wiki/Three-center%20four-electron%20bond | The 3-center 4-electron (3c–4e) bond is a model used to explain bonding in certain hypervalent molecules such as tetratomic and hexatomic interhalogen compounds, sulfur tetrafluoride, the xenon fluorides, and the bifluoride ion. It is also known as the Pimentel–Rundle three-center model after the work published by George C. Pimentel in 1951, which built on concepts developed earlier by Robert E. Rundle for electron-deficient bonding. An extended version of this model is used to describe the whole class of hypervalent molecules such as phosphorus pentafluoride and sulfur hexafluoride as well as multi-center π-bonding such as ozone and sulfur trioxide.
There are also molecules, such as diborane (B2H6) and dialane (Al2H6), which instead have three-center two-electron (3c–2e) bonds.
History
While the term "hypervalent" was not introduced in the chemical literature until 1969, Irving Langmuir and G. N. Lewis debated the nature of bonding in hypervalent molecules as early as 1921. While Lewis supported the viewpoint of expanded octet, invoking s-p-d hybridized orbitals and maintaining 2c–2e bonds between neighboring atoms, Langmuir instead opted for maintaining the octet rule, invoking an ionic basis for bonding in hypervalent compounds (see Hypervalent molecule, valence bond theory diagrams for PF5 and SF6).
In a 1951 seminal paper, Pimentel rationalized the bonding in hypervalent trihalide ions (X3−, X = F, Br, Cl, I) via a molecular orbital (MO) description, building on the concept of the "half-bond" introduced by Rundle in 1947. In this model, two of the four electrons occupy an all-in-phase bonding MO, while the other two occupy a non-bonding MO, leading to an overall bond order of 0.5 between adjacent atoms (see Molecular orbital description).
More recent theoretical studies on hypervalent molecules support the Langmuir view, confirming that the octet rule serves as a good first approximation to describing bonding in the s- and p-block elements.
Examples of molecules exhibiting three-center four-electron bonding
σ 3c–4e
Triiodide
Xenon difluoride
Krypton difluoride
Radon difluoride
Argon fluorohydride
Bifluoride
SN2 reaction transition state and activated complex
Symmetric hydrogen bond
π 3c–4e
Carboxylates
Amides
Ozone
Azide
Allyl anion
Structure and bonding
Molecular orbital description
The σ molecular orbitals (MOs) of triiodide can be constructed by considering the in-phase and out-of-phase combinations of the central atom's p orbital (collinear with the bond axis) with the p orbitals of the peripheral atoms. This exercise generates the diagram at right (Figure 1). Three molecular orbitals result from the combination of the three relevant atomic orbitals, with the four electrons occupying the two MOs lowest in energy – a bonding MO delocalized across all three centers, and a non-bonding MO localized on the peripheral centers. Using this model, one sidesteps the need to invoke hypervalent bonding considerations at the central atom, since the bonding orbital effectively consists of two 2-center-1-electron bonds (which together do not violate the octet rule), and the other two electrons occupy the non-bonding orbital.
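The three-orbital picture can be reproduced with a minimal Hückel-type model. As an illustrative assumption (not taken from the article), the on-site energies are set to zero and a single nearest-neighbour coupling β links the central p orbital to each peripheral one:

```python
import numpy as np

beta = -1.0  # nearest-neighbour coupling, arbitrary units (assumed)
H = np.array([[0.0, beta, 0.0],
              [beta, 0.0, beta],
              [0.0, beta, 0.0]])

energies, orbitals = np.linalg.eigh(H)
# energies come out as -sqrt(2)|beta| (bonding), 0 (non-bonding) and
# +sqrt(2)|beta| (antibonding); with four electrons only the first two
# MOs are filled.  The zero-energy eigenvector has no amplitude on the
# central atom, matching the non-bonding MO localized on the peripheral
# centers described above.
```

Filling the bonding and non-bonding levels with the four electrons gives the overall bond order of 0.5 per adjacent pair quoted in the text.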
Valence bond (natural bond orbital) description
In the natural bond orbital viewpoint of 3c–4e bonding, the triiodide anion is constructed from the combination of the diiodine (I2) σ molecular orbitals and an iodide (I−) lone pair. The I− lone pair acts as a 2-electron donor, while the I2 σ* antibonding orbital acts as a 2-electron acceptor. Combining the donor and acceptor in in-phase and out-of-phase combinations results in the diagram depicted at right (Figure 2). Combining the donor lone pair with the acceptor σ* antibonding orbital results in an overall lowering in energy of the highest-occupied orbital (ψ2). While the diagram depicted in Figure 2 shows the right-hand atom as the donor, an equivalent diagram can be constructed using the left-hand atom as the donor. This bonding scheme is succinctly summarized by the following two resonance structures: I—I···I− ↔ I−···I—I (where "—" represents a single bond and "···" represents a "dummy bond" with formal bond order 0 whose purpose is only to indicate connectivity), which when averaged reproduces the I—I bond order of 0.5 obtained both from natural bond orbital analysis and from molecular orbital theory.
More recent theoretical investigations suggest the existence of a novel type of donor-acceptor interaction that may dominate in triatomic species with so-called "inverted electronegativity"; that is, a situation in which the central atom is more electronegative than the peripheral atoms. Molecules of theoretical curiosity such as neon difluoride (NeF2) and beryllium dilithide (BeLi2) represent examples of inverted electronegativity. As a result of this unusual bonding situation, the donor lone pair ends up with significant electron density on the central atom, while the acceptor is the "out-of-phase" combination of the p orbitals on the peripheral atoms. This bonding scheme is depicted in Figure 3 for the theoretical noble gas dihalide NeF2.
SN2 transition state modeling
The valence bond description and accompanying resonance structures A—B···C− ↔ A−···B—C suggest that molecules exhibiting 3c–4e bonding can serve as models for studying the transition states of bimolecular nucleophilic substitution reactions.
See also
Hypervalent molecule
Three-center two-electron bond
References
Chemical bonding | Three-center four-electron bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,269 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
3,107,415 | https://en.wikipedia.org/wiki/Itaconic%20acid | Itaconic acid is an organic compound with the formula . With two carboxyl groups, it is classified as a dicarboxylic acid. It is a non-toxic white solid that is soluble in water and several organic solvents. It plays several roles in biology.
Reactions
Upon heating, itaconic acid converts to its anhydride.
As a dicarboxylic acid, itaconic acid has two pKa values. At pH levels above 7, itaconic acid exists predominantly in its doubly deprotonated form, termed itaconate.
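The predominance of the dianion above pH 7 follows from standard diprotic speciation. As a rough sketch, using approximate literature pKa values (~3.85 and ~5.45, assumed here for illustration only):

```python
# Approximate literature values for itaconic acid, assumed for this sketch.
PKA1, PKA2 = 3.85, 5.45

def dianion_fraction(ph):
    """Fraction of total itaconic acid present as the doubly deprotonated
    itaconate dianion at a given pH (simple diprotic equilibrium model)."""
    h = 10.0 ** (-ph)
    ka1, ka2 = 10.0 ** (-PKA1), 10.0 ** (-PKA2)
    return 1.0 / (1.0 + h / ka2 + h * h / (ka1 * ka2))
```

At pH 7 the dianion fraction is close to 1, consistent with the statement that itaconic acid exists as itaconate above pH 7, while well below pKa1 it is nearly zero.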
As an α,β-unsaturated carbonyl compound, itaconic acid is a good Michael acceptor. Thus, nucleophiles add across the C=C bond.
(R = organic group). In this way, the flame retardant 9,10-dihydro-9-oxa-10-phosphaphenanthrene-10-oxide is incorporated into polymers.
Production
In 1836, Samuel Baup discovered itaconic acid as a by-product of the dry distillation of citric acid. The dry distillation forms itaconic anhydride, which is then hydrolyzed. In the late 1920s, itaconic acid was isolated from a fungus of the genus Aspergillus. Since the 1960s, however, it has been produced commercially by fermenting glucose, molasses, or other abundant carbon sources with a fungus such as Aspergillus terreus; Aspergillus itaconicus and Ustilago maydis have also been investigated. One generally accepted route by which fungi make itaconate is through the tricarboxylic acid cycle pathway. This pathway forms cis-aconitate, which is converted to itaconate by cis-aconitate decarboxylase. Animal cells also make itaconate by an enzyme-catalyzed reaction from cis-aconitate, an intermediate metabolite in the tricarboxylic acid cycle (i.e., TCA cycle). The itaconate-producing reaction is stimulated when the TCA cycle is suppressed.
Ustilago maydis makes itaconic acid from trans-aconitate, catalyzed by aconitate delta-isomerase; the trans-aconitate product is decarboxylated to itaconate by trans-aconitate decarboxylase (i.e., TAD1, an enzyme found in Ustilago maydis). Itaconate has also been obtained by fermenting the fungus Yarrowia lipolytica with glucose, various species of Candida fungi with glucose, the fungus Ustilago vetiveriae with glycerol, and strains of Aspergillus niger with glucose, sorbitol, or a sorbitol plus xylose mixture. Fermenting Escherichia coli bacteria with glucose, xylose, glycerol, or starch and Corynebacterium glutamicum bacteria with glucose or urea also affords itaconic acid. Ustilago maydis has, moreover, been genetically engineered to increase its itaconic acid production.
History
In the 1930s, itaconate was shown to have bactericidal actions. In 2011, Strelko et al. reported that itaconate was produced by two immortalized mammalian cell lines, cultured mouse VM-M3 brain tumor cells and RAW 264.7 mouse macrophages, and by macrophages isolated from mice. This group also showed that stimulation of mouse macrophages with the bacterial toxin lipopolysaccharide (i.e., LPS, also termed endotoxin) increased their production and secretion of itaconate. In 2013, Michelucci et al. revealed the biosynthesis pathway that makes itaconate in mammals. These publications were followed by numerous others focused on the biology of itaconate and certain itaconate-like compounds as regulators of various cellular responses in animals and possibly humans.
Biology of Itaconate
Biological studies focus on itaconate's physiological and pathological functions.
Cells making itaconate
The major cell types that normally make itaconate in response to stressful conditions are hematological cells such as the macrophages located in various tissues and the monocytes located in the bone marrow and blood. These cells are phagocytes, i.e., cells that engulf microorganisms, dead or seriously injured cells, and foreign particles, all of which cause inflammatory responses. Itaconate is also produced by certain myeloid-derived suppressor cells such as highly mature neutrophils, which are often termed granulocyte myeloid-derived suppressor cells or granulocyte MDSCs. Unlike other types of itaconate-forming cells, however, these neutrophils, which are phagocytes, tend to retain rather than release itaconate to the extracellular space.
Itaconate-forming metabolic pathway
Itaconate is a by-product of the tricarboxylic acid cycle, which consists of eight successive enzyme-catalyzed biochemical reactions that occur in the cell's mitochondria. When cis-aconitate accumulates, aconitate decarboxylase (also termed ACOD1 or cis-aconitate decarboxylase) metabolizes cis-aconitate to itaconate and carbon dioxide (CO2) in the following decarboxylation reaction:
cis-aconitate → itaconate + CO2
This itaconate is transported across the mitochondrial membrane into the cell's cytosol by the mitochondrial dicarboxylate carrier protein, the mitochondrial 2-oxoglutarate/malate carrier protein, and the citrate–malate shuttle. The cytosolic itaconate may then move from the cytosol through the cell's surface membrane to the extracellular space (this trans-membrane movement may involve a specific transport protein such as the major facilitator superfamily transport protein, i.e., MfsA, in fungi). This itaconate has mostly anti-inflammatory actions. It acts on its parent cell, other cells, and certain microorganisms by stimulating or inhibiting the activity of various response-regulating pathways in them. Itaconate's actions on its parent and other cells were long considered entirely independent of any receptor; however, itaconate stimulates certain mammalian cells by activating the OXGR1 receptor.
OXGR1 receptor
OXGR1 (also known as GPR99) is a G protein-coupled receptor that was identified in 2004 as a receptor for the tricarboxylic acid cycle intermediate α-ketoglutarate. In 2013, it was found to also be a receptor for leukotriene E4 and, to lesser extents, leukotrienes C4 and D4. Among a set of cultured human embryonic kidney HEK 293 cells made to express any one of 351 different human G protein-coupled receptors, only the cells expressing OXGR1 responded to itaconate by raising their cytosolic Ca2+ levels; HEK 293 cells expressing any of the other 350 receptors did not consistently alter their cytosolic Ca2+ levels in response to itaconate. Respiratory epithelium cells isolated from control mice (which naturally express OXGR1), but not from Oxgr1 gene knockout mice (whose cells lack OXGR1), responded to itaconic acid by raising their cytosolic Ca2+ levels and stimulating their mucociliary clearance (equivalent to stimulating the secretion of mucus). Application of itaconate to the noses of control mice, but not Oxgr1 gene knockout mice, stimulated nasal secretion of mucus. Oxgr1 gene knockout mice and Irg1 gene knockout mice (mice lacking the itaconate-producing protein IRG1) that were intranasally infected with Pseudomonas aeruginosa had greater numbers of these bacteria in their lung tissue and bronchoalveolar lavage fluid (i.e., airway washings) than control mice that respectively expressed OXGR1 and IRG1. α-Ketoglutarate and itaconate, which have similar structures, activate OXGR1-expressing HEK 293 cells at similar concentrations, i.e., between 200–300 μM/liter. These findings indicate that itaconate stimulates human HEK 293 and mouse respiratory epithelial cells by activating their OXGR1 receptors. Since OXGR1 is expressed in a wide range of tissues and mediates the allergic and inflammatory responses to the cited leukotrienes, it may be involved in the inflammatory responses detailed in the following "Actions of itaconate and its analogs" section.
That is, itaconate, like succinate (see previous paragraph), may stimulate cells by receptor-dependent and receptor-independent mechanisms. Future studies need to determine the extent to which OXGR1 contributes to the various actions of itaconate and itaconate-like compounds (see next section) as well as the potencies of each of these agents in activating OXGR1.
Itaconate and itaconate-like compounds
4-Octyl itaconate, dimethyl itaconate, and 4-ethyl itaconate have been used to mimic the biological effects of itaconate. These functional analogs of itaconate are often used in place of itaconate because of their presumed greater ability to pass through the surface membranes of, and thereby enter, cells. It should be noted that many studies have examined the actions of itaconate analogs rather than itaconate itself and that itaconate and these three analogs have on occasion shown significantly different biological activities.
The anionic forms of mesaconic and citraconic acids, i.e., mesaconate and citraconate, are isomers of itaconate that differ from itaconate in the location of their internal carbon-to-carbon double bonds (i.e., C=C). The two isomers have some but not all of the biological activities of itaconate. (Mesaconate is a natural product made by mouse macrophages.) Other compounds have been synthesized that enter cells and then break down into itaconate plus a second inflammation-inhibiting agent, carbon monoxide. These compounds, termed itaCORMs, activate some of the anti-inflammatory pathways activated by itaconate but also have the anti-inflammatory activity of carbon monoxide in suppressing production of the pro-inflammatory cytokine, interleukin-23. The itaCORMs require further study. Analyses of itaconate as well as each of the itaconate analogs, itaconate isomers, and itaCORMs may be useful for selecting the agent(s) best suited to treat the human disorders which preclinical studies suggest are improved by itaconate or an itaconate-like compound(s).
Dietary sources of itaconate and its isomers
Itaconic acid and its two isomers, mesaconic and citraconic acids, were found in rye and wheat breads with appreciably higher concentrations of itaconic and citraconic acids in their crusts (i.e., outer bread layer) than crumbs (i.e., soft inner part of the bread). Based on the average consumption of bread and bread-related baked goods in Germany, the daily intake of itaconate plus its two isomers was estimated to be from 7 to 20 micrograms. Rats have been shown to absorb the itaconic acid that was added to their diet.
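The daily-intake estimate above is, in principle, just a sum of concentration × amount consumed over the bread products eaten. A minimal sketch of that arithmetic follows; all concentration and consumption values below are hypothetical placeholders for illustration, not data from the cited study:

```python
# Hedged sketch: estimating daily dietary intake of itaconate and its isomers.
# Concentrations (µg per kg of bread) and daily consumption figures are
# illustrative placeholders, NOT measurements from the cited study.
def daily_intake_ug(products):
    """Sum of concentration (µg/kg) × daily consumption (kg) over products."""
    return sum(conc_ug_per_kg * kg_per_day for conc_ug_per_kg, kg_per_day in products)

# (concentration in µg/kg, daily consumption in kg) -- hypothetical values
example_diet = [
    (80.0, 0.10),   # e.g., rye bread crust
    (30.0, 0.15),   # e.g., wheat bread crumb
]
print(daily_intake_ug(example_diet))  # µg/day; falls in the cited 7-20 µg range
```

With these placeholder values the sketch yields 12.5 µg/day, consistent in magnitude with the 7 to 20 microgram estimate cited for German bread consumption.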
Actions of itaconate and its analogs
Itaconate and its analogs can operate concurrently through multiple pathways to induce their effects. Relevant to this, future studies must determine the role of the newly defined receptor for itaconate, OXGR1, in contributing to the mediation of the following actions of itaconate and itaconate-like compounds.
Inhibit succinate dehydrogenase
Succinate dehydrogenase (i.e., SDH) is an enzyme complex of four proteins in the mitochondrial tricarboxylic acid cycle that metabolizes succinate to fumarate. (Although bacteria lack mitochondria, their surface membranes have a similar SDH system.) Itaconate inhibits SDH's activity, thereby blocking succinate's oxidation to fumarate and causing succinate levels to increase. Itaconate has been reported to increase succinate levels in a wide variety of cells including cultured mouse RAW264.7 macrophages, macrophages differentiated from human monocytes, Huh7 human liver carcinoma cells, human MCF-7 breast cancer cells, human A549 lung adenocarcinoma cells, and the brain neurons and astrocytes generated from rat embryo brain tissue. This succinate stimulates various responses in its parent and other cells as detailed elsewhere (see SUCNR1 and succinic acid).
Inactivate KEAP1
KEAP1 (i.e., Kelch-like ECH-associated protein 1) resides in the cytoplasm of cells. It binds the transcription factor nuclear factor erythroid 2-related factor 2 (i.e., NFE2L2 or Nrf2), thereby holding it in the cytosol and unable to enter the cell nucleus where it would inhibit the expression of certain genes. Retention of Nrf2 in the cell's cytosol also promotes its degradation by E3 ubiquitin ligase. Nrf2: a) inhibits its target genes from expressing their pro-inflammatory cytokines, interleukin 1 beta, i.e., IL-1β (which is enzymatically cleaved to its active form by caspase 1), and tumor necrosis factor; b) inhibits its target genes' expression of hypoxia-inducible factor 1-alpha, which is converted enzymatically to an active form that stimulates the pro-inflammatory actions of macrophages (i.e., by inducing them to assume the M1 macrophage subtype), dendritic cells, T cells, and neutrophils; and c) reduces the cellular and tissue levels of pro-inflammatory reactive oxygen species. 4-Octyl itaconate, dimethyl itaconate, and itaconate inactivate KEAP1, thereby increasing Nrf2's entry into the cell nucleus and inhibiting production of the cited pro-inflammatory cytokines and various reactive oxygen species.
In a model of intracellular inflammation, LPS stimulated mouse bone marrow-derived macrophages to increase their levels of IL-1β, tumor necrosis factor, hypoxia-inducible factor 1-alpha, and reactive oxygen species. 4-Octyl itaconate suppressed all of these LPS-induced responses. It also reduced the production of IL-1β and tumor necrosis factor in LPS-stimulated human peripheral blood monocytes. And, in a model of LPS-induced septic shock, mice injected intraperitoneally with LPS plus 4-octyl itaconate had fewer physical symptoms of shock, lower serum levels of the pro-inflammatory cytokines, IL-1β and tumor necrosis factor, unchanged levels of the anti-inflammatory cytokine interleukin 10, and longer survival times compared to mice treated with LPS but not 4-octyl itaconate. Thus, the inhibitory effects of 4-octyl itaconate, dimethyl itaconate, and itaconate on cells appear due to their inactivation of KEAP1 and resulting movement of cytosolic Nrf2 into the cell nucleus where it inhibits its target genes from producing reactive oxygen species and the cited inflammation-promoting proteins. This mechanism may also underlie 4-octyl itaconate's ability to reduce the severity of LPS-induced shock in mice.
Inhibit NLRP3
The NLRP3-containing inflammasome, like the other types of inflammasomes, is a cytosolic multiprotein complex that, when activated, promotes inflammatory reactions. The NLRP3-containing inflammasome forms in response to danger signals (e.g., LPS, pathogens, etc.). These signals cause cytosolic NLRP3 (i.e., NLR family pyrin domain containing 3) to bind PYCARD (i.e., apoptosis-associated speck-like protein containing a CARD), which in turn binds and activates the enzyme caspase 1 to form the functional NLRP3-containing inflammasome. This inflammasome's activated caspase 1 cleaves a) the protein precursors of IL-1β and interleukin 18 into their active pro-inflammatory cytokine forms and b) gasdermin D (also termed GSDMD) into its active form that triggers its parent cell's pyroptosis response. Pyroptosis is a form of programmed cell death that causes parent cell swelling, lysis (i.e., the breakdown of their surface membranes), and the release of IL-1β and interleukin 18 into the extracellular space where they stimulate other cells to mount inflammatory responses.
In one study, cultured bone marrow-derived mouse macrophages were treated with LPS for 3 hours, 4-octyl itaconate or buffer for the next 45 minutes, nigericin or adenosine triphosphate (both agents activate NLRP3) for the next 45 minutes, and then assayed for extracellular IL-1β, interleukin 18, gasdermin D, and a protein not released by cells unless they have died, lactate dehydrogenase. Compared to cells not treated with 4-octyl itaconate, 4-octyl itaconate-treated cells released less IL-1β, interleukin 18, gasdermin D, and lactate dehydrogenase. Thus, 4-octyl itaconate suppressed the release of the two pro-inflammatory cytokines by, and reduced the death rate of, these cells. Dimethyl itaconate and itaconate likewise inhibited these cells from releasing IL-1β (release of the other proteins was not reported). Similar results occurred in studies on mononuclear cells isolated from the blood of persons who did or did not have the cryopyrin-associated periodic syndrome, i.e., CAPS. CAPS is an autoinflammatory disease due to any one of several mutations in the NLRP3 gene; these mutations cause cells to release excessive amounts of IL-1β. 4-Octyl itaconate inhibited the release of IL-1β from LPS- or Pam3CSK4-stimulated (Pam3CSK4 mimics LPS's actions), nigericin-activated mononuclear cells isolated from the blood of persons who did or did not have CAPS. Finally, the injection of monosodium urate crystals (a form of uric acid that activates the NLRP3 inflammasome) into the peritoneum of mice caused peritonitis, i.e., inflammation of the serous membrane that lines the abdominal cavity and the cavity's organs (e.g., intestines, liver, etc.). Injection of 4-octyl itaconate along with the uric acid crystals significantly reduced this inflammatory response, as indicated by the lower levels of IL-1β and another pro-inflammatory cytokine, interleukin 6 (i.e., IL-6), and fewer inflammation-inducing neutrophils in the peritoneum compared to 4-octyl itaconate-untreated mice.
These studies indicate that itaconate, dimethyl itaconate, and 4-octyl itaconate inhibit NLRP3 and thereby the formation of the active NLRP3 inflammasome. This inhibition appears responsible for the ability of itaconate, dimethyl itaconate, and 4-octyl itaconate to suppress the pro-inflammatory responses of mouse macrophages and human mononuclear cells to LPS as well as the ability of 4-octyl itaconate to suppress the peritoneal inflammatory response of mice to urate crystals.
Increase ATF3 levels
ATF3 (i.e., cyclic AMP-dependent transcription factor ATF-3) is a transcription factor that inhibits the NFKBIZ gene's expression of NF-kappa-B inhibitor zeta (i.e., IκBζ), a protein located in the cell nucleus that promotes the production of certain pro-inflammatory cytokines such as IL-6, interferon gamma, and granulocyte-macrophage colony-stimulating factor. Itaconate and dimethyl itaconate stimulate the production of ATF3 thereby suppressing the cellular levels of IκBζ and IL-6 as well as IL-6-promoted inflammatory responses.
Studies have shown that: a) Atf3 gene knockout embryonic mouse fibroblasts and bone marrow-derived mouse macrophages (these cells lack ATF3 protein) had higher levels of IκBζ and pro-inflammatory cytokines (including IL-6 in the macrophage study) than control (i.e., ATF3 protein-expressing) fibroblasts and macrophages; b) Irg1 gene knockout peritoneal macrophages (i.e., macrophages lacking the itaconate-forming enzyme, IRG1) had lower levels of ATF3 than those of control mice, but 4-octyl itaconate treatment increased their ATF3 levels; c) dimethyl itaconate inhibited the ability of LPS to increase the levels of IκBζ protein and IL-6 in mouse bone marrow-derived macrophages; d) Atf3 gene knockout mice with experimentally induced inflammation of their hearts, caused either by myocardial infarction due to the ligation of their left anterior descending coronary artery or by intraperitoneal injections of the heart-injuring drug, doxorubicin, developed greater levels of cardiac tissue inflammation, larger cardiac infarction (i.e., dead tissue) sizes, more cardiac fibrosis, poorer cardiac function, and higher blood serum levels of IL-6 than ATF3-expressing control mice; and e) 4-octyl itaconate reduced the IL-6 serum levels, cardiac inflammation, cardiac fibrosis, infarction size, and cardiac dysfunction caused by myocardial infarction or doxorubicin in Atf3 gene knockout mice. These findings suggest that 4-octyl itaconate and dimethyl itaconate have anti-inflammatory actions in these cited models of inflammation and do so by increasing ATF3 and/or decreasing IκBζ levels, which in turn reduces the levels of inflammation-promoting cytokines.
Inhibit Tet methylcytosine dioxygenase 2
Tet methylcytosine dioxygenase 2 (i.e., TET2) is an enzyme that is activated by the tricarboxylic acid cycle intermediate metabolite, α-ketoglutarate. Itaconate blocks this activation. Activated TET2 hydroxylates, i.e., adds a hydroxyl group (–OH), to the methyl group (–CH3) of 5-methylcytosine on the cytosine (i.e., C) in the CpG sites of the DNA in its target genes. The 5-hydroxymethylcytosine DNA formed by this hydroxylation may inhibit or stimulate some of these target genes' production of the proteins they direct to be made (see Gene expression). In addition, TET2 binds to two histone deacetylases, HDAC1 and HDAC2, which are thereby activated. By blocking these TET2-mediated gene expression-regulating and HDAC1/2-activating effects, itaconate has anti-inflammatory actions. For example, it suppresses the levels of the proinflammatory cytokines, IL-6 and IL-1β, in dendritic cells and macrophages.
Studies have shown that: a) itaconate blocked α-ketoglutarate from binding to and thereby activating the isolated TET2 protein in a cell-free system; b) Tet2 gene knockout bone marrow-derived macrophages (i.e., BMDMs) had far lower levels of hydroxymethylcytosine in their DNA than control macrophages; c) itaconate and 4-octyl itaconate lowered the amount of hydroxymethylcytosine in the DNA of control but not Tet2 gene knockout BMDMs; d) LPS stimulation of mouse macrophage RAW264.7 cells (these cells express TET2) caused increases in their levels of the messenger RNA (and presumably therefore the protein levels) of three proinflammatory chemokines (i.e., proteins that among other functions mobilize inflammation-promoting leukocytes), CXCL9, CXCL10, and CXCL11, but did not do so in Tet2 gene knockout RAW264.7 cells; e) itaconate reduced the ability of LPS to stimulate rises in the messenger RNA levels for IL-6 and IL-1β in RAW264.7 cells; f) 4-octyl itaconate reduced the ability of LPS to raise the messenger RNA levels of IκBζ, IL-6, CXCL9, CXCL10, and CXCL11 in the RAW264.7 cells; g) in a model of LPS-induced septic shock, LPS-treated Irg1 gene knockout mice (i.e., mice lacking the itaconate-forming protein, IRG1) had higher serum levels of IL-6, greater lung damage, and poorer survival times than control (i.e., IRG1-expressing) LPS-treated mice; h) compared to LPS-treated control mice, LPS-treated mice that were made to express an inactive TET2 protein (termed Tet2HxD) in place of active TET2 protein had lower serum levels of the pro-inflammatory cytokines IL-6 and tumor necrosis factor, lower serum levels of the proinflammatory chemokine CXCL9, lower serum levels of alanine transaminase and aspartate transaminase (i.e., liver proteins that are released into the circulation by damaged livers), less severe pulmonary edema and lung tissue injury, and longer survival times; and i) mice expressing active TET2 that received an intraperitoneal injection of itaconate 12 hours before LPS treatment likewise had lower serum levels of IL-6, tumor necrosis factor, CXCL9, alanine transaminase, and aspartate transaminase, less severe pulmonary edema and lung tissue injury, and longer survival times. These findings indicate that 4-octyl itaconate and itaconate inhibit the activation of TET2 and thereby the production of various proinflammatory cytokines and chemokines. At least some of these itaconate and 4-octyl itaconate actions appear to suppress the septic shock-like actions of LPS in mice. Further studies are needed to determine if itaconate and/or itaconate-like compounds suppress other inflammatory conditions. (Since TET2-inactivating gene mutations in humans have been associated with the development of various cancers such as acute myeloid leukemia, the possibility that itaconate's inhibition of TET2's catalytic activity may lead to these cancers requires investigation.)
Inhibit interleukin 17A
Interleukin 17 (i.e., IL-17) refers to any one of 6 closely related subtypes, IL-17A to IL-17F. IL-17A is a pro-inflammatory cytokine that is commonly elevated in cells undergoing inflammatory responses. (Some studies used the term IL-17 when referring to IL-17A or when the subtype of IL-17 measured was undefined.) Excessive IL-17A production appears to contribute to the development of various autoimmune diseases by stabilizing the messenger RNA for IκBζ and thereby increasing cellular levels of IκBζ protein and IL-6.
A study focusing on models of the skin autoimmune disease psoriasis reported that: a) cultured mouse and human keratinocytes, i.e., skin cells, treated with IL-17A increased their levels of IκBζ; b) pretreatment of these skin cells with dimethyl itaconate inhibited this increase; c) the application of imiquimod to the skin of mouse ears daily for 7 days caused psoriasis-like ear skin scaling (i.e., thickening of the skin's stratum corneum due to dry or greasy laminated masses of keratin) and edema in control mice but not in mice injected intraperitoneally with dimethyl itaconate 24 hours before application of imiquimod; and d) analysis of the ear skin of these mice found significant stimulation of various IκBζ-targeted genes in control mice but not in dimethyl itaconate-treated mice. These results suggest that dimethyl itaconate inhibited IL-17A's ability to increase IκBζ levels and thereby reduced the levels of IL-6 in mouse and human keratinocytes; this mechanism may have been responsible for the ability of dimethyl itaconate to block the psoriasis-like skin response of mice to imiquimod. Elevated levels of IL-17 (assumed to be IL-17A unless future studies define it as another IL-17 subtype) occur in the cells involved in other human autoimmune inflammatory disorders besides psoriasis. These other disorders include ankylosing spondylitis; rheumatoid arthritis; spondyloarthritis diseases (i.e., rheumatoid factor-antibody negative ankylosing spondylitis, psoriatic spondylitis, certain forms of reactive arthritis, inflammatory bowel disease-associated spondylitis, and unclassifiable spondylitis); Crohn's disease; ulcerative colitis; and Sjögren's syndrome. The effects of itaconate or one of its analogs in animal models of these autoimmune diseases should be examined in a manner similar to the studies in psoriasis.
Antibacterial actions
Itaconate can act directly on certain types of bacteria to limit their growth and disease-causing abilities. The enzyme isocitrate lyase is required for the glyoxylate cycle to operate in many bacteria. This cycle is a vital metabolic pathway that uses compounds containing 2 carbon atoms, such as acetate, to meet bacterial carbon needs when simple sugars, e.g., glucose, are unavailable. Itaconate inhibits isocitrate lyase and thereby the functioning of the glyoxylate cycle and the growth of cultured and/or phagocytosed Staphylococcus aureus (including multiple drug resistant Staphylococcus aureus), Vogesella indigofera (also termed Pseudomonas indigofera), Legionella pneumophila, Mycobacterium avium, Salmonella enterica, Coxiella burnetii, Francisella tularensis, and Acinetobacter baumannii.
Studies examining the effects of itaconate and itaconate-like compounds on phagocytosed bacteria have reported that: a) mouse bone marrow-derived macrophages exposed to live or heat-killed Staphylococcus aureus rapidly (i.e., within 1 hour) developed increases in their levels of IRG1 and IRG1's metabolite, itaconate; b) human Müller retinal glia MIO-M1 cells exposed to these live or heat-killed bacteria likewise showed rapid increases in their IRG1 levels (itaconate not measured); c) 4-octyl itaconate and dimethyl itaconate suppressed the growth of Staphylococcus aureus in mouse bone marrow-derived macrophages and Müller retinal glia MIO-M1 cells by inhibiting these cells' formation of the NLRP3 inflammasome and thereby the production of pro-inflammatory cytokines such as IL-1β; and d) itaconate suppressed the growth of Salmonella typhimurium in mouse macrophage-like RAW264.7 cells by stimulating these cells to produce reactive oxygen species. In a study of bacteria-induced endophthalmitis (i.e., eye inflammation): a) mice injected with live Staphylococcus aureus into their eyes' aqueous humor developed increased retina tissue levels of the itaconate-forming enzyme, IRG1, as well as itaconate; b) Irg1 gene knockout mice (i.e., mice lacking IRG1 protein) that had intraocular injections of these bacteria developed more severe disease than control (i.e., IRG1-expressing) mice receiving these bacterial injections; c) mice intraocularly injected with these bacteria plus itaconate, 4-octyl itaconate, or dimethyl itaconate developed less severe eye damage and fewer intraocular bacteria than mice injected with these bacteria but not with itaconate or the itaconate analogs; d) adding antibiotics to the itaconate treatment further reduced the severity of these eye infections; and e) analysis of the aqueous humor in the eyes of 22 patients with bacterial eye infections (i.e., 12 gram-positive and 10 gram-negative infections) found significantly higher levels of
itaconate than those in the eyes of 10 patients with non-infectious eye problems (e.g., retinal detachment). These findings suggest that itaconate functions to suppress the growth of the cited bacteria in mice and may also do so in humans. They also support studies to determine if itaconate or itaconate-like compounds are useful for treating human Staphylococcus aureus eye infections, other types of bacterial eye infections in animals and humans, and animal and human infections in other tissue sites besides the eye. It should be noted, however, that Staphylococcus aureus and at least one other bacterial species, Pseudomonas aeruginosa, can use host cell-derived itaconate to form a biofilm that covers their surfaces and thereby increases their survival and pathogenicity.
Antiviral actions
Itaconate suppresses the growth of certain disease-causing viruses. Zika virus causes the mosquito-transmitted human disease, Zika fever. The virus produces symptomatic disease in only 20% of infected humans. These symptoms, which are usually mild, include rashes, fevers, conjunctivitis, muscle pains, joint pains, malaise, and headaches lasting for 2–7 days. However, the virus can cause severe nervous system birth defects in babies when it is transmitted from infected mothers to their embryos. These "congenital Zika syndrome" defects include microcephaly, craniosynostosis (i.e., premature closure of the skull's fontanels), cerebellar hypoplasia, ventriculomegaly, and various other nervous system malformations. Zika virus also causes severe non-congenital nervous system inflammatory disorders such as the Guillain-Barré syndrome, encephalitis, disseminated encephalomyelitis, and transverse myelitis; in rare cases, it also causes cerebrovascular strokes. As of 2023, there were no vaccines or antiviral medications available to treat Zika fever. In cell culture studies, human A549 lung adenocarcinoma cells and Huh7 human hepatocyte-derived cancer cells were treated with buffer or 4-octyl itaconate for 2 days and then infected with Zika virus for 4 days. 4-Octyl itaconate suppressed the growth of this virus in both cancer cell types. In a model of neurological Zika disease, mice were injected intracranially with Zika virus plus or minus 4-octyl itaconate. 4-Octyl itaconate significantly reduced the number of Zika viruses in brain tissue. This study also indicated that the antiviral action of 4-octyl itaconate was associated with its inhibition of the succinate dehydrogenase enzyme and the resulting rises in brain tissue levels of succinate. Further studies are needed to determine if itaconate and/or its analogs will prove useful for treating Zika fever in humans.
4-Octyl itaconate also suppresses the proliferation of SARS-CoV-2, the virus that causes COVID-19. Treating cultured Vero cells (i.e., cells originally isolated from an African green monkey) with 4-octyl itaconate before infecting them with SARS-CoV-2 (strain #291.3 FR-4286) greatly reduced their content of this virus's RNA, the number of viral particles released by the Vero cells, and the number of Vero cells killed by the virus. 4-Octyl itaconate had similar anti-viral effects on cultured SARS-CoV-2-infected human lung cancer Calu-3 cells, human epithelial NuLi cells, and human airway epithelial cells. Further studies strongly suggested that these anti-viral actions of 4-octyl itaconate were due to its stimulating increases in the activity of the Nrf2 transcription factor (see the above section termed "Inactivate KEAP1"). Studies have also been conducted on cultured cells challenged with other disease-causing viruses. One or more of the itaconate analogs was shown to inhibit the growth of: a) herpes simplex viruses types 1 and 2 in cultured human HaCaT keratinocyte skin cells; b) vaccinia virus in human HaCaT keratinocyte skin cells and mouse bone marrow-derived macrophages; and c) Zika virus in A549 and Huh7 cells (see previous paragraph). Notably, however, 4-octyl itaconate enhanced rather than inhibited the growth of vesicular stomatitis virus in cultured 4T1 mouse breast cancer and 786-O human kidney carcinoma cells; it also reduced the inflammatory response to, and improved the survival of mice infected with, influenza A virus but did not inhibit this virus's growth in mice.
Anti-cancer actions
Individuals with inflammatory bowel diseases, i.e., ulcerative colitis and Crohn's disease, have an increased risk of developing cancer in the afflicted areas of their colons and other parts of their gastrointestinal tracts. In a murine model of inflammatory bowel disease leading to colon cancer, mice were given an intraperitoneal injection of the cancer-causing agent azoxymethane on day 0, on day 5 were given an intraperitoneal injection of dimethyl itaconate or the vehicle used to dissolve dimethyl itaconate, on days 5 through 9 were given drinking water containing the colitis-causing agent dextran sodium sulfate, and on days 10 through 25 were given normal drinking water. After repeating this cycle three times, the mice were euthanized. Compared to mice treated with the vehicle, mice treated with dimethyl itaconate showed: a) less thickened and hyperplastic colons; b) fewer inflammatory cells in their colons; c) lower colon tissue levels of the proinflammatory cytokines, IL-1β and IL-6, as well as the proinflammatory chemokines, CCL2, CCL17, and interleukin 8; and d) far fewer colon tumors. These findings indicate that dimethyl itaconate inhibited colon inflammatory responses to dextran sodium sulfate and presumably thereby colon cancer responses to azoxymethane in mice. They also support further preclinical studies to determine if itaconate-like compounds suppress human inflammation-related colon cancers.
Retinoblastoma is a cancer that develops in the retina. The retinoblastomas of patients often become resistant to carboplatin as well as other chemotherapy drugs such as etoposide and vincristine, i.e., they are multiple drug resistant retinoblastomas. 4-Octyl itaconate induces Y79-CR cells (carboplatin-resistant cells derived from the Y79 human retinoblastoma cell line) to die, apparently by ferroptosis, i.e., it increased these cells' ferrous iron and lipid peroxidation levels. Nude mice (i.e., immunodeficient mice) were implanted with Y79-CR or Y79 cells in the subcutaneous tissue of their flanks; one week later were intraperitoneally injected with 4-octyl itaconate or the vehicle used to dissolve 4-octyl itaconate once every other day for 2 weeks; and were euthanized 21 days later. Tumor masses in mice given Y79-CR cells were far smaller in 4-octyl itaconate-treated than vehicle-treated mice. Also, the differences in tumor masses between 4-octyl itaconate-treated and vehicle-treated mice transplanted with Y79 cells were much smaller than those in mice transplanted with Y79-CR cells. These results indicate that 4-octyl itaconate selectively kills multiple drug resistant Y79-CR cells that are cultured or implanted in mice and does so by triggering ferroptosis. They also support studies to learn if itaconate and itaconate-like compounds would be useful for treating humans with carboplatin-resistant or other forms of multiple drug resistant retinoblastomas and perhaps other multiple drug resistant cancers.
Thymic carcinoma is a form of thymus gland cancer. In more advanced cases, it is commonly treated with platinum-based antineoplastic drugs and lenvatinib, an inhibitor of vascular endothelial growth factor receptors. However, patients often are or become resistant to these drugs. Consequently, other agents are being evaluated as treatments for thymic carcinomas. Dimethyl itaconate decreased the proliferation of cultured Ty82 human thymic carcinoma cells but had relatively little effect on the proliferation of cultured non-cancerous human fibroblasts. Dimethyl itaconate treatment of the Ty82 cells decreased the activity of their mTOR protein as well as their PI3K/AKT/mTOR pathway. (This pathway promotes the development and/or progression of many cancers including some thymus gland cancers.) Temsirolimus, a specific inhibitor of mTOR, mimicked the action of dimethyl itaconate in suppressing the proliferation of Ty82 cells. These findings suggest that dimethyl itaconate inhibits the proliferation of Ty82 cells by suppressing the activity of their mTOR protein and PI3K/AKT/mTOR pathway. Further studies are needed to determine the effects of dimethyl itaconate, other itaconate-like compounds, and/or itaconate in treating animal models of thymic carcinomas.
Varying actions of itaconate and its analogs
One study reported that dimethyl itaconate and 4-octyl itaconate stimulated mouse bone marrow-derived macrophages to produce pro-interferon-β (i.e., the precursor to the proinflammatory cytokine IFN-β) as well as to secrete IL-6, interleukin 10, and IFN-β, whereas itaconate and 4-ethyl itaconate had far less ability to, or did not, stimulate these responses. This result suggests that future studies should examine the actions of itaconate along with those of each of its analogs.
Industrial uses
Itaconic acid's chemical structure consists of one unsaturated double bond and two carboxyl groups (see carboxylic acid). This structure renders it readily converted to many valuable bio-based materials (i.e., materials derived from a living or once-living organism). For many years, these materials were commonly produced in the large amounts needed for industrial purposes from various types of carbohydrates. Itaconic acid has also been used to make these materials. In doing so, it is a comonomer, i.e., a precursor monomer, that is readily polymerized to various desired polymers that are further altered to form some of the same or similar products made from the polymerization of carbohydrates. The products made from itaconate include synthetic styrene-butadiene-based rubber, synthetic latexes, various plastics, superabsorbent polymers that absorb large amounts of liquids (for use in, e.g., baby diapers), unsaturated polyester resins that are used to make glass fiber-reinforced plastics (e.g., fiberglass), detergents, and biofuels (i.e., fuels made from organic materials such as itaconic acid). It is also converted to methyl methacrylate, a product that has many commercial and some medical applications (see uses of methyl methacrylate). Fields using the products of itaconate include those that manufacture paint, lacquers (i.e., coatings for covering the surfaces of various objects), plasticizers, plastics, chemical fibers, hygienic materials, construction materials, and environmentally friendly fuels that can be substituted for pollution-causing, non-renewable fuels such as coal, oil, and natural gas. Itaconic acid itself may be mass-produced if it or any of the analogs synthesized from it are found to be useful for treating medical disorders.
The demand for itaconic acid has grown to such an extent that its market value is projected to reach 177 million United States dollars per year by 2028. Consequently, alternate methods are being evaluated for making products with properties similar or identical to those made from itaconic acid, using less costly substitutes for itaconic acid and/or methods that are more productive, less expensive, and/or more environmentally friendly than those used for itaconic acid. Betulin, for example, is an abundant, naturally occurring triterpene diol that is readily isolated from the bark of birch trees. Betulin forms polymers that have some of the biochemical properties found in itaconate polymers. Consequently, betulin is being studied to determine if it can be used in place of itaconic acid to form products with properties similar to those made from itaconic acid, but in economically and/or environmentally more favorable ways.
References
Enoic acids
Dicarboxylic acids
Monomers
Vinylidene compounds | Itaconic acid | [
"Chemistry",
"Materials_science"
] | 10,062 | [
"Monomers",
"Polymer chemistry"
] |
3,107,599 | https://en.wikipedia.org/wiki/Chromium%28III%29%20fluoride | Chromium(III) fluoride is an inorganic compound with the chemical formula CrF3. It forms several hydrates. The anhydrous compound is a green crystalline solid that is insoluble in common solvents, but the hydrates (violet) and (green) are soluble in water. The anhydrous form sublimes at 1100–1200 °C.
Structures
Like almost all compounds of chromium(III), these compounds feature octahedral Cr centres. In the anhydrous form, the six coordination sites are occupied by fluoride ligands that bridge to adjacent Cr centres. In the hydrates, some or all of the fluoride ligands are replaced by water.
Production
Chromium(III) fluoride is produced from the reaction of chromium(III) oxide and hydrofluoric acid:
Cr2O3 + 6 HF → 2 CrF3 + 3 H2O
The anhydrous form is produced from hydrogen fluoride and chromic chloride:
CrCl3 + 3 HF → CrF3 + 3 HCl
Another method of synthesis of CrF3 involves thermal decomposition of (NH4)3CrF6 (ammonium hexafluorochromate(III)):
(NH4)3CrF6 → CrF3 + 3 NH3 + 3 HF
A mixed-valence compound, Cr2F5 (chromium(II,III) fluoride), is also known.
Uses
Chromium(III) fluoride finds some applications as a mordant in textiles and as a corrosion inhibitor. Chromium(III) fluoride catalyzes the fluorination of chlorocarbons by HF.
References
Fluorides
Metal halides
Chromium(III) compounds | Chromium(III) fluoride | [
"Chemistry"
] | 305 | [
"Inorganic compounds",
"Fluorides",
"Metal halides",
"Salts"
] |
3,107,628 | https://en.wikipedia.org/wiki/Cobalt%28III%29%20fluoride | Cobalt(III) fluoride is the inorganic compound with the formula CoF3. Hydrates are also known. The anhydrous compound is a hygroscopic brown solid. It is used to synthesize organofluorine compounds.
The related cobalt(III) chloride is also known but is extremely unstable. Cobalt(III) bromide and cobalt(III) iodide have not been synthesized.
Structure
Anhydrous
Anhydrous cobalt trifluoride crystallizes in the rhombohedral group, specifically according to the aluminium trifluoride motif, with a = 527.9 pm, α = 56.97°. Each cobalt atom is bound to six fluorine atoms in octahedral geometry, with Co–F distances of 189 pm. Each fluoride is a doubly bridging ligand.
Hydrates
A hydrate is known. It is conjectured to be better described as .
There is a report of an hydrate , isomorphic to .
Preparation
Cobalt trifluoride can be prepared in the laboratory by treating cobalt(II) chloride with fluorine at 250 °C:
CoCl2 + 3/2 F2 → CoF3 + Cl2
In this redox reaction, Co2+ and Cl− are oxidized to Co3+ and Cl2, respectively, while F2 is reduced to F−. Cobalt(II) oxide (CoO) and cobalt(II) fluoride (CoF2) can also be converted to cobalt(III) fluoride using fluorine.
The compound can also be formed by treatment with chlorine trifluoride (ClF3) or bromine trifluoride (BrF3).
Reactions
CoF3 decomposes upon contact with water to give oxygen:
4 CoF3 + 2 H2O → 4 HF + 4 CoF2 + O2
It reacts with fluoride salts to give the anion [CoF6]3−, which also features a high-spin, octahedral cobalt(III) center.
Applications
CoF3 is a powerful fluorinating agent. Used as a slurry, it converts hydrocarbons to the perfluorocarbons:
2 CoF3 + R-H → 2 CoF2 + R-F + HF
CoF2 is the byproduct.
Such reactions are sometimes accompanied by rearrangements or other reactions. The related reagent KCoF4 is more selective.
Gaseous CoF3
In the gas phase, CoF3 is calculated to be planar in its ground state, with a 3-fold rotation axis (point group D3h). The Co3+ ion has a ground state of 3d6 5D. The fluoride ligands split this state into, in energy order, 5A', 5E", and 5E' states. The first energy difference is small, and the 5E" state is subject to the Jahn-Teller effect, so this effect needs to be considered to be sure of the ground state. The energy lowering is small and does not change the energy order. This calculation was the first treatment of the Jahn-Teller effect using calculated energy surfaces.
References
External links
National Pollutant Inventory - Cobalt fact sheet
National Pollutant Inventory - Fluoride and compounds fact sheet
Fluorides
Metal halides
Cobalt(III) compounds
Fluorinating agents | Cobalt(III) fluoride | [
"Chemistry"
] | 646 | [
"Inorganic compounds",
"Salts",
"Fluorinating agents",
"Metal halides",
"Reagents for organic chemistry",
"Fluorides"
] |
3,107,845 | https://en.wikipedia.org/wiki/Klee%27s%20measure%20problem | In computational geometry, Klee's measure problem is the problem of determining how efficiently the measure of a union of (multidimensional) rectangular ranges can be computed. Here, a d-dimensional rectangular range is defined to be a Cartesian product of d intervals of real numbers, which is a subset of Rd.
The problem is named after Victor Klee, who gave an algorithm for computing the length of a union of intervals (the case d = 1) which was later shown to be optimally efficient in the sense of computational complexity theory. The computational complexity of computing the area of a union of 2-dimensional rectangular ranges is now also known, but the case d ≥ 3 remains an open problem.
History and algorithms
In 1977, Victor Klee considered the following problem: given a collection of n intervals in the real line, compute the length of their union. He then presented an algorithm to solve this problem with computational complexity (or "running time") O(n log n) — see Big O notation for the meaning of this statement. This algorithm, based on sorting the intervals, was later shown by Michael Fredman and Bruce Weide (1978) to be optimal.
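The sorting-based approach can be illustrated in a few lines. The following Python sketch (an illustration of the idea, not Klee's original presentation) sorts the intervals by left endpoint and accumulates only the newly covered length; the sort dominates the cost, which is where the O(n log n) bound comes from.

```python
def union_length(intervals):
    """Total length covered by a union of 1-D intervals (a, b) with a <= b.

    Sorting dominates the running time, giving the O(n log n) bound that
    Fredman and Weide later showed to be optimal.
    """
    total = 0.0
    cover_end = float("-inf")  # right end of the merged run built so far
    for a, b in sorted(intervals):
        if a > cover_end:        # disjoint from everything seen so far
            total += b - a
            cover_end = b
        elif b > cover_end:      # overlaps: count only the new part
            total += b - cover_end
            cover_end = b
    return total
```

For example, the intervals (0, 2), (1, 3), and (5, 6) cover a total length of 4.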
Later in 1977, Jon Bentley considered a 2-dimensional analogue of this problem: given a collection of n rectangles, find the area of their union. He also obtained an O(n log n) algorithm, now known as Bentley's algorithm, based on reducing the problem to n 1-dimensional problems: this is done by sweeping a vertical line across the area. Using this method, the area of the union can be computed without explicitly constructing the union itself. Bentley's algorithm is now also known to be optimal (in the 2-dimensional case), and is used in computer graphics, among other areas.
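The sweep idea can be sketched without the segment tree that makes Bentley's algorithm run in O(n log n). The simplified Python version below (an illustration only, not Bentley's data structure) cuts the plane into vertical slabs at rectangle edges and recomputes the covered y-length in each slab from scratch; this costs O(n^2 log n) but yields the same union area.

```python
def union_area(rects):
    """Area of a union of axis-aligned rectangles given as (x1, y1, x2, y2).

    Each vertical slab between consecutive rectangle edges contributes
    (slab width) * (union length of the y-intervals active in the slab).
    Bentley's O(n log n) algorithm instead maintains the active set
    incrementally in a segment tree as the sweep line advances.
    """
    xs = sorted({x for x1, _, x2, _ in rects for x in (x1, x2)})
    area = 0.0
    for left, right in zip(xs, xs[1:]):
        # y-intervals of rectangles spanning this whole slab
        active = sorted((y1, y2) for x1, y1, x2, y2 in rects
                        if x1 <= left and right <= x2)
        covered, end = 0.0, float("-inf")
        for y1, y2 in active:
            if y1 > end:
                covered += y2 - y1
                end = y2
            elif y2 > end:
                covered += y2 - end
                end = y2
        area += (right - left) * covered
    return area
```

Two overlapping unit-offset 2×2 squares, for instance, cover area 4 + 4 − 1 = 7.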
These two problems are the 1- and 2-dimensional cases of a more general question: given a collection of n d-dimensional rectangular ranges, compute the measure of their union. This general problem is Klee's measure problem.
When generalized to the d-dimensional case, Bentley's algorithm has a running time of O(n^(d−1) log n). This turns out not to be optimal, because it only decomposes the d-dimensional problem into n (d−1)-dimensional problems, and does not further decompose those subproblems. In 1981, Jan van Leeuwen and Derek Wood improved the running time of this algorithm to O(n^(d−1)) for d ≥ 3 by using dynamic quadtrees.
In 1988, Mark Overmars and Chee Yap proposed an O(n^(d/2) log n) algorithm for d ≥ 3. Their algorithm uses a particular data structure similar to a kd-tree to decompose the problem into 2-dimensional components and aggregate those components efficiently; the 2-dimensional problems themselves are solved efficiently using a trellis structure. Although asymptotically faster than Bentley's algorithm, its data structures use significantly more space, so it is only used in problems where either n or d is large. In 1998, Bogdan Chlebus proposed a simpler algorithm with the same asymptotic running time for the common special cases where d is 3 or 4.
In 2013, Timothy M. Chan developed a simpler algorithm that avoids the need for dynamic data structures and eliminates the logarithmic factor, lowering the best known running time for d ≥ 3 to O(n^(d/2)).
Known bounds
The only known lower bound for any d is Ω(n log n), and optimal algorithms with this running time are known for d = 1 and d = 2. The Chan algorithm provides an upper bound of O(n^(d/2)) for d ≥ 3, so for d ≥ 3, it remains an open question whether faster algorithms are possible, or alternatively whether tighter lower bounds can be proven. In particular, it remains open whether the algorithm's running time must depend on d. In addition, the question of whether there are faster algorithms that can deal with special cases (for example, when the input coordinates are integers within a bounded range) remains open.
The 1D Klee's measure problem (union of intervals) can be solved in O(n log p) time, where p denotes the number of piercing points required to stab all intervals (the union of intervals pierced by a common point can be calculated in linear time by computing the extrema).
Parameter p is an adaptive parameter that depends on the input configuration, and the piercing algorithm yields an adaptive algorithm for Klee's measure problem.
See also
Convex volume approximation, an efficient algorithm for convex bodies
References and further reading
Secondary literature
Franco P. Preparata and Michael I. Shamos (1985). Computational Geometry (Springer-Verlag, Berlin).
Klee's Measure Problem, from Professor Jeff Erickson's list of open problems in computational geometry. (Accessed November 8, 2005, when the last update was July 31, 1998.)
References
Computational geometry
Measure theory
Mathematical problems | Klee's measure problem | [
"Mathematics"
] | 978 | [
"Computational geometry",
"Mathematical problems",
"Computational mathematics"
] |
3,107,902 | https://en.wikipedia.org/wiki/Author%20citation%20%28zoology%29 | In zoological nomenclature, author citation is the process in which a person is credited with the creation of the scientific name of a previously unnamed taxon. When citing the author of the scientific name, one must fulfill the formal requirements listed under the International Code of Zoological Nomenclature ("the Code"). According to Article 51.1 of the Code, "The name of the author does not form part of the name of a taxon and its citation is optional, although customary and often advisable." However, recommendation 51A suggests, "The original author and date of a name should be cited at least once in each work dealing with the taxon denoted by that name. This is especially important in distinguishing between homonyms and in identifying species-group names which are not in their original combinations." For the sake of information retrieval, the author citation and year appended to the scientific name, e.g. genus-species-author-year, genus-author-year, family-author-year, etc., is often considered a "de facto" unique identifier, although this usage may often be imperfect.
Rank
The Code recognizes three groups of names, according to rank:
family-group names at the ranks of superfamily, family, subfamily, tribe, subtribe (any rank below superfamily and above genus).
genus-group names at the ranks of genus and subgenus.
species-group names at the ranks of species and subspecies.
Within each group, the same authorship applies regardless of the taxon level to which the name (with, in the case of a family-group name, the appropriate ending) is applied. For example, the taxa that the red admiral butterfly can be assigned to is as follows:
Family: Nymphalidae Swainson, 1827
Subfamily: Nymphalinae Swainson, 1827
Tribe: Nymphalini Swainson, 1827
Genus: Vanessa Fabricius, 1807
Subgenus: Vanessa (Vanessa) Fabricius, 1807
Species: Vanessa atalanta (Linnaeus, 1758)
Subspecies: Vanessa atalanta atalanta (Linnaeus, 1758)
The parentheses around the author citation indicate that this was not the original taxonomic placement. In this case, Linnaeus published the name as Papilio atalanta Linnaeus, 1758.
Identity of the author(s)
In the first attempt to provide international rules for zoological nomenclature in 1895, the author was defined as the author of the scientific description, and not as the person who provided the name (published or unpublished), which had previously been the usual practice in the nomenclature of various animal groups. As a result, some disciplines such as malacology required a change in authorship for their taxonomic names, as they had been attributed to persons who had never published a scientific work.
This new rule was not sufficiently precise, so in the following decades taxonomic practice continued to diverge among disciplines and authors. The ambiguity led a member of the ICZN Commission in 1974 to provide a clearer interpretation in the second edition of the Code (effective since 1961). Here, a suggestion was made that the author is defined as the individual who "publishes the name and the qualifying conditions...or equally clearcut attribution of name and description".
The current view among some taxonomists restricts authorship for a taxonomic name to the person who wrote the textual scientific content of the original description. The author of an image is not recognized as a co-author of a name, even if the image was the only basis provided for making the name available.
If a true author of a written text is not directly recognizable in the original publication, they are not the author of a name (but the author of the work is). The text could actually be written by a different person. Some authors have copied text passages from unpublished sources without acknowledging them. In Art. 50.1.1 all these persons are excluded from the authorship of a name if they were not explicitly mentioned in the work itself as being the responsible persons for making a name available.
Most taxonomists also accept Art. 50.1.1 that the author of a cited previously published source, from which text passages were copied, is not acknowledged as the author of a name.
In some cases, the author of the description can differ from the author of the work. This must be explicitly indicated in the original publication, either by a general statement ("all zoological descriptions in this work were written by Smith") or by an individual statement ("the following three descriptions were provided by Jiménez," "this name shall be attributed to me and Wang because she contributed to the description").
In the 1800s it was the usual style to eventually set an abbreviation of another author immediately below the text of the description to indicate authorship. This is commonly accepted today; if the description is attributed to a different person, then that person is the author.
When the name of a different author was only set behind the new name in the headline (and not repeated below the description to indicate that description had been written by that person), this was a convention to indicate authorship only for the new name and not for the description. These authorships for names are not covered by Art. 50.1 and are not accepted. Only authorship for the description is accepted.
Prior to 1900-1920, there were several different conventions concerning authorship which is why we frequently find other authors than today for zoological names in early zoological literature. Art. 50.1 has been an accepted model since the mid-1900s. It eliminated the need to research who the true author was and all readers could verify and determine the name of the author in the original work itself.
Examples to illustrate practical use
In citing the name of an author, the surname is given in full, not abbreviated. The date (true year) of publication in which the name was established is added. If desired, a comma is placed between the author and date (the comma is not prescribed under the Code and carries no additional information; however, it is included in the examples therein and also in the ICZN Official Lists and Indexes).
Balaena mysticetus Linnaeus, 1758
the bowhead whale was described and named by Linnaeus in his Systema Naturae of 1758
Anser albifrons (Scopoli, 1769)
the white-fronted goose was first described (by Giovanni Antonio Scopoli), as Branta albifrons Scopoli, 1769. It is currently placed in the genus Anser, so author and year are set in parentheses. The taxonomist who first placed the species in Anser is not recorded (and much less cited), the two different genus-species combinations are not regarded as synonyms.
An author can have established a name dedicated to oneself. This is rare and against unwritten conventions, but is not restricted under the Code.
Xeropicta krynickii (Krynicki, 1833)
a terrestrial gastropod from Ukraine was first described as Helix krynickii Krynicki, 1833, who originally attributed the name to another person, Andrzejowski. But the description was written by Krynicki, and Andrzejowski had not published this name before.
Spelling of the name of the author
In a strict application of the Code, the taxon name author string components "genus," "species," and "year" can only have one combination of characters. The major problem in zoology for consistent spellings of names is the author. The Code gives neither a guide nor a detailed recommendation.
Unlike in botany, it is not recommended to abbreviate the name of the author in zoology. If a name was established by more than three authors, it is allowed to give only the first author, followed by the term "et al." ("and others").
There are no approved standards for the spellings of authors in zoology, and unlike in botany, no one has ever proposed such standards for zoological authors.
It is generally accepted that the name of the author shall be given in the nominative singular case if originally given in a different case and that the name of the author should be spelled in Latin script. There are no commonly accepted conventions on how to transcribe the names of authors if given in non-Latin script.
It is also widely accepted that the names of authors must be spelled with diacritic marks, ligatures, spaces, and punctuation marks. The first letter is normally spelled in upper-case, however, initial capitalization and usage of accessory terms can be inconsistent (e.g. de Wilde/De Wilde, d'Orbigny/D'Orbigny, Saedeleer/De Saedeleer, etc.). Co-authors are separated by commas; the last co-author should be separated by "&". In Chinese and Korean names only the surname is generally cited.
Examples:
Pipadentalium Yoo, 1988 (Scaphopoda)
Sinentomon Yin, 1965 (Protura)
Belbolla huanghaiensis Huang & Zhang, 2005 (Nematoda)
Apart from these, there are no commonly accepted conventions. The author can either be spelled following a self-made standard (Linnaeus 1758, Linnaeus 1766), or as given in the original source which implies that names of persons are not always spelled consistently (Linnæus 1758, Linné 1766), or we are dealing with composed data sets without any consistent standard.
Inferred and anonymous authorships
In some publications, the author responsible for new names and nomenclatural acts is not stated directly in the original source, but can sometimes be inferred from reliable external evidence. Recommendation 51D of the Code states: "...if the authorship is known or inferred from external evidence, the name of the author, if cited, should be enclosed in square brackets to show the original anonymity".
Initials
If the same surname is common to more than one author, initials are sometimes given (for example "A. Agassiz" vs. "L. Agassiz", etc.), but there are no standards concerning this procedure, and not all animal groups/databases use this convention. Although initials are often regarded as useful to disambiguate different persons with the same surname, this does not work in all situations (for example "W. Smith", "C. Pfeiffer", "G. B. Sowerby" and other names occur more than once), and in the examples given in the Code and also the ICZN Official Lists and Indexes, initials are not used.
Implications for information retrieval
For a computer, O. F. Müller, O. Müller, and Müller are different strings, even the differences between O. F. Müller, O.F. Müller, and OF Müller can be problematic. Fauna Europaea is a typical example of a database where combined initials O.F. and O. F. are read as entirely different strings so those who try to search for all taxonomic names described by Otto Friedrich Müller have to know (1) that the submitted data by the various data providers contained several versions (O. F. Müller, O.F. Müller, Müller, and O. Müller), and (2) that in many databases, the search function will not find O.F. Müller if you search for O. F. Müller or Müller, not to mention alternative orthographies of this name such as Mueller or Muller.
Thus, the usage of (e.g.) genus-species-author-year, genus-author-year, family-author-year, etc. as "de facto" unique identifiers for biodiversity informatics purposes can present problems on account of variation in cited author surnames, presence/absence/variations in cited initials, and minor variants in the style of presentation, as well as variant cited authors (responsible person/s) and sometimes, cited dates for what may be in fact the same nomenclatural act in the same work. In addition, in a small number of cases, the same author may have created the same name more than once in the same year for different taxa, which can then only be distinguished by reference to the title, page, and sometimes line of the work in which each name appears.
In Australia, a program was created (TAXAMATCH) that provides a tool to indicate in a preliminary manner whether two variants of a taxon name should be accepted as identical or not according to the similarity of the cited author strings. The authority matching function of TAXAMATCH assigns a moderate-to-high similarity to author strings with minor orthographic and/or date differences, such as "Medvedev & Chernov, 1969" vs. "Medvedev & Cernov, 1969", or "Schaufuss, 1877" vs. "L. W. Schaufuss, 1877", or even "Oshmarin, 1952" vs. "Oschmarin in Skrjabin & Evranova, 1952", and a low similarity to author citations which are very different (for example "Hyalesthes Amyot, 1847" vs. "Hyalesthes Signoret, 1865") and are more likely to represent different publication instances, and therefore possibly also different taxa. The program also understands standardized abbreviations as used in botany and sometimes in zoology as well; for example, "Rchb." for Reichenbach, but may still fail for non-standard abbreviations (such as "H. & A. Ad." for H. & A. Adams, where the normal citation would in fact be "Adams & Adams"). Non-standard abbreviations must then be picked up by subsequent manual inspection after the use of an algorithmic approach to pre-sort the names to be matched into groups of either more or less similar names and cited authorities. However, author names that are spelled very similarly but in fact represent different persons, and who independently authored identical taxon names, will not be adequately separated by this program; examples include "O. F. Müller 1776" vs. "P. L. S. Müller 1776", "G. B. Sowerby I 1850" vs. "G. B. Sowerby III 1875" and "L. Pfeiffer 1856" vs. "K. L. Pfeiffer 1956", so additional manual inspection is also required, especially for known problem cases such as those given above.
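A first step toward matching such variants is normalizing the author string before comparison. The Python sketch below (a hypothetical helper illustrating the general idea, not TAXAMATCH itself) collapses the diacritic, punctuation, and spacing differences discussed above; it deliberately does not attempt the harder problems of distinguishing different persons with the same surname or equating transliterations such as "Müller" and "Mueller".

```python
import re
import unicodedata

def normalize_author(author):
    """Reduce a cited author string to a crude comparison key.

    Drops diacritics, periods, extra spaces, and case, so spacing and
    accent variants such as "O. F. Müller" and "O.F. Muller" collapse
    to one key.  This is a lossy first pass only: it cannot separate
    different people sharing a surname, nor match "Müller" to the
    transliteration "Mueller".
    """
    # Decompose accented letters and discard the combining marks.
    decomposed = unicodedata.normalize("NFD", author)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    # Treat the Latin connective "et" and "&" as the same separator.
    stripped = re.sub(r"\bet\b", "&", stripped)
    # Drop periods, collapse whitespace, lowercase.
    stripped = stripped.replace(".", " ")
    return " ".join(stripped.lower().split())
```

Under this scheme "O. F. Müller" and "O.F. Muller" both map to the key "o f muller", after which exact comparison (or a fuzzy match on the keys) can proceed.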
A further cause of errors that would not be detected by such a program include authors with multi-part surnames which are sometimes inconsistently applied in the literature, and works where the accepted attribution has changed over time. For example, genera published in the anonymously authored work "Museum Boltenianum sive catalogus cimeliorum..." published in 1798 were for a long time ascribed to Bolten, but are now considered to have been authored by Röding according to a ruling by the ICZN in 1956. Analogous problems are encountered attempting to cross-link medical records by patient name; for relevant discussion see record linkage.
Author of a nomen nudum
A new name mentioned without description or indication or figure is a nomen nudum. A nomen nudum has no authorship nor date and is not an available name. If it is desired or necessary to cite the author of such an unavailable name, the nomenclatural status of the name should be made evident.
Sensu names
A "sensu" name (sensu = "in the sense of", should not be written in italics) is a previously established name that was used by an author in an incorrect sense (for example for a species that was misidentified). Technically this is only a subsequent use of a name, not a new name, and it has no own authorship. Taxonomists often created unwritten rules for authorships of sensu names to record the first and original source for a misidentification of an animal, but this is not in accordance with the Code.
Example:
For a West Alpine snail Pupa ferrari Porro, 1838, Hartmann (1841) used the genus Sphyradium Charpentier, 1837, which Charpentier had established for some similar species. Westerlund argued in 1887 that this species should be placed in another genus, and proposed the name Coryna for Pupa ferrari and some other species. Pilsbry argued in 1922, Westerlund had established Coryna as a new replacement name for Sphyradium, sensu Hartmann, 1841 (therefore "sensu" should not be written in italics, the term Sphyradium sensu Hartmann, 1841 would be misunderstood as a species name). But since a sensu name is not an available name with its own author and year, Pilsbry's argument is not consistent with the ICZN Code's rules.
See also
Author citation (botany)
Glossary of scientific naming
List of authors of names published under the ICZN
Wikispecies: Taxon authorities
References
External links
Zoological nomenclature | Author citation (zoology) | [
"Biology"
] | 3,508 | [
"Zoological nomenclature",
"Biological nomenclature"
] |
3,107,924 | https://en.wikipedia.org/wiki/Dog-tooth | In architecture, a dog-tooth or dogtooth pattern is an ornament found in the mouldings of medieval work of the commencement of the 12th century, which is thought to have been introduced by the Crusaders. The earliest example is found in the hall at Rabbath Ammon in Moab in Jordan (c. 614) built by the Sassanians, where it decorates the arch moulding of the blind arcades and the string courses. The pattern consists of four flower petals forming a square or diamond shape with central elements. The petals have the form of the pointed conical canine tooth, eye tooth or cuspid.
In the apse of a church at Murano, near Venice, it is similarly employed. In the 12th and 13th centuries it was further elaborated with carving, losing therefore its primitive form, but constituting a most beautiful decorative feature. In Elgin Cathedral in Scotland, the dogtooth ornament in the archivolt becomes a four-lobed leaf, and in Stone church in Kent, a much more enriched type of flower. The term has been supposed to originate in a resemblance to the dog tooth violet, but the original idea of a projecting tooth is a sufficient explanation.
See also
Ball flower
Dentil, also means "tooth", but under cornices
References
Ornaments (architecture)
Visual motifs | Dog-tooth | [
"Mathematics"
] | 273 | [
"Symbols",
"Visual motifs"
] |
3,107,984 | https://en.wikipedia.org/wiki/Author%20citation%20%28botany%29 | In botanical nomenclature, author citation is the way of citing the person or group of people who validly published a botanical name, i.e. who first published the name while fulfilling the formal requirements as specified by the International Code of Nomenclature for algae, fungi, and plants (ICN). In cases where a species is no longer in its original generic placement (i.e. a new combination of genus and specific epithet), both the authority for the original genus placement and that for the new combination are given (the former in parentheses).
In botany, it is customary (though not obligatory) to abbreviate author names according to a recognised list of standard abbreviations.
There are differences between the botanical code and the normal practice in zoology. In zoology, the publication year is given following the author names and the authorship of a new combination is normally omitted. A small number of more specialized practices also vary between the recommendations of the botanical and zoological codes.
Introduction
In biological works, particularly those dealing with taxonomy and nomenclature but also in ecological surveys, it has long been the custom that full citations to the place where a scientific name was published are omitted, but a short-hand is used to cite the author of the name, at least the first time this is mentioned. The author name is frequently not sufficient information, but can help to resolve some difficulties. Problems include:
The name of a taxon being referred to is ambiguous, as in the case of homonyms such as Darlingtonia Torr., a genus of carnivorous plants, vs. Darlingtonia DC., a genus of leguminous plants.
The publication of the name may be in a little-known journal or book. The author name may sometimes help to resolve this.
The name may not have been validly published, but the supposed author name may be helpful to locate the publication or manuscript in which it was listed.
Rules and recommendations for author citations in botany are covered by Articles 46–50 of the International Code of Nomenclature (ICN). As stated in Article 46 of the botanical Code, in botany it is normal to cite only the author of the taxon name as indicated in the published work, even though this may differ from the stated authorship of the publication itself.
Basic citation
The simplest form of author citation in botany applies when the name is cited in its original rank and its original genus placement (for binomial names and below), where the original author (or authors) are the only name/s cited, and no parentheses are included.
The Latin term "et" or the ampersand symbol "&" can be used when two authors jointly publish a name (Recommendation 46C.1).
In many cases the author citation will consist of two parts, the first in parentheses, e.g.:
Helianthemum coridifolium (Vill.) Cout.
This form of author citation indicates that the epithet was originally published in another genus (in this case as Cistus coridifolius) by the first author, Dominique Villars (indicated by the enclosing parentheses), but moved to the present genus Helianthemum by the second (revising) author (António Xavier Pereira Coutinho). Alternatively, the revising author changed the rank of the taxon, for example raising it from subspecies to species (or vice versa), from subgenus to section, etc. (Article 49). (Again, the latter is in contrast to the situation in zoology, where no authorship change is recognized within family-group, genus-group, and species-group names, thus a change from subspecies to species, or subgenus to genus, is not associated with any change in cited authorship.)
Abbreviation
When citing a botanical name including its author, the author's name is often abbreviated. To encourage consistency, the International Code of Nomenclature for algae, fungi, and plants (ICN) recommends (Recommendation 46A, Note 1) the use of Brummitt & Powell's Authors of Plant Names (1992), where each author of a botanical name has been assigned a unique abbreviation. These standard abbreviations can be found at the International Plant Names Index.
For example, in:
Rubus L.
the abbreviation "L." refers to the famous botanist Carl Linnaeus who described this genus on p. 492 of his Species Plantarum in 1753.
Rubus ursinus Cham. & Schldl.
the abbreviation "Cham." refers to the botanist Adelbert von Chamisso and "Schldl." to the botanist Diederich Franz Leonhard von Schlechtendal; these authors jointly described this species (and placed it in the genus Rubus) in 1827.
Usage of the term "ex"
When "ex" is a component of the author citation, it denotes the fact that an initial description did not satisfy the rules for valid publication, but that the same name was subsequently validly published by a second author or authors (or by the same author in a subsequent publication) (Article 46.4). However, if the subsequent author makes clear that the description was due to the earlier author (and that the earlier author accepted the name), then no "ex" is used, and the earlier author is listed alone. For example:
Andropogon aromaticus Sieber ex Schult.
indicates that Josef Schultes validly published this name (in 1824 in this instance), but his description attributed the name to Franz Sieber (in botany, the author of the earlier name precedes the later, valid one; in zoology, this sequence, where present, is reversed).
Examples
The following forms of citation are all equally correct:
Rubus ursinus Cham. & Schldl.
Rubus ursinus Cham. et Schldl.
Rubus ursinus von Chamisso & von Schlechtendal
Rubus ursinus von Chamisso et von Schlechtendal
As indicated above, either the original or the revising author may involve multiple words, as per the following examples from the same genus:
Helianthemum sect. Atlanthemum (Raynaud) G.López, Ortega Oliv. & Romero García
Helianthemum apenninum Mill. subsp. rothmaleri (Villar ex Rothm.) M.Mayor & Fern.Benito
Helianthemum conquense (Borja & Rivas Goday ex G.López) Mateo & V.J.Arán Resó
Usage of the ancillary term "in"
The ancillary term "in" is sometimes employed to indicate that the authorship of the published work is different from that of the name itself, for example:
Verrucaria aethiobola Wahlenb. in Acharius, Methodus, Suppl.: 17. 1803
Article 46.2 Note 1 of the Botanical Code indicates that in such cases, the portion commencing "in" is in fact a bibliographic citation and should not be used without the place of publication being included, thus the preferred form of the name+author alone in this example would be Verrucaria aethiobola Wahlenb., not Verrucaria aethiobola Wahlenb. in Acharius. (This is in contrast to the situation in zoology, where either form is permissible, and in addition a date would normally be appended.)
Authorship of subsidiary ranks
According to the botanical Code it is only necessary to cite the author for the lowest rank of the taxon in question, i.e. for the example subspecies given above (Helianthemum apenninum subsp. rothmaleri) it is not necessary (or even recommended) to cite the authority of the species ("Mill.") as well as that of the subspecies, though this is found in some sources. The only exception to this rule is where the nominate variety or subspecies of a species is cited, which automatically inherits the same authorship as its parent taxon (Article 26.1), thus:
Rosa gallica L. var. gallica, not "Rosa gallica var. gallica L."
Emending authors
As described in Article 47 of the botanical code, on occasion either the diagnostic characters or the circumscription of a taxon may be altered ("emended") sufficiently that the attribution of the name to the original taxonomic concept as named is insufficient. The original authorship attribution is not altered in these cases, but a taxonomic statement can be appended to the original authorship using the abbreviation "emend." (for emendavit), as per these examples given in the Code:
Phyllanthus L. emend. Müll. Arg.
Globularia cordifolia L. excl. var. (emend. Lam.).
(In the second example, "excl. var.", abbr. for exclusis varietatibus, indicates that this taxonomic concept excludes varieties which other workers have subsequently included.)
Other indications
Other indications which may be encountered appended to scientific name authorship include indications of nomenclatural or taxonomic status (e.g. "nom. illeg.", "sensu Smith", etc.), prior taxonomic status for taxa transferred between hybrid and non-hybrid status ("(pro sp.)" and "(pro hybr.)", see Article 50 of the botanical Code), and more. Technically these do not form part of the author citation but represent supplementary text, however they are sometimes included in "authority" fields in less well constructed taxonomic databases. Some specific examples given in Recommendations 50A–F of the botanical Code include:
Carex bebbii Olney, nomen nudum (alternatively: nom. nud.)
for a taxon name published without an acceptable description or diagnosis
Lindera Thunb., Nov. Gen. Pl.: 64. 1783, non Adans. 1763
for a homonym—indicating in this instance that Carl Peter Thunberg's "Lindera" is not the same taxon as that named previously by Michel Adanson, the correspondence of the two names being coincidental
Bartlingia Brongn. in Ann. Sci. Nat. (Paris) 10: 373. 1827, non Rchb. 1824 nec F.Muell. 1882
as above, but two prior (and quite possibly unrelated) homonyms noted, the first by Ludwig Reichenbach, the second by Ferdinand von Mueller
Betula alba L. 1753, nom. rej.
for a taxon name rejected (normally in favour of a later usage) and placed on the list of rejected names forming an appendix to the botanical Code (the alternative name conserved over the rejected name would be cited as "nom. cons.")
Ficus exasperata auct. non Vahl
this is the preferred syntax for a name that has been misapplied by a subsequent author or authors ("auct." or "auctt.") such that it actually represents a different taxon from the one to which Vahl's name correctly applies
Spathiphyllum solomonense Nicolson in Am. J. Bot. 54: 496. 1967, "solomonensis"
indicating that the epithet as originally published was spelled solomonensis, but the spelling here is in an altered form, presumably for Code compliance or some other legitimate reason.
See also
Specific to botany
Botanical name
International Code of Nomenclature for Cultivated Plants
Correct name (botany)
Hybrid name (botany)
List of botanists by author abbreviation
More general
Author citation (zoology)
Biological classification
Binomial nomenclature
Nomenclature codes
Glossary of scientific naming
References
External links
IPNI Author Query page
Botanical nomenclature | Author citation (botany) | [
"Biology"
] | 2,413 | [
"Botanical terminology",
"Botanical nomenclature",
"Biological nomenclature"
] |
3,108,036 | https://en.wikipedia.org/wiki/Network%20Access%20Identifier | In computer networking, the Network Access Identifier (NAI) is a standard way of identifying users who request access to a network. The standard syntax is "user@realm". Sample NAIs include (from RFC 4282):
bob
joe@example.com
fred@foo-9.example.com
fred.smith@example.com
fred_smith@example.com
fred$@example.com
fred=?#$&*+-/^smith@example.com
eng.example.net!nancy@example.net
eng%nancy@example.net
@privatecorp.example.net
\(user\)@example.net
alice@xn--tmonesimerkki-bfbb.example.net
Network Access Identifiers were originally defined in RFC 2486, which was superseded by RFC 4282, which has been superseded by RFC 7542. The latter RFC is the current standard for the NAI. NAIs are commonly found as user identifiers in the RADIUS and Diameter network access protocols and the EAP authentication protocol.
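The "user@realm" split described above can be illustrated with a short, non-validating sketch: per RFC 7542 the realm, when present, follows the final "@". The helper name is illustrative, and the sketch deliberately skips the character-class and UTF-8 validation rules the RFC defines.

```python
def split_nai(nai):
    """Split a Network Access Identifier into (user, realm).

    Simplified sketch: the realm is everything after the last '@';
    an NAI with no '@' has no realm, and a leading '@' means an
    empty user part. No RFC 7542 character validation is performed.
    """
    user, sep, realm = nai.rpartition("@")
    if not sep:            # no '@' at all: the whole string is the username
        return nai, None
    return user, realm

# Spot-checks against sample NAIs from the RFCs:
assert split_nai("bob") == ("bob", None)
assert split_nai("joe@example.com") == ("joe", "example.com")
assert split_nai("eng.example.net!nancy@example.net") == ("eng.example.net!nancy", "example.net")
assert split_nai("@privatecorp.example.net") == ("", "privatecorp.example.net")
```

The last two cases show why splitting on the final "@" matters: routing decorations and realm-only NAIs both survive the split.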
The Network Access Identifier (NAI) is the user identity submitted by the client during network access authentication.
It is used mainly for two purposes:
The NAI is used when roaming, to identify the user.
To assist in the routing of the authentication request to the user's authentication server.
See also
Diameter
EAP
RADIUS
Request for Comments
External links
Internet Standards | Network Access Identifier | [
"Technology"
] | 298 | [
"Computing stubs",
"Computer network stubs"
] |
3,108,062 | https://en.wikipedia.org/wiki/Bioelectromagnetics | Bioelectromagnetics, also known as bioelectromagnetism, is the study of the interaction between electromagnetic fields and biological entities. Areas of study include electromagnetic fields produced by living cells, tissues or organisms, the effects of man-made sources of electromagnetic fields like mobile phones, and the application of electromagnetic radiation toward therapies for the treatment of various conditions.
Biological phenomena
Bioelectromagnetism is studied primarily through the techniques of electrophysiology. In the late eighteenth century, the Italian physician and physicist Luigi Galvani first recorded the phenomenon while dissecting a frog at a table where he had been conducting experiments with static electricity. Galvani coined the term animal electricity to describe the phenomenon, while contemporaries labeled it galvanism. Galvani and contemporaries regarded muscle activation as resulting from an electrical fluid or substance in the nerves. Short-lived electrical events called action potentials occur in several types of animal cells known as excitable cells, a category that includes neurons, muscle cells, and endocrine cells, as well as in some plant cells. These action potentials are used to facilitate inter-cellular communication and activate intracellular processes. The physiological phenomena of action potentials are possible because voltage-gated ion channels allow the resting potential caused by the electrochemical gradient on either side of a cell membrane to resolve.
Several animals are suspected to have the ability to sense electromagnetic fields; for example, several aquatic animals have structures potentially capable of sensing changes in voltage caused by a changing magnetic field, while migratory birds are thought to use magnetoreception in navigation.
Bioeffects of electromagnetic radiation
Most of the molecules in the human body interact weakly with electromagnetic fields in the radio frequency or extremely low frequency bands. One such interaction is absorption of energy from the fields, which can cause tissue to heat up; more intense fields will produce greater heating. This can lead to biological effects ranging from muscle relaxation (as produced by a diathermy device) to burns. Many nations and regulatory bodies like the International Commission on Non-Ionizing Radiation Protection have established safety guidelines to limit EMF exposure to a non-thermal level. This can be defined either as heating only to the point where the excess heat can be dissipated, or as a fixed increase in temperature not detectable with current instruments, such as 0.1 °C. However, biological effects have been shown to be present for these non-thermal exposures; various mechanisms have been proposed to explain them, and there may be several mechanisms underlying the differing phenomena observed.
Many behavioral effects at different intensities have been reported from exposure to magnetic fields, particularly with pulsed magnetic fields. The specific pulse form used appears to be an important factor in the behavioral effect seen; for example, a pulsed magnetic field originally designed for spectroscopic MRI, referred to as Low Field Magnetic Stimulation, was found to temporarily improve patient-reported mood in bipolar patients, while another MRI pulse had no effect. A whole-body exposure to a pulsed magnetic field was found to alter standing balance and pain perception in other studies.
A strong changing magnetic field can induce electrical currents in conductive tissue such as the brain. Since the magnetic field penetrates tissue, it can be generated outside of the head to induce currents within, causing transcranial magnetic stimulation (TMS). These currents depolarize neurons in a selected part of the brain, leading to changes in the patterns of neural activity. In repeated pulse TMS therapy or rTMS, the presence of incompatible EEG electrodes can result in electrode heating and, in severe cases, skin burns. A number of scientists and clinicians are attempting to use TMS to replace electroconvulsive therapy (ECT) to treat disorders such as severe depression and hallucinations. Instead of one strong electric shock through the head as in ECT, a large number of relatively weak pulses are delivered in TMS therapy, typically at the rate of about 10 pulses per second. If very strong pulses at a rapid rate are delivered to the brain, the induced currents can cause convulsions much like in the original electroconvulsive therapy. Sometimes, this is done deliberately in order to treat depression, such as in ECT.
Effects of electromagnetic radiation on human health
While health effects from extremely low frequency (ELF) electric and magnetic fields (0 to 300 Hz) generated by power lines, and radio/microwave frequencies (RF) (10 MHz - 300 GHz) emitted by radio antennas and wireless networks have been well studied, the intermediate range (300 Hz to 10 MHz) has been studied far less. Direct effects of low power radiofrequency electromagnetism on human health have been difficult to prove, and documented life-threatening effects from radiofrequency electromagnetic fields are limited to high power sources capable of causing significant thermal effects and medical devices such as pacemakers and other electronic implants. However, many studies have been conducted with electromagnetic fields to investigate their effects on cell metabolism, apoptosis, and tumor growth.
Electromagnetic radiation in the intermediate frequency range has found a place in modern medical practice for the treatment of bone healing and for nerve stimulation and regeneration. It is also approved as a cancer therapy in the form of tumor treating fields, using alternating electric fields in the frequency range of 100–300 kHz. However, the efficacy of this method remains contentious among medical experts. Since some of these methods involve magnetic fields that induce electric currents in biological tissues and others involve only electric fields, they are, strictly speaking, electrotherapies, although their modes of application with modern electronic equipment have placed them in the category of bioelectromagnetic interactions.
See also
Bioelectrogenesis
Biomagnetism
Bioelectricity
Bioelectrochemistry
Bioelectrodynamics
Biophotonics
Biophysics
Electric fish
Electrical brain stimulation
Electroencephalography
Electromagnetic radiation and health
Electromyography
Electrotaxis
Kirlian photography
Magnetobiology
Magnetoception
Magnetoelectrochemistry
Mobile phone radiation and health
Radiobiology
Specific absorption rate
Transcutaneous electrical nerve stimulation
Notes
References
Organizations
The Bioelectromagnetics Society (BEMS)
European BioElectromagnetics Association (EBEA)
Society for Physical Regulation in Biology and Medicine (SPRBM) (formerly the Bioelectrical Repair and Growth Society, BRAGS)
International Society for Bioelectromagnetism (ISBEM)
The Bioelectromagnetics Lab at University College Cork, Ireland
Institute of Bioelectromagnetism
Vanderbilt University, Living State Physics Group, archived page
Ragnar Granit Institute.
Institute of Photonics and Electronics AS CR, Department of Bioelectrodynamics.
Books
Becker, Robert O.; Andrew A. Marino, Electromagnetism and Life, State University of New York Press, Albany, 1982. .
Becker, Robert O.; The Body Electric: Electromagnetism and the Foundation of Life, William Morrow & Co, 1985. .
Becker, Robert O.; Cross Currents: The Promise of Electromedicine, the Perils of Electropollution, Tarcher, 1989. .
Binhi, V.N., Magnetobiology: Underlying Physical Problems. San Diego: Academic Press, 2002. .
Brodeur Paul; Currents of Death, Simon & Schuster, 2000. .
Carpenter, David O.; Sinerik Ayrapetyan, Biological Effects of Electric and Magnetic Fields, Volume 1 : Sources and Mechanisms, Academic Press, 1994. .
Carpenter, David O.; Sinerik Ayrapetyan, Biological Effects of Electric and Magnetic Fields : Beneficial and Harmful Effects (Vol 2), Academic Press, 1994. .
Chiabrera A. (Editor), Interactions Between Electromagnetic Fields and Cells, Springer, 1985. .
Habash, Riadh W. Y.; Electromagnetic Fields and Radiation: Human Bioeffects and Safety, Marcel Dekker, 2001. .
Horton William F.; Saul Goldberg, Power Frequency Magnetic Fields and Public Health, CRC Press, 1995. .
Mae-Wan, Ho; et al., Bioelectrodynamics and Biocommunication, World Scientific, 1994. .
Malmivuo, Jaakko; Robert Plonsey, Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields, Oxford University Press, 1995. .
O'Connor, Mary E. (Editor), et al., Emerging Electromagnetic Medicine, Springer, 1990. .
Journals
Bioelectromagnetics
Bioelectrochemistry
European Biophysics Journal
International Journal of Bioelectromagnetism, ISBEM, 1999–present, ()
BioMagnetic Research and Technology archive (no longer publishing)
Biophysics, English version of the Russian "Biofizika" ()
Radiatsionnaya Bioliogiya Radioecologia ("Radiation Biology and Radioecology", in Russian) ()
External links
A brief history of Bioelectromagnetism, by Jaakko and Plonsey.
Direct and Inverse Bioelectric Field Problems
Human body meshes for MATLAB, Ansoft/ANSYS HFSS, Octave (surface meshes from real subjects, meshes for Visible Human Project)
Physiology
Radiobiology
Electrophysiology
"Chemistry",
"Biology"
] | 1,914 | [
"Radiobiology",
"Radioactivity",
"Physiology"
] |
3,108,072 | https://en.wikipedia.org/wiki/Galleting | Galleting, sometimes known as garreting or garneting, is an architectural technique in which spalls (small pieces of stone) are pushed into wet mortar joints during the construction of a masonry building. The term comes from the French word galet, which means "pebble." In general, the word "galleting" refers to the practice while the word "gallet" refers to the spall. Galleting was mostly used in England, where it was common in South East England and the county of Norfolk.
Description
Galleting is mainly used in stone masonry buildings constructed out of sandstone or flint. The technique varies depending on which of these materials is used. In sandstone buildings, the spalls are often a different type of sandstone than the one used in the wall, though sometimes they are pieces of the same stone. For example, carstone, also known as ironstone, is a type of sandstone that is commonly used for galleting. In sandstone buildings, the spalls are usually shaped into small cubes about half an inch in diameter and are flush with the stone. In flint buildings, the edges of thin slivers of flint are commonly pushed into the mortar, so that the surface of the wall is uneven and the edges of the flint spalls jut out from the wall. In some cases, these techniques are combined such that flint walls are galleted with sandstone spalls or vice versa; however, this is uncommon. Although it is also uncommon, galleting has been used in brick masonry construction, where sandstone spalls are generally used over flint ones. More eclectic materials used as gallets include brick, tile, beach pebbles, glass, and oyster shells. In higher status buildings, galleting was superseded by square knapping the flints to produce flat, squared stones that produced a surface with little exposed mortar.
It is unclear whether galleting performs a practical, structural function or is an aesthetic application. It is possible that galleting is used when the local stone is not an easily worked freestone, which means that the stone is more irregular and therefore requires thick mortar joints. In this case, gallets would serve as wedges to provide structural support to the stone and would shield the mortar from weather. It is also possible that galleting does not reinforce the mortar and was used purely for aesthetic reasons. Scholarship has also suggested that galleting was neither a structural nor an aesthetic practice, but rather a superstitious one, an attempt to protect a building from witches and other evil influences. However, Historic Scotland Technical Advice Note 1 (1995), regarding the use of lime mortars, clearly states "...numerous small pinning stones which contributed to the overall stability of the masonry, reduced the quantity of expensive lime required and minimised the effects of drying shrinkage in the mortar".
Location
In England, galleting can be found almost exclusively in the South East between the North and South Downs, where sandstone is common, and in the county of Norfolk, where flint is common. Given that these locations are not contiguous, much has been debated about the origin and spread of the practice, with some attributing its geographical prevalence to the particularities of the stonemason trade.
Most scholarship focuses on the use of galleting in England. However, there is evidence that it was used in rural Pennsylvania and Maryland as well as in Philadelphia, Vienna, Austria, the Azores, Paris, and Barcelona.
Period of use
There is some debate about when galleting was most commonly practiced. Some sources associate the technique with late medieval building construction, while others suggest that galleting was used mostly in the 17th and 18th centuries before declining in popularity over the course of the 19th century. Historical records indicate that parts of Windsor Castle (n.d.), Eton College (c. 1441), and the Tower of London (c. 1514) were galleted with flint or oyster shells. This suggests that galleting may have been first used in more prestigious buildings and was later adopted in less prestigious buildings once timber framing was supplanted by masonry construction.
Examples
Sevenoaks School
Knole House
Ightham Mote
Tigbourne Court
Norwich Guildhall
Strangers' Hall in Norwich
The village of Heacham in Norfolk boasts examples of a wide variety of types of galleting.
St James' Episcopal Church, Philadelphia, PA (U.S.)
Hancock's Resolution, Anne Arundel County, MD (U.S.)
The greenhouse at Bartram's Garden, Philadelphia, PA (U.S.)
Bradford Friends Meeting House, West Bradford Township, Chester County, PA (U.S.)
Sully Stone Dairy, Sully Historic Site, Chantilly, Virginia (U.S.)
References
Architectural elements
Masonry | Galleting | [
"Technology",
"Engineering"
] | 984 | [
"Building engineering",
"Construction",
"Architectural elements",
"Components",
"Masonry",
"Architecture"
] |
3,108,096 | https://en.wikipedia.org/wiki/Modillion | A modillion is an ornate bracket, more horizontal in shape and less imposing than a corbel. They are often seen underneath a cornice which helps to support them. Modillions are more elaborate than dentils (literally translated as small teeth). All three are selectively used as adjectival historic past participles (corbelled, modillioned, dentillated) as to what co-supports or simply adorns any high structure of a building, such as a terrace of a roof (a flat area of a roof), parapet, pediment/entablature, balcony, cornice band or roof cornice. Modillions occur classically under a Corinthian or a Composite cornice but may support any type of eaves cornice. They may be carved or plain.
See also
Glossary of architecture
Gallery
References
Architectural elements | Modillion | [
"Technology",
"Engineering"
] | 178 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
3,108,161 | https://en.wikipedia.org/wiki/Parallelizable%20manifold | In mathematics, a differentiable manifold of dimension n is called parallelizable if there exist smooth vector fields
on the manifold, such that at every point of the tangent vectors
provide a basis of the tangent space at . Equivalently, the tangent bundle is a trivial bundle, so that the associated principal bundle of linear frames has a global section on
A particular choice of such a basis of vector fields on is called a parallelization (or an absolute parallelism) of .
Examples
An example with n = 1 is the circle: we can take V1 to be the unit tangent vector field, say pointing in the anti-clockwise direction. The torus of dimension n is also parallelizable, as can be seen by expressing it as a cartesian product of circles. For example, take n = 2 and construct a torus from a square of graph paper with opposite edges glued together, to get an idea of the two tangent directions at each point. More generally, every Lie group G is parallelizable, since a basis for the tangent space at the identity element can be moved around by the action of the translation group of G on G (every translation is a diffeomorphism and therefore these translations induce linear isomorphisms between tangent spaces of points in G).
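The circle case can be spot-checked numerically: the field that assigns to the point (cos t, sin t) the vector (−sin t, cos t) is everywhere unit-length and tangent to the circle, hence a basis of each 1-dimensional tangent space. A small sketch using NumPy (the sampling grid is arbitrary):

```python
import numpy as np

# Sample points p on S^1 and the anti-clockwise field V1(p) = (-sin t, cos t).
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
p = np.stack([np.cos(t), np.sin(t)], axis=1)   # points on the circle
v = np.stack([-np.sin(t), np.cos(t)], axis=1)  # the vector field V1

# Tangency: v is orthogonal to the position vector p at every sample point.
assert np.allclose(np.sum(p * v, axis=1), 0.0)
# Non-vanishing: v has unit length everywhere, so it spans each tangent line.
assert np.allclose(np.linalg.norm(v, axis=1), 1.0)
```

Since the field never vanishes, it trivializes the tangent bundle of S^1, which is exactly the parallelizability claim made above.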
A classical problem was to determine which of the spheres Sn are parallelizable. The zero-dimensional case S0 is trivially parallelizable. The case S1 is the circle, which is parallelizable as has already been explained. The hairy ball theorem shows that S2 is not parallelizable. However S3 is parallelizable, since it is the Lie group SU(2). The only other parallelizable sphere is S7; this was proved in 1958, by Friedrich Hirzebruch, Michel Kervaire, and by Raoul Bott and John Milnor, in independent work. The parallelizable spheres correspond precisely to elements of unit norm in the normed division algebras of the real numbers, complex numbers, quaternions, and octonions, which allows one to construct a parallelism for each. Proving that other spheres are not parallelizable is more difficult, and requires algebraic topology.
The product of parallelizable manifolds is parallelizable.
Every orientable closed three-dimensional manifold is parallelizable.
Remarks
Any parallelizable manifold is orientable.
The term framed manifold (occasionally rigged manifold) is most usually applied to an embedded manifold with a given trivialisation of the normal bundle, and also for an abstract (that is, non-embedded) manifold with a given stable trivialisation of the tangent bundle.
A related notion is the concept of a π-manifold. A smooth manifold is called a π-manifold if, when embedded in a high dimensional euclidean space, its normal bundle is trivial. In particular, every parallelizable manifold is a π-manifold.
See also
Chart (topology)
Differentiable manifold
Frame bundle
Kervaire invariant
Orthonormal frame bundle
Principal bundle
Connection (mathematics)
G-structure
Notes
References
Differential topology
Fiber bundles
Manifolds
Vector bundles | Parallelizable manifold | [
"Mathematics"
] | 625 | [
"Space (mathematics)",
"Topological spaces",
"Topology",
"Differential topology",
"Manifolds"
] |
3,108,498 | https://en.wikipedia.org/wiki/Ingress%20cancellation | Ingress cancellation is a method for removing narrowband noise from an electromagnetic signal using a digital filter. This type of filter is used on hybrid fiber-coaxial broadband networks.
If a carrier appears in the middle of the upstream data signal, ingress cancellation can remove the interfering carrier without causing packet loss.
Ingress cancellation also removes one or more carriers that are higher in amplitude than the data signal. Ingress cancellation eventually will break if the in-channel ingress gets too high.
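Real ingress cancellers are adaptive digital filters in the upstream receiver; as a rough frequency-domain illustration of the idea only (all signal parameters below are invented for the sketch, and the interfering bins are assumed known rather than estimated adaptively), one can excise a narrowband carrier that is stronger than the data signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)

data = rng.standard_normal(n)                      # stand-in for the wideband upstream data
ingress = 5.0 * np.sin(2 * np.pi * 300 * t / n)    # narrowband interferer, higher amplitude
received = data + ingress

# Zero out the interfering frequency bins and transform back.
spectrum = np.fft.rfft(received)
spectrum[298:303] = 0.0
cleaned = np.fft.irfft(spectrum, n)

# The carrier is removed while the data signal survives almost intact.
err_before = np.mean((received - data) ** 2)
err_after = np.mean((cleaned - data) ** 2)
assert err_after < 0.05 * err_before
```

The residual error after cancellation comes only from the sliver of data energy that shared the excised bins, which is why the technique avoids the packet loss an unfiltered carrier would cause.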
References
See also
Distortion
Electromagnetic interference
Ingress filtering
Noise reduction
Digital electronics | Ingress cancellation | [
"Engineering"
] | 113 | [
"Electronic engineering",
"Digital electronics"
] |
3,108,522 | https://en.wikipedia.org/wiki/Stunted%20projective%20space | In mathematics, a stunted projective space is a construction on a projective space of importance in homotopy theory, introduced by . Idea includes collapsing a part of conventional projective space to a point.
More concretely, in a real projective space, complex projective space or quaternionic projective space
where can be either , or . One can find (in many ways) copies of
where, . The corresponding stunted projective space is then
where, the notation implies that the has been identified to a point. This makes a topological space that is no longer a manifold. The importance of this construction was realised when it was shown that real stunted projective spaces arose as Spanier–Whitehead duals of spaces of Ioan James, so-called quasi-projective spaces, constructed from Stiefel manifolds. Their properties were therefore linked to the construction of frame fields on spheres.
In this way the question on vector fields on spheres was reduced to a question on stunted projective spaces:
For a given stunted projective space, is there a degree one mapping on the 'next cell up' (of the first dimension not collapsed in the stunting) that extends to the whole space?
Frank Adams showed that this could not happen, completing the proof.
In later developments spaces and stunted lens spaces have also been used.
References
Homotopy theory
Differential topology | Stunted projective space | [
"Mathematics"
] | 267 | [
"Topology",
"Differential topology"
] |
3,108,602 | https://en.wikipedia.org/wiki/Spanier%E2%80%93Whitehead%20duality | In mathematics, Spanier–Whitehead duality is a duality theory in homotopy theory, based on a geometrical idea that a topological space X may be considered as dual to its complement in the n-sphere, where n is large enough. Its origins lie in Alexander duality theory, in homology theory, concerning complements in manifolds. The theory is also referred to as S-duality, but this can now cause possible confusion with the S-duality of string theory. It is named for Edwin Spanier and J. H. C. Whitehead, who developed it in papers from 1955.
The basic point is that sphere complements determine the homology, but not the homotopy type, in general. What is determined, however, is the stable homotopy type, which was conceived as a first approximation to homotopy type. Thus Spanier–Whitehead duality fits into stable homotopy theory.
Statement
Let X be a compact neighborhood retract in R^n. Then X_+ and Σ^{-n}Σ′(R^n ∖ X) are dual objects in the category of pointed spectra with the smash product as a monoidal structure. Here X_+ is the union of X and a point, and Σ and Σ′ are the reduced and unreduced suspensions respectively.
Taking homology and cohomology with respect to an Eilenberg–MacLane spectrum recovers Alexander duality formally.
References
Homotopy theory
Duality theories | Spanier–Whitehead duality | [
"Mathematics"
] | 280 | [
"Mathematical structures",
"Category theory",
"Duality theories",
"Geometry"
] |
3,108,737 | https://en.wikipedia.org/wiki/Hermitian%20function | In mathematical analysis, a Hermitian function is a complex function with the property that its complex conjugate is equal to the original function with the variable changed in sign:
(where the indicates the complex conjugate) for all in the domain of . In physics, this property is referred to as PT symmetry.
This definition extends also to functions of two or more variables, e.g., in the case that f is a function of two variables it is Hermitian if
f*(x1, x2) = f(−x1, −x2)
for all pairs (x1, x2) in the domain of f.
From this definition it follows immediately that f is a Hermitian function if and only if
the real part of f is an even function,
the imaginary part of f is an odd function.
Motivation
Hermitian functions appear frequently in mathematics, physics, and signal processing. For example, the following two statements follow from basic properties of the Fourier transform:
The function f is real-valued if and only if the Fourier transform of f is Hermitian.
The function f is Hermitian if and only if the Fourier transform of f is real-valued.
Since the Fourier transform of a real signal is guaranteed to be Hermitian, it can be compressed using the Hermitian even/odd symmetry. This, for example, allows the discrete Fourier transform of a signal (which is in general complex) to be stored in the same space as the original real signal.
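These symmetry properties are easy to check numerically with NumPy's FFT; in the discrete setting, the DFT X of a real signal satisfies X[k] = conj(X[−k mod N]), and `rfft` exploits exactly this redundancy. A quick sketch (signal and length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(16)        # a real-valued signal, N = 16

X = np.fft.fft(x)
k = np.arange(16)

# Hermitian symmetry of the DFT of a real signal: X[k] == conj(X[-k mod N]).
assert np.allclose(X, np.conj(X[(-k) % 16]))

# Equivalently: Re(X) is even and Im(X) is odd (indices taken mod N).
assert np.allclose(X.real, X.real[(-k) % 16])
assert np.allclose(X.imag, -X.imag[(-k) % 16])

# This redundancy is what lets rfft keep only the non-negative frequencies.
assert np.allclose(np.fft.rfft(x), X[:9])
```

The final assertion shows the compression claim concretely: the N = 16 complex spectrum is fully determined by its first N/2 + 1 = 9 entries.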
If f is Hermitian, then f ⋆ g = f ∗ g,
where ⋆ is cross-correlation and ∗ is convolution.
If both f and g are Hermitian, then f ⋆ g = g ⋆ f.
See also
Types of functions
Calculus | Hermitian function | [
"Mathematics"
] | 307 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical analysis stubs",
"Calculus",
"Mathematical objects",
"Mathematical relations",
"Types of functions"
] |
3,108,880 | https://en.wikipedia.org/wiki/User%E2%80%93network%20interface | In telecommunications, a user–network interface (UNI) is a demarcation point between the responsibility of the service provider and the responsibility of the subscriber. This is distinct from a network-to-network interface (NNI) that defines a similar interface between provider networks.
Specifications defining a UNI
Metro Ethernet Forum
The Metro Ethernet Forum's Metro Ethernet Network UNI specification defines a bidirectional Ethernet reference point for Ethernet service delivery.
Optical Internetworking Forum
The Optical Internetworking Forum defines a UNI software interface for user systems to request a network connection from an ASON/GMPLS control plane.
See also
Network termination
External links
Metro Ethernet Forum
Network management | User–network interface | [
"Engineering"
] | 139 | [
"Computer networks engineering",
"Network management"
] |
3,108,888 | https://en.wikipedia.org/wiki/Alexander%20duality | In mathematics, Alexander duality refers to a duality theory initiated by a result of J. W. Alexander in 1915, and subsequently further developed, particularly by Pavel Alexandrov and Lev Pontryagin. It applies to the homology theory properties of the complement of a subspace X in Euclidean space, a sphere, or other manifold. It is generalized by Spanier–Whitehead duality.
General statement for spheres
Let X be a non-empty, compact, locally contractible proper subspace of the sphere S^n of dimension n. Let S^n ∖ X be the complement of X in S^n. Then, if H̃ stands for reduced homology or reduced cohomology, with coefficients in a given abelian group, there is an isomorphism
H̃_q(S^n ∖ X) ≅ H̃^{n−q−1}(X)
for all q ≥ 0. Note that we can drop local contractibility as part of the hypothesis if we use Čech cohomology, which is designed to deal with local pathologies.
Applications
This is useful for computing the cohomology of knot and link complements in S^3. Recall that a knot is an embedding K : S^1 → S^3 and a link is a disjoint union of knots, such as the Borromean rings. Then, if we write the link/knot as L, we have
H̃^q(S^3 ∖ L) ≅ H̃_{2−q}(L),
giving a method for computing the cohomology groups. Then, it is possible to differentiate between different links using the Massey products.
For example, for the Borromean rings L, the homology groups of the complement are H_0(S^3 ∖ L) ≅ Z, H_1(S^3 ∖ L) ≅ Z^3, and H_2(S^3 ∖ L) ≅ Z^2.
Combinatorial Alexander duality
Let X be an abstract simplicial complex on a vertex set V of size n.
The Alexander dual X* of X is defined as the simplicial complex on V whose faces are complements of non-faces of X. That is
X* = {σ ⊆ V : V ∖ σ ∉ X}.
Note that (X*)* = X.
Alexander duality implies the following combinatorial analog (for reduced homology and cohomology, with coefficients in any given abelian group):
H̃_i(X*) ≅ H̃^{n−i−3}(X) for all i.
Indeed, this can be deduced by letting Y be the (n−2)-skeleton of the full simplex on V (that is, Y is the family of all subsets of size at most n−1), so that the geometric realization of Y is a sphere of dimension n−2, and showing that the geometric realization of X* is homotopy equivalent to the complement of the realization of X inside it.
Björner and Tancer presented an elementary combinatorial proof and summarized a few generalizations.
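As a concrete illustration of the definition above, the following Python sketch (function and variable names are my own, not from the source) builds the Alexander dual of a small complex and checks the involution property:

```python
from itertools import combinations

def alexander_dual(faces, vertices):
    """Return the faces of the Alexander dual: the subsets of the vertex
    set whose complement is NOT a face of the input complex."""
    V = frozenset(vertices)
    X = {frozenset(f) for f in faces}
    dual = set()
    for r in range(len(V) + 1):
        for sigma in combinations(sorted(vertices), r):
            if V - frozenset(sigma) not in X:
                dual.add(frozenset(sigma))
    return dual

# Boundary of a triangle on {1,2,3}, plus an isolated vertex 4 (n = 4 vertices).
V = (1, 2, 3, 4)
X = {frozenset(s) for s in
     [(), (1,), (2,), (3,), (4,), (1, 2), (2, 3), (1, 3)]}

dual = alexander_dual(X, V)
# Taking the dual twice recovers the original complex.
assert alexander_dual(dual, V) == X
```

The double-dual check holds for any family of subsets, since σ ∈ (X*)* exactly when V ∖ σ ∉ X*, i.e. when σ ∈ X.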
Alexander duality for constructible sheaves
For smooth manifolds, Alexander duality is a formal consequence of Verdier duality for sheaves of abelian groups. More precisely, if we let M denote a smooth manifold and we let Z ⊂ M be a closed subspace (such as a subspace representing a cycle, or a submanifold) represented by the inclusion i : Z → M, and if k is a field, then if F is a sheaf of k-vector spaces we have the following isomorphism
,
where the cohomology group on the left is compactly supported cohomology. We can unpack this statement further to get a better understanding of what it means. First, if F is the constant sheaf and Z is a smooth submanifold, then we get
,
where the cohomology group on the right is local cohomology with support in Z. Through further reductions, it is possible to identify the homology of Z with the cohomology of its complement. This is useful in algebraic geometry for computing the cohomology groups of projective varieties, and is exploited for constructing a basis of the Hodge structure of hypersurfaces of degree d using the Jacobian ring.
Alexander's 1915 result
Referring to Alexander's original work, it is assumed that X is a simplicial complex.
Alexander had little of the modern apparatus, and his result was only for the Betti numbers, with coefficients taken modulo 2. What to expect comes from examples. For example the Clifford torus construction in the 3-sphere shows that the complement of a solid torus is another solid torus; which will be open if the other is closed, but this does not affect its homology. Each of the solid tori is from the homotopy point of view a circle. If we just write down the Betti numbers
1, 1, 0, 0
of the circle (up to degree 3, since we are in the 3-sphere), then reverse as
0, 0, 1, 1
and then shift one to the left to get
0, 1, 1, 0
there is a difficulty, since we are not getting what we started with. On the other hand the same procedure applied to the reduced Betti numbers, for which the initial Betti number is decremented by 1, starts with
0, 1, 0, 0
and gives
0, 0, 1, 0
whence
0, 1, 0, 0.
This does work out, predicting the complement's reduced Betti numbers.
The prototype here is the Jordan curve theorem, which topologically concerns the complement of a circle in the Riemann sphere. It also tells the same story. We have the honest Betti numbers
1, 1, 0
of the circle, and therefore
0, 1, 1
by flipping over and
1, 1, 0
by shifting to the left. This gives back something different from what the Jordan theorem states, which is that there are two components, each contractible (Schoenflies theorem, to be accurate about what is used here). That is, the correct answer in honest Betti numbers is
2, 0, 0.
Once more, it is the reduced Betti numbers that work out. With those, we begin with
0, 1, 0
to finish with
1, 0, 0.
From these two examples, therefore, Alexander's formulation can be inferred: reduced Betti numbers are related in complements by
b̃_q(S^n ∖ X) = b̃_{n−q−1}(X).
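The reverse-and-shift procedure worked through above amounts to the single index substitution q ↦ n − q − 1; a minimal sketch:

```python
def complement_reduced_betti(betti, n):
    """Reduced Betti numbers of the complement of X in the n-sphere,
    given the reduced Betti numbers [b_0, ..., b_n] of X."""
    return [betti[n - q - 1] if 0 <= n - q - 1 <= n else 0
            for q in range(n + 1)]

# Circle in the 3-sphere: the complement (a solid torus) is again a homotopy circle.
assert complement_reduced_betti([0, 1, 0, 0], 3) == [0, 1, 0, 0]

# Circle in the 2-sphere (Jordan curve theorem): two contractible components.
assert complement_reduced_betti([0, 1, 0], 2) == [1, 0, 0]
```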
References
Further reading
Algebraic topology
Duality theories | Alexander duality | [
"Mathematics"
] | 1,126 | [
"Mathematical structures",
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Duality theories",
"Geometry"
] |
3,108,937 | https://en.wikipedia.org/wiki/Landauer%27s%20principle | Landauer's principle is a physical principle pertaining to a lower theoretical limit of energy consumption of computation. It holds that an irreversible change in information stored in a computer, such as merging two computational paths, dissipates a minimum amount of heat to its surroundings. It is hypothesized that energy consumption below this lower bound would require the development of reversible computing.
The principle was first proposed by Rolf Landauer in 1961.
Statement
Landauer's principle states that the minimum energy needed to erase one bit of information is proportional to the temperature at which the system is operating. Specifically, the energy needed for this computational task is given by
E = k_B T ln 2,
where k_B is the Boltzmann constant and T is the absolute temperature in kelvins. At room temperature, the Landauer limit represents an energy of approximately 0.018 eV (2.9 × 10⁻²¹ J). Modern computers use about a billion times as much energy per operation.
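A quick back-of-the-envelope check of the bound (plain Python; constant values are the exact 2019 SI definitions):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant in J/K (exact in the 2019 SI)
EV = 1.602176634e-19  # joules per electronvolt

def landauer_limit(temperature_k):
    """Minimum heat (in joules) dissipated when erasing one bit at T kelvins."""
    return K_B * temperature_k * math.log(2)

energy_j = landauer_limit(300.0)  # ~2.9e-21 J at room temperature
energy_ev = energy_j / EV         # ~0.018 eV
```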
History
Rolf Landauer first proposed the principle in 1961 while working at IBM. He justified and stated important limits to an earlier conjecture by John von Neumann. This refinement is sometimes called the Landauer bound, or Landauer limit.
In 2008 and 2009, researchers showed that Landauer's principle can be derived from the second law of thermodynamics and the entropy change associated with information gain, developing the thermodynamics of quantum and classical feedback-controlled systems.
In 2011, the principle was generalized to show that while information erasure requires an increase in entropy, this increase could theoretically occur at no energy cost. Instead, the cost can be taken in another conserved quantity, such as angular momentum.
In a 2012 article published in Nature, a team of physicists from the École normale supérieure de Lyon, University of Augsburg and the University of Kaiserslautern described that for the first time they have measured the tiny amount of heat released when an individual bit of data is erased.
In 2014, physical experiments tested Landauer's principle and confirmed its predictions.
In 2016, researchers used a laser probe to measure the amount of energy dissipation that resulted when a nanomagnetic bit flipped from off to on. Flipping the bit at 300 K required an energy just 44% above the Landauer minimum.
A 2018 article published in Nature Physics features a Landauer erasure performed at cryogenic temperatures on an array of high-spin (S = 10) quantum molecular magnets. The array is made to act as a spin register where each nanomagnet encodes a single bit of information. The experiment has laid the foundations for the extension of the validity of the Landauer principle to the quantum realm. Owing to the fast dynamics and low "inertia" of the single spins used in the experiment, the researchers also showed how an erasure operation can be carried out at the lowest possible thermodynamic cost—that imposed by the Landauer principle—and at a high speed.
Challenges
The principle is widely accepted as physical law, but it has been challenged for using circular reasoning and faulty assumptions. Others have defended the principle, and Sagawa and Ueda (2008) and Cao and Feito (2009) have shown that Landauer's principle is a consequence of the second law of thermodynamics and the entropy reduction associated with information gain.
On the other hand, recent advances in non-equilibrium statistical physics have established that there is no a priori relationship between logical and thermodynamic reversibility. It is possible that a physical process is logically reversible but thermodynamically irreversible. It is also possible that a physical process is logically irreversible but thermodynamically reversible. At best, the benefits of implementing a computation with a logically reversible system are nuanced.
In 2016, researchers at the University of Perugia claimed to have demonstrated a violation of Landauer’s principle, though their conclusions were disputed.
See also
Quantum speed limit
Bremermann's limit
Bekenstein bound
Kolmogorov complexity
Entropy in thermodynamics and information theory
Information theory
Jarzynski equality
Limits of computation
Extended mind thesis
Maxwell's demon
Koomey's law
No-deleting theorem
References
Further reading
External links
Public debate on the validity of Landauer's principle (conference Hot Topics in Physical Informatics, November 12, 2013)
Introductory article on Landauer's principle and reversible computing
Maroney, O.J.E. " Information Processing and Thermodynamic Entropy" The Stanford Encyclopedia of Philosophy.
Eurekalert.org: "Magnetic memory and logic could achieve ultimate energy efficiency", July 1, 2011
Thermodynamic entropy
Entropy and information
Philosophy of thermal and statistical physics
Principles
Limits of computation | Landauer's principle | [
"Physics",
"Chemistry",
"Mathematics"
] | 973 | [
"Physical phenomena",
"Philosophy of thermal and statistical physics",
"Physical quantities",
"Thermodynamic entropy",
"Entropy and information",
"Entropy",
"Thermodynamics",
"Statistical mechanics",
"Limits of computation",
"Dynamical systems"
] |
3,108,990 | https://en.wikipedia.org/wiki/Sarcopenia | Sarcopenia (ICD-10-CM code M62.84) is a type of muscle loss that occurs with aging and/or immobility. It is characterized by the degenerative loss of skeletal muscle mass, quality, and strength. The rate of muscle loss is dependent on exercise level, co-morbidities, nutrition and other factors. The muscle loss is related to changes in muscle synthesis signalling pathways. It is distinct from cachexia, in which muscle is degraded through cytokine-mediated degradation, although the two conditions may co-exist. Sarcopenia is considered a component of frailty syndrome. Sarcopenia can lead to reduced quality of life, falls, fracture, and disability.
Sarcopenia is a factor in changing body composition. When associated with aging populations, certain muscle regions are expected to be affected first, specifically the anterior thigh and abdominal muscles. In population studies, body mass index (BMI) is seen to decrease in aging populations while bioelectrical impedance analysis (BIA) shows body fat proportion rising.
A related condition, steatosarcopenia, has been proposed by the Steatosarcopenia & Sarcopenia Brazilian Study Group. It is characterized by loss of skeletal muscle mass, strength, or performance combined with excessive deposition of ectopic fat within muscle tissue in the same individual, not necessarily related to excess total body fat mass.
Signs and symptoms
The hallmark sign of sarcopenia is loss of lean muscle mass, or muscle atrophy. The change in body composition may be difficult to detect due to obesity, changes in fat mass, or edema. Changes in weight, limb or waist circumference are not reliable indicators of muscle mass changes. Sarcopenia may also cause reduced strength, functional decline and increased risk of falling. Sarcopenia may also have no symptoms until it is severe and is often unrecognized. Research has shown, however, that hypertrophy may occur in the upper parts of the body to compensate for this loss of lean muscle mass. Therefore, one early indicator of the onset of sarcopenia can be significant loss of muscle mass in the anterior thigh and abdominal muscles.
Causes
There are many proposed causes of sarcopenia and it is likely the result of multiple interacting factors. Understanding of the causes of sarcopenia is incomplete, however changes in hormones, immobility, age-related muscle changes, nutrition and neurodegenerative changes have all been recognized as potential causative factors.
The degree of sarcopenia is determined by two factors: the initial amount of muscle mass and the rate at which muscle mass declines. Due to variations in these factors across the population, the rate of progression and the threshold at which muscle loss becomes apparent is variable. Immobility dramatically increases the rate of muscle loss, even in younger people. Other factors that can increase the rate of progression of sarcopenia include decreased nutrient intake, low physical activity, or chronic disease. Additionally, epidemiological research has indicated that early environmental influences may have long-term effects on muscle health. For example, low birth weight, a marker of a poor early environment, is associated with reduced muscle mass and strength in adult life.
Pathophysiology
There are multiple theories proposed to explain the mechanisms of muscle changes of sarcopenia including changes in myosatellite cell recruitment, changes in anabolic signalling, protein oxidation, inflammation, and developmental factors. The pathologic changes of sarcopenia include a reduction in muscle tissue quality as reflected in the replacement of muscle fibers with fat, an increase in fibrosis, changes in muscle metabolism, oxidative stress, and degeneration of the neuromuscular junction. The failure to activate satellite cells upon injury or exercise is also thought to contribute to the pathophysiology of sarcopenia. Additionally, oxidized proteins can lead to a buildup of lipofuscin and cross-linked proteins causing an accumulation of non-contractile material in the skeletal muscle and contribute to sarcopenic muscle.
In sarcopenic muscle the distribution of the types of muscle fibers changes, with a decrease in type II, or "fast twitch," muscle fibers and little to no decrease in type I, or "slow-twitch," muscle fibers. Denervated type II fibers are often converted to type I fibers by reinnervation by slow type I fiber motor nerves. Males may be more susceptible to this aging-related switching of myofiber type, as recent research has shown a higher percentage of "slow twitch" muscle fibers in old compared to young males, but not in old compared to young females.
Aging sarcopenic muscle shows an accumulation of mitochondrial DNA mutations, which has been demonstrated in various other cell types as well. Clones with mitochondrial mutations build up in certain regions of the muscle, which goes along with an about fivefold increase in the absolute mtDNA copy number, that is, these regions are denser. An apparent protective factor preventing cells' buildup of damaged mitochondria is sufficient levels of the protein BNIP3. Deficiency of BNIP3 leads to muscle inflammation and atrophy.
Furthermore, not every muscle is as susceptible to the atrophic effects of aging. For example, in both humans and mice it has been shown that lower leg muscles are not as susceptible to aging as upper leg muscles. This could perhaps be explained by the differential distribution of myofiber type within each muscle group, but this is unknown.
Diagnosis
Multiple diagnostic criteria have been proposed by various expert groups and continues to be an area of research and debate. Despite the lack of a widely accepted definition, sarcopenia was assigned an ICD-10 code (M62.84) in 2016, recognizing it as a disease state.
Sarcopenia can be diagnosed when a patient has muscle mass that is at least two standard deviations below the relevant population mean and has a slow walking speed. The European Working Group on Sarcopenia in Older People (EWGSOP) developed a broad clinical definition for sarcopenia, designated as the presence of low muscle mass and either low muscular strength or low physical performance. Other international groups have proposed criteria that include metrics on walking speed, distance walked in 6 minutes, or grip strength. Hand grip strength alone has also been advocated as a clinical marker of sarcopenia that is simple and cost effective and has good predictive power, although it does not provide comprehensive information.
There are screening tools for sarcopenia that assess patient reported difficulty in doing daily activities such as walking, climbing stairs or standing from a chair and have been shown to predict sarcopenia and poor functional outcomes.
Biomarkers
As sarcopenia is a complex clinical diagnosis, circulating biomarkers have been proposed as proxies for early diagnosis and prediction as well as for follow-up and serial assessment of response to interventions.
Aging and sarcopenia are associated with an increase in inflammatory markers ("inflamm-aging") including: C-reactive protein, tumor necrosis factor, interleukin-8, interleukin-6, granulocyte-monocyte colony-stimulating factor, interferons, and serine protease A1.
Changes in hormones associated with aging and sarcopenia include a reduction in the sex-hormones testosterone and dehydroepiandrosterone sulfate, as well as reduced levels of circulating growth hormone and IGF-1.
Circulating C-terminal agrin fragments (CAF) have been found to be higher in accelerated sarcopenic patients.
Lower plasma levels of the amino acids leucine and isoleucine as well as other essential amino acids were found in frail older people compared to non-frail controls.
Alanine aminotransferase (ALT) is responsible for the transfer of the α-amino group from an α-amino acid to an α-keto acid, transforming pyruvate to alanine in skeletal muscle. Low circulating ALT is a marker for low muscle mass and sarcopenia, as well for increased disease activity in patients with inflammatory bowel disease.
Management
Exercise
Exercise remains the intervention of choice for sarcopenia, but translation of research findings into clinical practice is challenging. The type, duration and intensity of exercise are variable between studies, preventing a standardized exercise prescription for sarcopenia. Lack of exercise is a significant risk factor for sarcopenia and exercise can dramatically slow the rate of muscle loss. Exercise can be an effective intervention because aging skeletal muscle retains the ability to synthesize proteins in response to short-term resistance exercise. Progressive resistance training in older adults can improve physical performance (gait speed) and muscular strength. Increased exercise can produce greater numbers of cellular mitochondria, increase capillary density, and increase the mass and strength of connective tissue.
Medication
There are currently no approved medications for the treatment of sarcopenia. Testosterone or other anabolic steroids have also been investigated for treatment of sarcopenia, and seem to have some positive effects on muscle strength and mass, but cause several side effects and raise concerns of prostate cancer in men and virilization in women. Additionally, recent studies suggest testosterone treatments may induce adverse cardiovascular events.
DHEA and human growth hormone have been shown to have little to no effect in this setting. Growth hormone increases muscle protein synthesis and increases muscle mass, but does not lead to gains in strength and function in most studies. This, and the similar lack of efficacy of its effector insulin-like growth factor 1 (IGF-1), may be due to local resistance to IGF-1 in aging muscle, resulting from inflammation and other age changes.
Other medications under investigation as possible treatments for sarcopenia include ghrelin, vitamin D, angiotensin converting enzyme inhibitors, and eicosapentaenoic acid.
Nutrition
Intake of calories and protein are important stimuli for muscle protein synthesis. Older adults may not utilize protein as efficiently as younger people and may require higher amounts to prevent muscle atrophy. A number of expert groups have proposed an increase in dietary protein recommendations for older age groups to 1.0–1.2 g/kg body weight per day.
Ensuring adequate nutrition in older adults is of interest in the prevention of sarcopenia and frailty, since it is a simple, low-cost treatment approach without major side effects.
Supplements
A component of sarcopenia is the loss of ability for aging skeletal muscle to respond to anabolic stimuli such as amino acids, especially at lower concentrations. However, aging muscle retains the ability of an anabolic response to protein or amino acids at larger doses. Supplementation with larger doses of amino acids, particularly leucine has been reported to counteract muscle loss with aging. Exercise may work synergistically with amino acid supplementation.
β-hydroxy β-methylbutyrate (HMB) is a metabolite of leucine that acts as a signalling molecule to stimulate protein synthesis. It is reported to have multiple targets, including stimulating mTOR and decreasing proteasome expression. Its use to prevent the loss of lean body mass in older adults is consistently supported in clinical trials. More research is needed to determine the precise effects of HMB on muscle strength and function in this age group.
Epidemiology
The prevalence of sarcopenia depends on the definition used in each epidemiological study. Estimated prevalence in people between the ages of 60-70 is 5-13% and increases to 11-50% in people more than 80 years of age. This equates to >50 million people and is projected to affect >200 million in the next 40 years given the rising population of older adults.
Public health impact
Sarcopenia is emerging as a major public health concern given the increased longevity of industrialized populations and growing geriatric population. Sarcopenia is a predictor of many adverse outcomes including increased disability, falls and mortality. Immobility or bed rest in populations predisposed to sarcopenia can cause dramatic impact on functional outcomes. In the elderly, this often leads to decreased biological reserve and increased vulnerability to stressors known as the "frailty syndrome". Loss of lean body mass is also associated with increased risk of infection, decreased immunity, and poor wound healing. The weakness that accompanies muscle atrophy leads to higher risk of falls, fractures, physical disability, need for institutional care, reduced quality of life, increased mortality, and increased healthcare costs. This represents a significant personal and societal burden and its public health impact is increasingly recognized.
Etymology
The term sarcopenia stems from Greek σάρξ sarx, "flesh" and πενία penia, "poverty". This was first proposed by Rosenberg in 1989, who wrote that "there may be no single feature of age-related decline that could more dramatically affect ambulation, mobility, calorie intake, and overall nutrient intake and status, independence, breathing, etc".
Sarcopenia is distinct from cachexia, in which muscle is degraded through cytokine-mediated degradation, although the two conditions may co-exist.
Research directions
There are significant opportunities to better understand the causes and consequences of sarcopenia and help guide clinical care. This includes elucidation of the molecular and cellular mechanisms of sarcopenia, further refinement of reference populations by ethnic groups, validation of diagnostic criteria and clinical tools, as well as tracking of incidence of hospitalization admissions, morbidity, and mortality. Identification and research on potential therapeutic approaches and timing of interventions is also needed.
There are currently no drugs approved to treat muscle wasting in people with chronic diseases, and there is therefore an unmet need for anabolic drugs with few side effects. One aspect hindering drug approval for treatments for cachexia and sarcopenia is disagreement over endpoints. Several clinical trials have found that selective androgen receptor modulators (SARMs) improve lean mass in humans, but it is not clear whether strength and physical function are also improved. After promising results in a phase II trial, a phase III trial of the SARM ostarine showed an increase in lean body mass but no significant improvement in function. It and other drugs, such as the growth hormone secretagogue anamorelin, have been refused regulatory approval despite significant increases in lean mass, due to a lack of evidence that they increased physical performance. Preventing decline in functionality was not considered an acceptable endpoint by the Food and Drug Administration. It is not known how SARMs interact with dietary protein intake and resistance training in people with muscle wasting.
See also
References
Further reading
Aging-associated diseases
Geriatrics
Rehabilitation medicine
Senescence | Sarcopenia | [
"Chemistry",
"Biology"
] | 3,025 | [
"Senescence",
"Aging-associated diseases",
"Metabolism",
"Cellular processes"
] |
3,109,025 | https://en.wikipedia.org/wiki/Trade-weighted%20effective%20exchange%20rate%20index | The trade-weighted effective exchange rate index, a common form of the effective exchange rate index, is a multilateral exchange rate index. It is compiled as a weighted average of exchange rates of home versus foreign currencies, with the weight for each foreign country equal to its share in trade. Depending on the purpose for which it is used, it can be export-weighted, import-weighted, or total-external trade weighted.
Overview
The trade-weighted effective exchange rate index is an economic indicator for comparing the exchange rate of a country against those of their major trading partners. By design, movements in the currencies of those trading partners with a greater share in an economy's exports and imports will have a greater effect on the effective exchange rate. In a multilateral, highly globalized, world, the effective exchange rate index is much more useful than a bilateral exchange rate, such as that between the Australian dollar and the United States dollar, for assessing changes in the competitiveness due to exchange rate movements.
Generally, the weighting method is geometric weighting rather than arithmetic weighting. Refer to weighted geometric mean. The use of trade weights in a globalized economy is potentially misleading, because the amount of value added content in exports destined for a country may deviate significantly from the gross value of exports shipped to that country. See the entry under effective exchange rate index for an alternative approach to compiling an effective exchange rate index.
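A toy illustration of the geometric weighting described above (all rates and trade shares are hypothetical, and the function name is my own):

```python
import math

def effective_index(rates, weights):
    """Geometrically weighted effective exchange rate index.

    rates   -- bilateral exchange rates relative to a base period, per partner
    weights -- trade shares per partner (must sum to 1)
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return math.prod(rates[c] ** weights[c] for c in rates)

# Home currency up 10% vs USD, down 5% vs EUR, flat vs JPY.
index = effective_index(
    {"USD": 1.10, "EUR": 0.95, "JPY": 1.00},
    {"USD": 0.5, "EUR": 0.3, "JPY": 0.2},
)  # about 1.033: a net effective appreciation of roughly 3.3%
```

Because the weighting is geometric, equal-and-opposite percentage moves against equally weighted partners cancel exactly, which is one reason geometric weighting is preferred over arithmetic weighting.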
Interpretation
The interpretation of the effective exchange rate is that if the index rises, other things being equal, the purchasing power of that currency also rises (the currency strengthened against those of the country's or area's trading partners). That will reduce the cost of imports but will undermine the competitiveness of exports. Other things refer, in particular, to the relative inflation rates of the economy as compared to the inflation rates of its trading partners. To account for all effects of relative inflation rates, the real effective exchange rate index is compiled as the product of the effective exchange rate index and the relative price index between the home economy and the trading partners.
References
External links
Monthly data from U.S. Federal Reserve: Monthly Rates
Data from Bank for International Settlements: Effective exchange rate indices
Effective exchange rates of the euro from the European Central Bank: data, methodology
Index numbers
Economic indicators
Foreign exchange market | Trade-weighted effective exchange rate index | [
"Mathematics"
] | 468 | [
"Index numbers",
"Mathematical objects",
"Numbers"
] |
3,110,057 | https://en.wikipedia.org/wiki/Gamma%20Centauri | Gamma Centauri, Latinized from γ Centauri, is a binary star system in the southern constellation of Centaurus. It has the proper name Muhlifain, not to be confused with Muliphein, which is γ Canis Majoris; both names derive from the same Arabic root. The system is visible to the naked eye as a single point of light with a combined apparent visual magnitude of +2.17; individually they are third-magnitude stars.
This system is located at a distance of about 130 light-years from the Sun based on parallax. In 2000, the pair had an angular separation of 1.217 arcseconds with a position angle of 351.9°. Their positions have been observed since 1897, which is long enough to estimate an orbital period of 84.5 years and a semimajor axis of 0.93 arcsecond. At the distance of this system, this is equivalent to a physical separation of about 37 AU.
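The conversions used in this paragraph are simple small-angle formulas; an illustrative sketch (the inputs are rounded values close to those quoted for this system, not the catalogue measurements):

```python
def parallax_to_parsecs(parallax_mas):
    """Distance in parsecs from an annual parallax in milliarcseconds."""
    return 1000.0 / parallax_mas

def projected_separation_au(angle_arcsec, distance_pc):
    """Projected separation in AU: s = theta[arcsec] * d[pc]."""
    return angle_arcsec * distance_pc

d_pc = parallax_to_parsecs(25.0)           # 40 pc, i.e. about 130 light-years
sep = projected_separation_au(0.93, d_pc)  # about 37 AU for the 0.93" semimajor axis
```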
The combined stellar classification of the pair is A1IV+; when they are separated out they have individual classes of A1IV and A0IV, suggesting they are A-type subgiant stars in the process of becoming giants. The star Tau Centauri lies relatively close to Gamma Centauri in space, and there is a 98% chance that they are co-moving stars.
Etymology
In Chinese astronomy, 庫樓 (Kù Lóu), meaning Arsenal, refers to an asterism consisting of γ Centauri, ζ Centauri, η Centauri, θ Centauri, 2 Centauri, HD 117440, ξ1 Centauri, τ Centauri, D Centauri and σ Centauri. Consequently, the Chinese name for γ Centauri itself is 庫樓一 (Kù Lóu yī, the First Star of Arsenal).
The people of Aranda and Luritja tribe around Hermannsburg, Central Australia named a quadrangular arrangement comprising this star, δ Cen (Ma Wei), δ Cru (Imai) and γ Cru (Gacrux) as Iritjinga ("The Eagle-hawk").
References
A-type subgiants
Binary stars
Centaurus
Centauri, Gamma
BD-48 7597
110304
061932
4819
TIC objects | Gamma Centauri | [
"Astronomy"
] | 462 | [
"Centaurus",
"Constellations"
] |
3,110,084 | https://en.wikipedia.org/wiki/Upsilon%20Orionis | Upsilon Orionis (υ Ori, υ Orionis) is a star in the constellation Orion. It has the traditional name Thabit or Tabit (ﺛﺎﺑﺖ, Arabic for "the endurer"), a name shared with Pi3 Orionis. It is a blue-white main sequence star of apparent magnitude 4.62 located over 1,300 light-years distant from the Solar System. It is a suspected Beta Cephei variable.
Name
Located south of Iota Orionis, Upsilon Orionis is one of two stars (the other is 29 Orionis) marking the top of Orion's right boot in Johann Bayer's Uranometria (1603). It was given the number 36 by John Flamsteed, while its proper name appears to be derived from the Arabic Al Thabit "the endurer". In his Star-Names and Their Meanings (1899), American amateur naturalist Richard Hinckley Allen noted that the name appeared on the star atlas Geography of the Heavens, composed by Elijah Hinsdale Burritt, but its ultimate origin was unknown.
Properties
Since 1943, this star has been consistently defined as a B0 main sequence star used as a reference for classifying the spectra of other stars on the MK scale, although in other studies it has been classified as O9V and O9.5V. The Galactic O-Star Spectroscopic Survey defined it as the standard star for the O9.7V spectral type in 2011, but the 2016 version redefined it as B0V.
In a 1981 paper, Thabit was observed to have nonradial pulsations over a period of around 12 hours, and has been classified as a slowly pulsating B star. Subsequent review of Hipparcos catalog data indicated it was most likely a Beta Cephei variable, and is hence considered a candidate for that class. These are blue-white main sequence stars of around 10 to 20 times the mass of the Sun that pulsate with periods of 0.1 to 0.3 days; their changes in magnitude are much more pronounced in the ultraviolet than in the visual spectrum. It is classified as a Beta Cephei variable by the American Association of Variable Star Observers, and has an apparent magnitude of +4.62.
Thabit's parallax has been measured at , yielding a distance of approximately 1,325 light-years from Earth. Spectroscopic observations found it to be 1,260 light-years distant, with a radius 5.5 times and a luminosity 32,000 times that of the Sun, an effective temperature of 32,900 K, and a mass 17.5 times that of the Sun. It is one of the most massive stars of the Orion OB1c association (in Orion's Sword).
Notes
References
Orion (constellation)
Orionis, Upsilon
Thabit
Beta Cephei variables
Orionis, 36
025923
O-type main-sequence stars
1855
036512
Durchmusterung objects
B-type main-sequence stars | Upsilon Orionis | [
"Astronomy"
] | 620 | [
"Constellations",
"Orion (constellation)"
] |
3,110,099 | https://en.wikipedia.org/wiki/Gamma%20Librae | Gamma Librae (γ Librae, abbreviated Gamma Lib, γ Lib) is a suspected binary star system in the constellation of Libra. It is visible to the naked eye, having an apparent visual magnitude of +3.91. Based upon an annual parallax shift of 19.99 mas as seen from Earth, it lies 163 light years from the Sun.
The primary component (designated Gamma Librae A) has been formally named Zubenelhakrabi, the traditional name of the system.
Nomenclature
γ Librae (Latinised to Gamma Librae) is the system's Bayer designation. The designations of the two components as Gamma Librae A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
Gamma Librae bore the traditional name Zuben (el) Hakrabi (also rendered as Zuben-el-Akrab and corrupted as Zuben Hakraki). The name is a modification of the Arabic زبانى العقرب Zubān al-ʿAqrab "the claws of the scorpion", a name that dates to before Libra was a distinct constellation from Scorpius. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Zubenelhakrabi for the component Gamma Librae A on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Root, refers to an asterism consisting of Gamma Librae, Alpha2 Librae, Iota Librae and Beta Librae. Consequently, the Chinese name for Gamma Librae itself is (), "the Third Star of Root".
Properties
Because the star lies near the ecliptic, it is subject to occultations by the Moon, allowing the angular size to be measured. As of 1940, the pair had an angular separation of 0.10 arc seconds along a position angle of 191°.
The yellow-hued primary, component Aa, is an evolved G-type giant star with a stellar classification of G8.5 III and an estimated age of 4.3 billion years. It has 1.15 times the mass of the Sun and has expanded to 11.14 times the Sun's radius. The star is radiating around 72 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,786 K. There is a magnitude 11.2 visual companion, component B, at an angular separation of 42.5 arc seconds along a position angle of 157°, as of 2013.
At its distance, the visual magnitude is diminished by an extinction of 0.11 due to interstellar dust. The system is moving closer to the Sun with a radial velocity of −26.71 km/s.
Planetary system
On 11 April 2018, the discovery of two gas giant planets orbiting Gamma Librae was announced.
References
Zubenelhakrabi
Librae, 38
Librae, Gamma
076333
G-type giants
138905
5787
CD-27 10464
Libra (constellation)
Planetary systems with two confirmed planets | Gamma Librae | [
"Astronomy"
] | 709 | [
"Libra (constellation)",
"Constellations"
] |
3,110,111 | https://en.wikipedia.org/wiki/Delta%20Librae | Delta Librae, Latinized from δ Librae, is a variable star in the constellation Libra. It has the traditional name Zuben Elakribi, a variant of the traditional name of Gamma Librae. With μ Virginis it forms one of the Akkadian lunar mansions Mulu-izi(meaning "Man-of-fire").
δ Librae is approximately 300 light years from the Earth and the primary, component A, belongs to the spectral class B9.5V, indicating it is a B-type main-sequence star. It is visible to the naked eye with an apparent visual magnitude of 4.93 and is moving closer to the Sun with a radial velocity of −39 km/s. This is an Algol-like eclipsing binary star system, with a period of 2.3274 days and an eccentricity of 0.07. Its apparent magnitude varies from 4.91m to 5.9m. The secondary is filling its Roche lobe and there is evidence of large-scale mass transfer in the past, with the secondary being more evolved than the primary.
Along with λ Tauri, it was one of the first stars on which rotational line broadening was observed, by Frank Schlesinger in 1911.
References
Algol variables
B-type main-sequence stars
Eclipsing binaries
Zuben Elakribi
Libra (constellation)
Librae, Delta
Durchmusterung objects
Librae, 19
132742
073473
5586 | Delta Librae | [
"Astronomy"
] | 316 | [
"Libra (constellation)",
"Constellations"
] |
3,110,120 | https://en.wikipedia.org/wiki/Epsilon%20Draconis | Epsilon Draconis, Latinized from ε Draconis, is a fourth-magnitude star in the constellation Draco. This star along with Delta Draconis (Altais), Pi Draconis and Rho Draconis forms an asterism known as Al Tāis, meaning "the Goat".
In Chinese astronomy, (), meaning the Celestial Kitchen, refers to an asterism consisting of Epsilon Draconis, Delta Draconis, Sigma Draconis, Rho Draconis, 64 Draconis and Pi Draconis. Consequently, the Chinese name for Epsilon Draconis itself is (, .) Most authors do not use a traditional name for this star, using instead the Bayer designation; but Bečvář (1951) listed it as Tyl.
Visibility
With a declination in excess of 70 degrees north, Epsilon Draconis is principally visible in the northern hemisphere, with southern locations north of 20° South able to see it just above the horizon. The star is circumpolar throughout all of Europe, China, most of India and as far south as the tip of the Baja peninsula in North America as well as other locations around the globe having a latitude greater than ± 20° North. Since Epsilon Draconis has an apparent magnitude of almost 4.0, the star is easily observable to the naked eye as long as one's stargazing is not hampered by the light pollution common to most cities.
The best time for observation is in the evening sky during the summer months, when the "Dragon constellation" passes the meridian at midnight, but given its circumpolar nature in the northern hemisphere, it is visible to most of the world's inhabitants throughout the year.
Properties
Epsilon Draconis is a yellow giant star with a spectral type of G8III. It has a radius that has been estimated at 11 solar radii and a mass of 2.7 solar masses. Compared to most G-class stars, Epsilon Draconis is relatively young, with an estimated age of around 500 million years. Like the majority of giant stars, Epsilon Draconis rotates slowly on its axis, with a rotational velocity of 1.2 km/s, a rate at which the star takes approximately 420 days to complete one rotation.
In 2007, Floor van Leeuwen and his team calibrated the star's apparent magnitude at 3.9974 with an updated parallax of 22.04 ± 0.37 milliarcseconds, yielding a distance of 45.4 parsecs or approximately 148 light years from Earth. Given a surface temperature of 5,068 Kelvin, theoretical calculations would yield a total luminosity for the star of about 60 times the solar luminosity.
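The parallax-to-distance figures quoted above follow from simple arithmetic. This is an illustrative sketch (not part of the original article) using the 22.04 mas value from the text and only the definition of the parsec:

```python
# Convert an annual parallax in milliarcseconds to a distance.
# d [pc] = 1 / p [arcsec]; 1 pc = 3.26156 light-years.
def parallax_to_distance(parallax_mas):
    parsecs = 1000.0 / parallax_mas
    light_years = parsecs * 3.26156
    return parsecs, light_years

pc, ly = parallax_to_distance(22.04)
print(f"{pc:.1f} pc, {ly:.0f} ly")  # 45.4 pc, 148 ly
```

The same relation underlies the distances quoted for the other stars in this collection.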
Star system
Epsilon Draconis is resolvable as a double star in telescopes with an aperture of 10 centimeters or larger. The companion has an apparent magnitude of 7.3 at an angular distance of 3.2 arcseconds. It is a giant of spectral class F5, orbiting the yellow giant at about 130 astronomical units.
See also
Lists of stars in the constellation Draco
Class G Stars
Variable star
Double star
References
External links
Astrophotographs:Epsilon Draconis
SkyView Image: Epsilon Draconis
G-type giants
Double stars
Draco (constellation)
Draconis, Epsilon
BD+69 1070
Draconis, 63
188119
097433
7582
Tyl | Epsilon Draconis | [
"Astronomy"
] | 711 | [
"Constellations",
"Draco (constellation)"
] |
3,110,133 | https://en.wikipedia.org/wiki/Beta%20Columbae | Beta Columbae (β Columbae, abbreviated Beta Col, β Col), officially named Wazn , is the second-brightest star in the southern constellation of Columba. It has an apparent visual magnitude of 3.1, which is bright enough to be viewed with the naked eye even from an urban location. Parallax measurements place it at a distance of about from the Sun.
Nomenclature
Beta Columbae is the star's Bayer designation. It has the traditional name Wazn (or Wezn) from the Arabic وزن "weight". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Wazn for this star.
In Chinese, (), meaning Son, refers to an asterism consisting of Beta Columbae and Lambda Columbae. Consequently, Beta Columbae itself is known as (, .)
Properties
The spectrum of Beta Columbae matches a stellar classification of K1 IIICN+1, where the 'III' luminosity class indicates this is a giant star that has exhausted the supply of hydrogen at its core and evolved away from the main sequence of stars like the Sun. The notation 'CN+1' indicates a higher than normal level of cyanogen (CN) absorption in the atmosphere of the star. The interferometry-measured angular diameter of this star, after correcting for limb darkening, is , which, at its estimated distance, equates to a physical radius of about 11.5 times the radius of the Sun. Despite having expanded to this radius, Beta Columbae only has about a 10% greater mass than the Sun. The outer envelope of this star is radiating energy at an effective temperature of 4,545 K, resulting in an orange hue that is typical of a cool, K-type star.
Beta Columbae has a high proper motion across the celestial sphere and is moving at an unusually large speed of relative to the Sun. About 107,200 years ago, it made a close approach to the Beta Pictoris system. The estimated separation of the two stars at this time was around and Beta Columbae may have perturbed outlying planetesimals within the debris disk surrounding Beta Pictoris.
References
Columba (constellation)
K-type giants
Columbae, Beta
039425
027628
CD-35 02546
Wazn
2040
TIC objects | Beta Columbae | [
"Astronomy"
] | 545 | [
"Columba (constellation)",
"Constellations"
] |
3,110,162 | https://en.wikipedia.org/wiki/Iota%20Draconis | Iota Draconis (ι Draconis, abbreviated Iota Dra, ι Dra), also named Edasich , is a star in the northern circumpolar constellation of Draco. A visually unremarkable star of apparent magnitude 3.3, in 2002 it was discovered to have a planet orbiting it (designated Iota Draconis b, later named Hypatia). From parallax measurements, this star is located at a distance of about from the Sun.
Nomenclature
ι Draconis (Latinised to Iota Draconis) is the star's Bayer designation. On discovery the planet was designated Iota Draconis b (or Edasich b).
It bore the traditional name Edasich, derived from the Arabic ' of Ulug Beg and the Dresden Globe, or 'Male hyena' by Kazwini, with Eldsich being recorded in the Century Cyclopedia. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Edasich for this star.
In July 2014 the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning name was Hypatia for this planet. The winning name was submitted by Hypatia, a student society of the Physics Faculty of the Universidad Complutense de Madrid, Spain. Hypatia was a famous Greek astronomer, mathematician, and philosopher.
In Chinese, (), meaning Left Wall of Purple Forbidden Enclosure, refers to an asterism consisting of Iota Draconis, Theta Draconis, Eta Draconis, Zeta Draconis, Upsilon Draconis, 73 Draconis, Gamma Draconis and 23 Cassiopeiae. Consequently, the Chinese name for Iota Draconis itself is (, .), representing (), meaning Left Pivot. 左樞 (Zuǒshū) is westernized into Tsao Choo by R.H. Allen with the same meaning.
Properties
Iota Draconis is larger and more massive than the Sun, with 1.6 times the mass and nearly 12 times the radius. The spectrum matches a stellar classification of K2 III, indicating this is an evolved star that has exhausted the supply of hydrogen at its core and left the main sequence. It is currently on the red giant branch, fusing hydrogen in a shell around its helium core. With an expanded outer envelope, it is radiating over 50 times the luminosity of the Sun at an effective temperature of 4,504 K. This temperature gives it an orange hue that is a characteristic of K-type stars. It is rotating at a leisurely rate, with a period of around 434 days. It is about 2.5 billion years old.
In the past, Iota Draconis has been suspected of variability. However, the star has been found to have a constant luminosity to within about 0.004 magnitudes. Hence, as of 2010, the variability remains unconfirmed. An excess emission of infrared radiation at a wavelength of 70 μm suggests the presence of a circumstellar disk of dust; what astronomers term a debris disk.
Edasich is the faintest star of which a color has been reported in pre-telescopic times, and was classified as an orange-red star.
Planetary system
The planetary companion discovered in 2002 was the first planet known to orbit a giant star. The habitable zone for this star lies in the range of 6.8–13.5 Astronomical Units, placing this planet well inside. The alignment of this planet's orbit may make it directly detectable via the transit method. Another long-period planet or brown dwarf was discovered in 2021, and the true masses of both planets were measured via astrometry.
References
External links
Extrasolar Planets Encyclopaedia: Notes for star HIP 75458
SolStation: Edasich/Iota Draconis
K-type giants
Draco (constellation)
Draconis, Iota
Draconis, 12
137759
075458
5744
BD+59 1654
Edasich
Planetary systems with two confirmed planets
Circumstellar disks
Suspected variables
J15245578+5857577 | Iota Draconis | [
"Astronomy"
] | 953 | [
"Constellations",
"Draco (constellation)"
] |
3,110,196 | https://en.wikipedia.org/wiki/Eta%20Pegasi | Eta Pegasi or η Pegasi, formally named Matar , is a binary star in the constellation of Pegasus. The apparent visual magnitude of this star is +2.95, making it the fifth-brightest member of Pegasus. Based upon parallax measurements, the distance to this star is about from the Sun.
Nomenclature
η Pegasi (Latinised to Eta Pegasi) is the star's Bayer designation.
It bore the traditional name Matar, derived from the Arabic سعد المطر Saʽd al Maṭar, meaning lucky star of rain. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Matar for this star on 21 August 2016 and it is now so entered in the IAU Catalog of Star Names.
In Chinese, (), meaning Resting Palace, refers to an asterism consisting η Pegasi, λ Pegasi, μ Pegasi, ο Pegasi, τ Pegasi and ν Pegasi. Consequently, η Pegasi itself is known as (), "the Fourth Star of Resting Palace".
Namesake
USS Matar (AK-119) was a United States Navy Crater-class cargo ship named after the star.
Properties
The Eta Pegasi system consists of a pair of stars in a binary orbit with a period of 813 days and an eccentricity of 0.183. The primary component is a bright giant star with a stellar classification of G2 II and about three and a half times the mass of the Sun. The interferometry-measured angular diameter of this star, after correcting for limb darkening, is , which, at its estimated distance, equates to a physical radius of more than 24 times the radius of the Sun. It is radiating 331 times the luminosity of the Sun from its expanded outer envelope at an effective temperature of 4,970 K. The rotation rate of the star slowed as it expanded, so it has a projected rotational velocity of 1.7 km/s with an estimated rotation period of 818 days.
The secondary component is an F-type main sequence star with a classification of F0 V. The secondary star is 3.56 magnitudes fainter than the primary star at 700 nm. There are also two class-G stars further away that may or may not be physically related to the main pair.
References
G-type bright giants
F-type main-sequence stars
Binary stars
Pegasus (constellation)
Pegasi, Eta
Durchmusterung objects
Pegasi, 44
215182
112158
8650
Matar | Eta Pegasi | [
"Astronomy"
] | 536 | [
"Pegasus (constellation)",
"Constellations"
] |
3,110,216 | https://en.wikipedia.org/wiki/Omicron%20Piscium | Omicron Piscium (ο Piscium, abbreviated Omi Psc, ο Psc) is a binary star in the constellation of Pisces. It is visible to the naked eye, having an apparent visual magnitude of 4.27. Based upon an annual parallax shift of 11.67 mas as seen from the Earth, the system is located roughly 280 light-years from the Sun. It is positioned near the ecliptic, so is subject to occultation by the Moon. It is a member of the thin disk population of the Milky Way.
The two components are designated Omicron Piscium A (formally named Torcular) and B.
Nomenclature
ο Piscium (Latinised to Omicron Piscium) is the system's Bayer designation. The designations of the two components as Omicron Piscium A and B derives from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
The system bore the traditional name Torcularis septentrionalis, taken from the 1515 Almagest. The name is translated from the Greek ληνός ('full'), which was "erroneously written for λίνος" ('linen'). In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Torcular for the component Omicron Piscium A on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Official in Charge of the Pasturing, refers to an asterism consisting of Omicron Piscium, Eta Piscium, Rho Piscium, Pi Piscium and 104 Piscium. Consequently, the Chinese name for Omicron Piscium itself is (, .)
Properties
This is a probable astrometric binary system. The visible component, Omicron Piscium A, is an evolved K-type giant star with a stellar classification of K0 III. At the estimated age of 390 million years, it is most likely (76% chance) on the horizontal branch, rather than the red-giant branch. As such, it is a red clump star that is generating energy through helium fusion at its core. The star has three times the mass of the Sun and has expanded to over 14 times the Sun's radius. It is radiating 132 times the Sun's luminosity from its photosphere at an effective temperature of 5,004 K.
References
K-type giants
Horizontal-branch stars
Astrometric binaries
Torcular
Piscium, Omicron
Pisces (constellation)
Piscium, 110
Durchmusterung objects
010761
008198
0510 | Omicron Piscium | [
"Astronomy"
] | 613 | [
"Pisces (constellation)",
"Constellations"
] |
3,110,241 | https://en.wikipedia.org/wiki/53%20Eridani | 53 Eridani (abbreviated 53 Eri), also designated l Eridani (l Eri), is a binary star in the constellation of Eridanus. The system has a combined apparent magnitude of 3.87. Parallax estimates made by the Hipparcos spacecraft put it at a distance of about 110 light-years, or 33.7 parsecs, from the Sun.
The two components are designated 53 Eridani A (officially named Sceptrum) and B.
Nomenclature
53 Eridani is the system's Flamsteed designation; l Eridani is its Bayer designation. The designations of the two components as 53 Eridani A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
53 Eridani bore the traditional name ('scepter'), as it was one of the brighter stars, designated "p Sceptri (Brandenburgici)", in the obsolete constellation of Sceptrum Brandenburgicum. The constellation was coined by Gottfried Kirch to honor the Brandenburg province of Prussia, and although it was later used in other atlases by Johann Elert Bode, the constellation fell out of use. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Sceptrum for the component 53 Eridani A on 30 June 2017 and it is now so included in the List of IAU-approved Star Names.
Properties
53 Eridani is a visual binary, whose orbit is calculated from the observed motions of the two stars. The primary star, 53 Eridani A, is an evolved red giant with a spectral type of K1III. It is almost ten times as wide as the Sun and slightly more massive than the Sun. The secondary star, 53 Eridani B, has an apparent magnitude of 6.95 and its spectral type is unknown. The two have an orbital period of 77 years and a rather eccentric orbit, with an eccentricity of 0.666. The total mass of the system is .
References
Eridanus (constellation)
Eridani, 53
Eridani, l
Sceptrum
K-type giants
Binary stars
Durchmusterung objects
9160
029503
021594
1481 | 53 Eridani | [
"Astronomy"
] | 509 | [
"Eridanus (constellation)",
"Constellations"
] |
3,110,270 | https://en.wikipedia.org/wiki/Beta%20Leporis | Beta Leporis (β Leporis, abbreviated Beta Lep, β Lep), formally named Nihal , is the second brightest star in the constellation of Lepus.
Nomenclature
Beta Leporis is the star's Bayer designation. It is also known by the traditional name Nihal, Arabic for "quenching their thirst". The occasional spelling Nibal appears to be due to a misreading. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Nihal for this star.
In Chinese, (), meaning Toilet, refers to an asterism consisting of β Leporis, α Leporis, γ Leporis and δ Leporis. Consequently, the Chinese name for β Leporis itself is (), "the Second Star of Toilet".
Properties
Based on parallax measurements from the Hipparcos astrometry satellite, this star is located about from the Earth. It has an apparent visual magnitude of 2.84 and a stellar classification of G5 II. The mass of this star is 3.5 times the mass of the Sun and it is about 240 million years old, sufficient time for a star this massive to consume the hydrogen at its core and evolve away from the main sequence, becoming a G-type bright giant. The angular diameter of Beta Leporis, after correction for limb darkening, is . At the distance of this star, this yields a physical radius of 15.9 times the radius of the Sun.
This is a double star system and may be a physical binary; the second star has an apparent magnitude of 7.34. Using adaptive optics on the AEOS telescope at Haleakala Observatory, the pair was found to be separated by an angle of 2.58 arcseconds at a position angle of 1.4°. Component B has been observed to fluctuate in brightness and is catalogued as the suspected variable star NSV 2008.
References
Nihal
Lepus (constellation)
G-type bright giants
5
Leporis, Beta
Leporis, 09
036079
025606
BD-20 1096
1829 | Beta Leporis | [
"Astronomy"
] | 484 | [
"Sky regions",
"Lepus (constellation)",
"Multiple stars",
"Constellations"
] |
3,110,306 | https://en.wikipedia.org/wiki/Zeta%20Persei | Zeta Persei (ζ Per, ζ Persei) is a star in the northern constellation of Perseus. With an apparent visual magnitude of 2.9, it can be readily seen with the naked eye. Parallax measurements place it at a distance of about from Earth, though measuremets of its Ca II lines place it at .
Description
This is a lower luminosity supergiant star with a stellar classification of B1 Ib. This is an enormous star, with an estimated 26–27 times the Sun's radius and 13–16 times the Sun's mass. It has about 47,000 times the luminosity of the Sun and it is radiating this energy at an effective temperature of 20,800 K, giving it the blue-white hue of a B-type star. The spectrum displays anomalously high levels of carbon. Zeta Persei has a strong stellar wind that is expelling times the mass of the Sun per year, or the equivalent of the Sun's mass every 4.3 million years.
Zeta Persei has a 9th magnitude companion at an angular separation of 12.9 arcseconds. The two stars have the same proper motion, so they may be physically associated. If so, they are separated by at least 4,000 Astronomical Units. Zeta Persei is a confirmed member of the Perseus OB2 association (Per OB2), also called the Zeta Persei association, which is a moving group of stars that includes 17 massive, high luminosity members with spectral types of O or B, giving them a blue hue. These stars have a similar trajectory through space, suggesting they originated in the same molecular cloud and are about the same age.
Ambiguity
Some sources, including Starry Night (planetarium software), an atlas, and a web site attribute the name 'Atik' to Zeta Persei instead of nearby Omicron Persei.
See also
HD 121228 - Same spectral class
References
External links
B-type supergiants
Perseus (constellation)
Persei, Zeta
Durchmusterung objects
Persei, 44
024398
018246
1203
Atik | Zeta Persei | [
"Astronomy"
] | 443 | [
"Perseus (constellation)",
"Constellations"
] |
3,110,350 | https://en.wikipedia.org/wiki/Xi%20Persei | Xi Persei (ξ Persei, abbreviated Xi Per, ξ Per), known also as Menkib , is a star in the constellation of Perseus. Based upon parallax measurements taken during the Hipparcos mission, it is approximately 1,200 light-years from the Sun.
Nomenclature
ξ Persei (Latinised to Xi Persei) is the star's Bayer designation.
It bore the traditional name Menkib, Menchib, Menkhib or Al Mankib, from Mankib al Thurayya (Arabic for "shoulder" [of the Pleiades]). In 2016, the International Astronomical Union (IAU) organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Menkib for this star on 12 September 2016 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Rolled Tongue, refers to an asterism consisting of Xi Persei, Nu Persei, Epsilon Persei, Zeta Persei, Omicron Persei and 40 Persei. Consequently, the Chinese name for Xi Persei itself is (, "the Third Star of Rolled Tongue").
Properties
Xi Persei has an apparent magnitude of +4.06 and is classified as a blue giant (spectral class O7.5III). It is intrinsically 12,700 times brighter than the Sun with absolute magnitude −5.5 in the V band. If the ultraviolet light and light from other wavelengths that emanates from Menkib is included, its total bolometric luminosity is 263,000 times that of the Sun.
The star has a mass of some 30 solar masses and a surface temperature of 35,000 kelvins, making it one of the hottest stars that can be seen with the naked eye. The fluorescence of the California Nebula (NGC 1499) is due to this star's prodigious radiation. It is a member of the Perseus OB2 association of co-moving stars, but may be a runaway star since it is now separated by 200 pc from the association's center and has an unusually high radial velocity.
References
External links
O-type giants
Emission-line stars
Runaway stars
Perseus (constellation)
Persei, Xi
BD+35 0775
Persei, 46
024912
018614
1228
Menkib | Xi Persei | [
"Astronomy"
] | 507 | [
"Perseus (constellation)",
"Constellations"
] |
3,110,387 | https://en.wikipedia.org/wiki/Eta%20Persei | Eta Persei (η Persei, abbreviated Eta Per, η Per), is a binary star and the 'A' component of a triple star system (the 'B' component is the star HD 237009) in the constellation of Perseus. Parallax measurements by the Gaia spacecraft imply that it is 1,000 is light-years away from Earth. At such distance, interstellar dust diminishes its apparent brightness by 0.47magnitudes.
The two components of Eta Persei itself are designated Eta Persei A (officially named Miram, a recent name for the system) and B.
Nomenclature
η Persei (Latinised to Eta Persei) is the binary star's Bayer designation. The designations of its two components as Eta Persei A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
Eta Persei gained the name Miram in the 20th century, though the origin of the name is unknown. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Miram for the component Eta Persei A on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
This star, together with Delta Persei, Psi Persei, Sigma Persei, Alpha Persei and Gamma Persei has been called the Segment of Perseus.
In Chinese, (), meaning Celestial Boat, refers to an asterism consisting of Eta Persei, Gamma Persei, Alpha Persei, Psi Persei, Delta Persei, 48 Persei, Mu Persei and HD 27084. Consequently, the Chinese name for Eta Persei itself is (, .)
Properties
The primary star (η Persei A) has a spectral classification of K3Ib, meaning that it is a lower-luminosity red supergiant star. It has expanded to 170 times the Sun's size and currently emits 7,500 times the Sun's luminosity. Its surface has a cool effective temperature of , cooler than the Sun's, giving it the orange hue typical of K-type stars.
References
Persei, Eta
Perseus (constellation)
K-type supergiants
Miram
0834
017506
Persei, 15
Durchmusterung objects
013268
Double stars | Eta Persei | [
"Astronomy"
] | 527 | [
"Perseus (constellation)",
"Constellations"
] |
3,110,402 | https://en.wikipedia.org/wiki/Kappa%20Persei | Kappa Persei or κ Persei, is a triple star system in the northern constellation of Perseus. Based upon an annual parallax shift of 28.93 mas, it is located at a distance of 113 light-years from the Sun.
The system consists of a spectroscopic binary, designated Kappa Persei A, which can be seen with the naked eye, having an apparent visual magnitude of 3.80. The third star, designated Kappa Persei B, is of magnitude 13.50.
Kappa Persei A's two components are designated Kappa Persei Aa (officially named Misam, the traditional name of the entire system) and Ab.
Nomenclature
κ Persei (Latinised to Kappa Persei) is the system's Bayer designation. The designations of the two constituents as Kappa Persei A and B, and those of A's components - Kappa Persei Aa and Ab - derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
The traditional name comes from the Arabic مِعْصَم miʽṣam 'wrist'.
In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Misam for the component Kappa Persei Aa on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Mausoleum, refers to an asterism consisting of Kappa Persei, 9 Persei, Tau Persei, Iota Persei, Beta Persei (Algol), Rho Persei, 16 Persei and 12 Persei. Consequently, the Chinese name for Kappa Persei itself is (, .).
Properties
At its distance, the visual magnitude of Kappa Persei is diminished by an extinction factor of 0.06 due to interstellar dust. It has a relatively high proper motion totaling 0.230 arcseconds per year. There is a 76.3% chance that it is a member of the Hyades-Pleiades stream of stars that share a common motion through space.
With an estimated age of 4.58 billion years, Kappa Persei Aa is an evolved G-type giant star with a stellar classification of G9.5 IIIb. It is a red clump giant, which means that it is generating energy at its core through the nuclear fusion of helium. The star has about 1.5 times the mass of the Sun and 9 times the Sun's radius. It radiates 40 times the solar luminosity from its outer atmosphere at an effective temperature of 4,857 K.
Kappa Persei B is at an angular separation of 44.10 arc seconds along a position angle of 319°, as of 2009.
References
G-type giants
Horizontal-branch stars
Spectroscopic binaries
Persei, Kappa
Perseus (constellation)
BD+44 0631
Persei, 27
019476
014668
0941 | Kappa Persei | [
"Astronomy"
] | 642 | [
"Perseus (constellation)",
"Constellations"
] |
3,110,432 | https://en.wikipedia.org/wiki/Pi%20Puppis | Pi Puppis, Latinized from π Puppis, also named Ahadi, is the second-brightest star in the southern constellation of Puppis. It has an apparent visual magnitude of 2.733, so it can be viewed with the naked eye at night. Parallax measurements yield an estimated distance of roughly from the Earth. This is a double star with a magnitude 6.86 companion at an angular separation of 0.72 arcsecond and a position angle of 148° from the brighter primary.
The spectrum of Pi Puppis matches a stellar classification of K3 Ib. The Ib luminosity class indicates this a lower luminosity supergiant star that has consumed the hydrogen fuel at its core, evolved away from the main sequence, and expanded to about 235 times the Sun's radius. The effective temperature of the star's outer envelope is approximately 4,000 K, which gives it the orange hue of a K-type star. With a mass 11.7 times that of the Sun, this is a short-lived star with an estimated age of 20 million years.
It is a semiregular variable star that varies in apparent magnitude from a high of 2.70 down to 2.85. Pi Puppis is the brightest star in the open cluster Collinder 135.
Naming
The star has the traditional name Ahadi, which is derived from Arabic for "having much promise". In Chinese, (), meaning Bow and Arrow, refers to an asterism consisting of π Puppis, δ Canis Majoris, η Canis Majoris, HD 63032, HD 65456, ο Puppis, k Puppis, ε Canis Majoris and κ Canis Majoris. Consequently, π Puppis itself is known as (, .)
References
External links
Pi Puppis
Puppis
Puppis, Pi
K-type supergiants
Ahadi
035264
2773
056855
Durchmusterung objects
Binary stars | Pi Puppis | [
"Astronomy"
] | 402 | [
"Puppis",
"Constellations"
] |
3,110,453 | https://en.wikipedia.org/wiki/Alpha%20Corvi | Alpha Corvi (α Corvi, abbreviated Alpha Crv, α Crv), also named Alchiba , is an F-type main-sequence star and, despite its "alpha" designation, is the fifth-brightest star in the constellation of Corvus. Based on parallax measurements made by the Gaia mission, it is approximately 49 light-years from the Sun.
Nomenclature
α Corvi (Latinised to Alpha Corvi) is the star's Bayer designation.
It bore the traditional names Al Chiba ( , 'tent') and Al Minliar al Ghurab (Arabic ) or Minkar al Ghurab. The latter appeared in the catalogue of stars in the Calendarium of Al Achsasi al Mouakket, which was translated into Latin as , 'beak of the crow'. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Alchiba for this star on 12 September 2016 and it is now so included in the List of IAU-approved Star Names.
In Chinese astronomy, Alchiba bears a name meaning 'right linchpin', because it stands alone in the 'right linchpin' asterism of the Chariot mansion (see Chinese constellations); the name was westernized into Yew Hea by R.H. Allen.
Namesake
is a former United States Navy ship.
Properties
Alchiba has a spectral class F1V, classifying it as a main sequence star fusing hydrogen into helium at its core. This star exhibits periodic changes in its spectrum over a three-day period, which suggests it is either a spectroscopic binary or (more likely) a pulsating Gamma Doradus-type variable. Alchiba has 32% more mass and is 37% larger than the Sun. It is four times more luminous and has a surface effective temperature of 7,035 K, giving it the yellow-white hue of an F-type star. The abundance of chemical elements other than hydrogen and helium, which astronomers call metallicity, is slightly lower than that of the Sun.
Alpha Corvi has a common proper motion companion, named Alpha Corvi B (or Alchiba B), located about 3.1 arcsec away. It is a red dwarf with a spectral type of M4V.
See also
List of nearest bright stars
List of nearest F-type stars
Iota Persei
References
External links
Alpha Corvi by Professor Jim Kaler.
Corvus (constellation)
Corvi, Alpha
Alchiba
F-type main-sequence stars
Corvi, 01
059199
4623
105452
CD-24 10174
Spectroscopic binaries
Gliese and GJ objects | Alpha Corvi | [
"Astronomy"
] | 588 | [
"Corvus (constellation)",
"Constellations"
] |
3,110,460 | https://en.wikipedia.org/wiki/Zeta%20Ceti | Zeta Ceti (ζ Ceti, abbreviated Zeta Cet, ζ Cet) is a binary star in the equatorial constellation of Cetus. It has a combined apparent visual magnitude of 3.74, which is bright enough to be seen with the naked eye. Based upon parallax measurements taken during the Hipparcos mission, it is approximately 235 light-years from the Sun.
Zeta Ceti is the primary or 'A' component of a double star system designated WDS J01515-1020 (the secondary or 'B' component is HD 11366). Zeta Ceti's two components are therefore designated WDS J01515-1020 Aa and Ab. Aa is officially named Baten Kaitos , the traditional name of the entire system.
Nomenclature
ζ Ceti (Latinised to Zeta Ceti) is the binary pair's Bayer designation. WDS J01515-1020 A is its designation in the Washington Double Star Catalog. The designations of the two components as WDS J01515-1020 Aa and Ab derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
It bore the traditional name Baten Kaitos, derived from the Arabic بطن قيطس batn qaytus "belly of the sea monster". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Baten Kaitos for the component WDS J01515-1020 Aa on 12 September 2016 and it is now so included in the List of IAU-approved Star Names.
In the catalogue of stars in the Calendarium of Al Achsasi al Mouakket, this star was designated Rabah al Naamat رابع ألنعامة raabi3 al naʽāmāt, which was translated into Latin as Quarta Struthionum, meaning "the fourth ostrich". This star, along with Eta Ceti (Deneb Algenubi), Theta Ceti (Thanih Al Naamat), Tau Ceti (Thalath Al Naamat), and Upsilon Ceti, formed Al Naʽāmāt ('ألنعامة), "the Hen Ostriches".
In Chinese, (), meaning Square Celestial Granary, refers to an asterism consisting of Zeta Ceti, Iota Ceti, Theta Ceti, Eta Ceti, Tau Ceti and 57 Ceti. Consequently, the Chinese name for Zeta Ceti itself is (, ).
Properties
Zeta Ceti is a single-lined spectroscopic binary system with an orbital period of 4.5 years and an eccentricity of 0.59. The primary, Baten Kaitos, is an evolved K-type giant star; a suffix in its stellar classification indicates it is a weak barium star, showing slightly stronger than normal lines of singly-ionized barium. This star has an estimated 2.34 times the mass of the Sun and, at an estimated age of 1.24 billion years, has expanded to 25 times the Sun's radius.
HD 11366 (WDS J01515-1020B), of spectral type K0 III, is further away (419 parsecs, compared to WDS J01515-1020A's 72 parsecs), and is therefore not a member of the system but a chance alignment - this is referred to as an optical companion.
References
K-type giants
Spectroscopic binaries
Suspected variables
Barium stars
Cetus
Baten Kaitos
Ceti, Zeta
BD-11 359
Ceti, 55
011353
008645
0539 | Zeta Ceti | [
"Astronomy"
] | 815 | [
"Cetus",
"Constellations"
] |
3,110,492 | https://en.wikipedia.org/wiki/Xi%20Draconis | Xi Draconis (ξ Draconis, abbreviated Xi Dra, ξ Dra) is a double or binary star in the northern circumpolar constellation of Draco. It has an apparent visual magnitude of 3.75. Based upon parallax measurements, it is located at a distance of from the Sun. At this distance, the apparent magnitude is diminished by 0.03 from extinction caused by intervening gas and dust.
The two components are designated Xi Draconis A (officially named Grumium , a traditional name for the system) and B.
Nomenclature
ξ Draconis (Latinised to Xi Draconis) is the system's Bayer designation. The designations of the two components as Xi Draconis A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
It bore the traditional name Grumium. This is a graphic corruption of the Latin Grunnum 'snout', as Ptolemy had described this star as being on the jawbone of the dragon. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Grumium for the component Xi Draconis A on 12 September 2016 and it is now so included in the List of IAU-approved Star Names.
This star was also known as Nodus I or Nodus Primus. Along with Beta Draconis (Rastaban), Gamma Draconis (Eltanin), Mu Draconis (Erakis) and Nu Draconis (Kuma), it was one of Al ʽAwāyd "the Mother Camels", which were later known as the Quinque Dromedarii.
In Chinese, (), meaning Celestial Flail, refers to an asterism consisting of Xi Draconis, Nu Draconis, Beta Draconis, Gamma Draconis and Iota Herculis. Consequently, the Chinese name for Xi Draconis itself is (, ).
Namesake
USS Grumium (AK-112) was a United States Navy Crater-class cargo ship named after the star.
Properties
Xi Draconis A is of spectral class K2-III. It is not known for certain if Xi Draconis A is on the red giant branch, fusing hydrogen into helium in a shell surrounding an inert helium core, or on the horizontal branch fusing helium into carbon. The possible companion, Xi Draconis B, is a 16th-magnitude star 316 arcseconds away but, most likely, the pairing is just a line-of-sight coincidence.
References
Draco (constellation)
Draconis, Xi
Grumium
K-type giants
087585
Draconis, 32
6688
163588
Durchmusterung objects | Xi Draconis | [
"Astronomy"
] | 613 | [
"Constellations",
"Draco (constellation)"
] |
3,110,528 | https://en.wikipedia.org/wiki/Zeta%20Draconis | Zeta Draconis (ζ Draconis, abbreviated Zet Dra, ζ Dra) is a binary star in the northern circumpolar constellation of Draco. With an apparent visual magnitude of +3.17, it is the fifth-brightest member of this generally faint constellation. Its distance from the Sun has been measured using the parallax technique, yielding an estimate of roughly .
The two components are designated Zeta Draconis A (formally named Aldhibah , after the traditional name of the system) and B.
Nomenclature
ζ Draconis (Latinised to Zeta Draconis) is the system's Bayer designation. The designations of the two components as Zeta Draconis A and B derives from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
Zeta Draconis has the old Arabic name الذئب al-dhiʼb "the wolf" or "the hyena", given in its feminine form "Al Dhiʼbah" (ذئبة) in Allen (1899) (though he mistranslated it as plural "hyenas", which would be الضباع al-ḍibāʽ). It shares the dual form of the name, الذئبين al-dhiʼbayn, with Eta Draconis. It is also known as Nodus III (Third Knot, the knot being a loop in the tail of Draco).
In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Aldhibah for the component Zeta Draconis A on 5 September 2017. It also approved the name Athebyne for Eta Draconis A on the same date. Both are now so included in the List of IAU-approved Star Names.
Zeta Draconis is mentioned in Hindu texts as Tara who was a celestial goddess married to Lord Brhaspati. A divine epic was played out in the night sky when Lord Chandra, the moon, lusted after and abducted Tara, the blue pole star of Brhaspati, the planet Jupiter. By the completion of the epic Tara gives birth to Lord Budha, or Mercury.
In Chinese, (), meaning Left Wall of Purple Forbidden Enclosure, refers to an asterism consisting of Zeta Draconis, Iota Draconis, Eta Draconis, Theta Draconis, Upsilon Draconis, 73 Draconis, Gamma Cephei and 23 Cassiopeiae. Consequently, the Chinese name for Zeta Draconis itself is (, ), representing (), meaning The First Minister. 上弼 (Shǎngbì) is westernized into Shang Pih by R.H. Allen with meaning "the Higher Minister".
Properties
Zeta Draconis A is a giant star with a stellar classification of B6 III. Compared to the Sun, this star is about 2.5 times larger, 3.5 times more massive, and is radiating 148 times as much luminosity. This energy is being emitted from the star's outer envelope at an effective temperature of nearly 13,400 K. The azimuthal rotation velocity along the equator is at least 55 km/s.
The north ecliptic pole is located at right ascension 18h and declination +66.5°. This is located roughly midway between Delta Draconis and Zeta Draconis. The north ecliptic pole almost coincides with the south celestial pole of Venus; Zeta Draconis is also the north pole star of Jupiter.
References
Draco (constellation)
Draconis, Zeta
B-type giants
Aldhibah
083895
Draconis, 22
6396
155763
Durchmusterung objects
Binary stars | Zeta Draconis | [
"Astronomy"
] | 820 | [
"Constellations",
"Draco (constellation)"
] |
3,110,553 | https://en.wikipedia.org/wiki/Statgraphics | Statgraphics is a statistics package that performs and explains basic and advanced statistical functions.
History
The software was created in 1980 by Dr. Neil W. Polhemus while on the Princeton University School of Engineering and Applied Science faculty as a teaching tool for his statistics students. It was made available to the public in 1982, becoming an early example of data science software designed for use on the PC.
Software
The flagship version of Statgraphics is Statgraphics Centurion, a Windows desktop application with capabilities for regression analysis, ANOVA, multivariate statistics, Design of Experiments, statistical process control, life data analysis, machine learning, and data visualization. The data analysis procedures include descriptive statistics, hypothesis testing, regression analysis, analysis of variance, survival analysis, time series analysis and forecasting, sample size determination, multivariate methods, machine learning and Monte Carlo techniques. The SPC menu includes many procedures for quality assessment, capability analysis, control charts, measurement systems analysis, and acceptance sampling. The program also features a DOE Wizard that creates and analyzes statistically designed experiments.
Applications
Statgraphics is frequently used for Six Sigma process improvement. The program has also been used in various health and nutrition-related studies. The software is heavily used in the manufacture of chemicals, pharmaceuticals, medical devices, automobiles, food and consumer goods. It is also widely used in mining, environmental studies, and basic R&D.
Distribution
Statgraphics is distributed by Statgraphics Technologies, Inc., a privately held company based in The Plains, Virginia.
See also
List of statistical packages
Comparison of statistical packages
List of information graphics software
References
Statistical software
Science software for Windows | Statgraphics | [
"Mathematics"
] | 335 | [
"Statistical software",
"Mathematical software"
] |
3,110,873 | https://en.wikipedia.org/wiki/Cyclamic%20acid | Cyclamic acid is a compound with formula C6H13NO3S.
It is included in E number "E952".
Cyclamic acid is mainly used as a catalyst in the production of paints and plastics, and as a reagent for laboratory use.
The sodium and calcium salts of cyclamic acid are used as artificial sweeteners under the name cyclamate.
References
Amines
Sulfamates
Cyclohexyl compounds | Cyclamic acid | [
"Chemistry"
] | 99 | [
"Sulfamates",
"Organic compound stubs",
"Functional groups",
"Organic compounds",
"Amines",
"Bases (chemistry)",
"Organic chemistry stubs"
] |
3,111,042 | https://en.wikipedia.org/wiki/Mauve%20%28test%20suite%29 | Mauve is a project to provide a free software test suite for the Java class libraries. Mauve is developed by the members of Kaffe, GNU Classpath, GCJ, and other projects. Unlike a similar project, JUnit, Mauve is designed to run on various experimental Java virtual machines, where some features may still be missing. Because of this, Mauve does not discover the testing method by name, as JUnit does. Mauve can also be used to test the user's Java application, not just the core class library. Mauve is released under the GNU General Public License.
Example
The "Hello world" example in Mauve:
// Tags: JDK1.4

// Mauve testlets implement interfaces from the gnu.testlet package.
import gnu.testlet.TestHarness;
import gnu.testlet.Testlet;

public class HelloWorld implements Testlet {
  // Test if 3 * 2 = 6: check(actual, expected, failure message)
  public void test(TestHarness harness) {
    harness.check(3 * 2, 6, "Multiplication failed.");
  }
}
See also
Technology Compatibility Kit
External links
Mauve homepage
Extreme programming
Software testing | Mauve (test suite) | [
"Engineering"
] | 203 | [
"Software engineering",
"Software testing"
] |
3,111,057 | https://en.wikipedia.org/wiki/Sensory%20threshold | In psychophysics, sensory threshold is the weakest stimulus that an organism can sense. Unless otherwise indicated, it is usually defined as the weakest stimulus that can be detected half the time, for example, as indicated by a point on a probability curve. Methods have been developed to measure thresholds in any of the senses.
Several different sensory thresholds have been defined:
Absolute threshold: the lowest level at which a stimulus can be detected.
Recognition threshold: the level at which a stimulus can not only be detected but also recognized.
Differential threshold: the level at which an increase in a detected stimulus can be perceived.
Terminal threshold: the level beyond which any increase to a stimulus no longer changes the perceived intensity.
History
The first systematic studies to determine sensory thresholds were conducted by Ernst Heinrich Weber, a physiologist and pioneer of experimental psychology at the Leipzig University. His experiments were intended to determine the absolute and difference, or differential, thresholds. Weber was able to define absolute and difference threshold statistically, which led to the establishment of Weber's Law and the concept of just noticeable difference to describe threshold perception of stimuli.
Following Weber's work, Gustav Fechner, a pioneer of psychophysics, studied the relationship between the physical intensity of a stimulus and the psychologically perceived intensity of the stimulus. Comparing the measured intensity of sound waves with the perceived loudness, Fechner concluded that the intensity of a stimulus changes in proportion to the logarithm of the stimulus intensity. His findings would lead to the creation of the decibel scale.
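Fechner's logarithmic relation is exactly what the decibel scale encodes: equal ratios of physical intensity correspond to equal steps of level. As a standard worked example (illustrative, not from Fechner's own data):

$$L \;=\; 10\,\log_{10}\!\left(\frac{I}{I_0}\right)\ \mathrm{dB}, \qquad I = 100\,I_0 \;\Rightarrow\; L = 10\,\log_{10}(100) = 20\ \mathrm{dB}$$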
Measuring and testing sensory thresholds
Defining and measuring sensory thresholds requires setting the sensitivity limit such that the perception observations lead to the absolute threshold. The level of sensitivity is usually assumed to be constant in determining the threshold limit. There are three common methods used to determine sensory thresholds:
Method of Limits:
In the first step, the subject is stimulated with strong, easily detectable stimuli that are decreased stepwise (the descending sequence) until the subject can no longer detect the stimulus. Then another stimulation sequence, the ascending sequence, is applied, in which the stimulus intensity increases from subthreshold to easily detectable. Both sequences are repeated several times, yielding several momentary threshold values. Next, mean values are calculated for the ascending and descending sequences separately; the mean will be lower for descending sequences. In audiometry, the difference between the ascending and descending means has diagnostic importance. Finally, the average of these means gives the absolute threshold.
Method of constant stimuli:
Stimuli of varying intensities are presented to the subject in random order, ranging from surely subthreshold to surely supra-threshold. To construct the series, the approximate threshold is first judged by a simpler method (e.g. the method of limits). The random sequences are presented several times, and the stimulus strength perceived in more than half of the presentations is taken as the threshold.
Adaptive method:
Stimulation starts with a surely supra-threshold stimulus; then further stimuli are given with an intensity decreased in previously-defined steps. The series is stopped when the stimulus strength becomes subthreshold (this is called the turn phenomenon). Then the step is halved, and the stimulation is repeated, but now with increasing intensities, until the subject perceives the stimulus again. This process is repeated several times, until the step size reaches the preset minimal value. With this method, the threshold value can be delineated very accurately. The initial size of the step can be selected depending on the expected accuracy.
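The adaptive procedure above can be sketched as a simple staircase loop. This is an illustrative simulation only: the subject is modelled by a known threshold, which a real experiment would not have, and all names and values are hypothetical.

```java
// Adaptive ("staircase") threshold estimation: descend from a clearly
// supra-threshold stimulus; at each turn (when perception flips), reverse
// direction and halve the step; stop when the step falls below minStep.
public class Staircase {
    // Stand-in for the subject's response; real experiments ask a person.
    static boolean perceives(double stimulus, double trueThreshold) {
        return stimulus >= trueThreshold;
    }

    static double estimateThreshold(double start, double step, double minStep,
                                    double trueThreshold) {
        double stimulus = start;
        boolean descending = true;
        while (step >= minStep) {
            stimulus += descending ? -step : step;
            boolean detected = perceives(stimulus, trueThreshold);
            // A "turn": the stimulus became undetectable while descending,
            // or detectable again while ascending.
            boolean turn = descending ? !detected : detected;
            if (turn) {
                descending = !descending;
                step /= 2;   // halve the step after each turn
            }
        }
        return stimulus;
    }

    public static void main(String[] args) {
        // Simulated subject with threshold 4.3, starting at intensity 10
        System.out.println(Staircase.estimateThreshold(10.0, 2.0, 0.1, 4.3));
        // prints: 4.25 (close to the simulated threshold)
    }
}
```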
In measuring sensory threshold, noise must be accounted for. Signal noise is defined as the presence of extra, unwanted energy in the observational system which obscures the information of interest. As the measurements come closer to the absolute threshold, the variability of the noise increases, causing the threshold to be obscured. Different types of internal and external noise include excess stimuli, nervous system over- or under-stimulation, and conditions that falsely stimulate nerves in the absence of external stimuli.
A universal standard for the absolute threshold is difficult to define because of the variability of the measurements. Although sensation occurs in the physical nerves, it may not be consistent: age or nerve damage can affect sensation, and psychological factors can affect perception of physical sensation. Mental state, memory, mental illness, fatigue, and other factors can all alter perception.
Aviation use
When related to motion in any of the possible six degrees of freedom (6-DoF), the existence of sensory thresholds is why it is essential that aircraft have blind-flying instruments. Sustained flight in cloud is not possible by "seat-of-the-pants" cues alone, since errors build up due to aircraft movements below the pilot's sensory threshold, ultimately leading to loss of control.
In flight simulators with motion platforms, motion sensory thresholds are utilised in the technique known as "acceleration-onset cueing". Here, after the motion platform has made the initial acceleration that is sensed by the simulator crew, the platform is reset to approximately its neutral position by being moved at a rate below the sensory threshold, and is then ready to respond to the next acceleration demanded by the simulator computer.
See also
Detection theory
Odor detection threshold
Perception
Sensory analysis
References
Perception
Psychophysics | Sensory threshold | [
"Physics"
] | 1,101 | [
"Psychophysics",
"Applied and interdisciplinary physics"
] |
3,111,290 | https://en.wikipedia.org/wiki/Carnassial | Carnassials are paired upper and lower teeth modified in such a way as to allow enlarged and often self-sharpening edges to pass by each other in a shearing manner. This adaptation is found in carnivorans, where the carnassials are the modified fourth upper premolar and the first lower molar. These teeth are also referred to as sectorial teeth.
Taxonomy
The name carnivoran is applied to a member of the order Carnivora. Carnivorans possess a common arrangement of teeth called carnassials, in which the first lower molar and the last upper premolar possess blade-like enamel crowns that act similar to a pair of shears for cutting meat. This dental arrangement has been modified by adaptation over the past 60 million years for diets composed of meat, for crushing vegetation, or for the loss of the carnassial function altogether found in pinnipeds.
Carnassial dentition
Carnassial teeth are modified molars (and in the case of carnivorans premolars) which are adapted to allow for the shearing (rather than tearing) of flesh to permit the more efficient consumption of meat. These modifications are not limited to the members of the order Carnivora, but are seen in a number of different mammal groups. Not all carnivorous mammals, however, developed carnassial teeth. Mesonychids, for example, had no carnassial adaptations, and as a result, the blunt, rounded cusps on its molars had a much more difficult time reducing meat. Likewise, neither members of Oxyclaenidae nor Arctocyonidae had carnassial teeth.
On the other hand, carnivorous marsupials have teeth of a carnassial form. Both the living Tasmanian devil (Sarcophilus harrisii) and the recently extinct Tasmanian wolf (Thylacinus cynocephalus) possessed modified molars to allow for shearing, although the Tasmanian wolf, the larger of the two, had dentition more similar to the dog. The Pleistocene marsupial lion (Thylacoleo carnifex) had massive carnassial molars. A recent study concludes that these teeth produced the strongest bite of any known land mammal in history. Moreover, these carnassial molars appear to have been used, unlike in any other known mammal, to inflict the killing blow to the prey by severing the spinal cord, crushing the windpipe or severing a major artery. Like these true marsupials, the closely related borhyaenids of South America had three carnassial teeth involving the first three upper molars (M1-M3) and the second through fourth lower molars (m2-m4). In the borhyaenids the upper carnassials appear to have been rotated medially around the anterior-posterior axis of the tooth row in order to maintain tight occlusional contact between the upper and lower shearing teeth.
Creodonts had two or three pairs of carnassial teeth, but only one pair performed the cutting function: either M1/m2 or M2/m3, depending on the family. In Oxyaenidae, it is M1 and m2 that form the carnassials. Among the hyaenodontids it is M2 and m3. Unlike most modern carnivorans, in which the carnassials are the sole shearing teeth, in the creodonts other molars had a subordinate shearing function. The fact that the two lineages developed carnassials from different types of teeth has been used as evidence against the validity of Creodonta as a clade.
Modern carnivorous bats generally lack true carnassial teeth, but the extinct Necromantis had particularly convergent teeth, in particular M1 and M2, which bore expanded heels and broad stylar shelves. These were particularly suited for crushing over an exclusively slicing action.
Though not superficially similar, the triconodont teeth of some early mammals such as eutriconodonts are thought to have had a function similar to those of carnassials, sharing a similar shearing function. Eutriconodonts possess several speciations towards animalivory, and the larger forms such as Repenomamus, Gobiconodon and Jugulator probably fed on vertebrate prey. Similarly the "tooth lips" of clevosaurid sphenodontians such as Clevosaurus are described as "carnassial-like". A lineage of pycnodont fish also developed carnassials eerily convergent with those of modern carnivorans.
In modern carnivorans the carnassial teeth pairs are found on either side of the jaw and are composed of the fourth upper pre-molar and the first lower molar (P4/m1). The location these carnassial pairs is determined primarily by the masseter muscle. In this position, the carnassial teeth benefit from most of the force generated by this mastication muscle, allowing for efficient shearing and cutting of flesh, tendon and muscle.
The scissor-like motion is created by the movement between the carnassial pair when the jaw occludes. The inside of the fourth upper pre-molar closely passes by the outer surface of the first lower molar, thus allowing the sharp cusps of the carnassial teeth to slice through meat.
The length and size of the carnassial teeth vary between species, taking into account factors such as:
the size of the carnivorous animal
the extent to which the diet is carnivorous
the size of the chunk of meat that can be swallowed.
Evolution of carnassial teeth
The fossil record indicates the presence of carnassial teeth 50 million years ago, implying that Carnivora family members descend from a common ancestor.
The shape and size of sectorial teeth of different carnivorous animals vary depending on diet, illustrated by the comparisons of bear (Ursus) carnassials with those of a leopard (Panthera). Bears, being omnivores, have a flattened, more blunt carnassial pair than leopards. This reflects the bear's diet, as the flattened carnassials are useful both in slicing meat and grinding up vegetation, whereas the leopard's sharp carnassial pairs are more adapted for its hypercarnivorous diet. During the Late Pleistocene – early Holocene a now extinct hypercarnivorous wolf ecomorph existed that was similar in size to a large extant gray wolf but with a shorter, broader palate and with large carnassial teeth relative to its overall skull size. This adaptation allowed the megafaunal wolf to predate and scavenge on Pleistocene megafauna.
Cheetahs, Scimitar-toothed cats and Barbourofelis, have relatively elongated blade-like shape carnassials, with reduced lingual cusps. This may have been an adaptation to consume quickly the flesh of a prey before larger and stronger predators arrive to take it from them, either from other species or from their own group.
Disease
Wear and cracking of the carnassial teeth in a wild carnivore (e.g. a wolf or lion) may result in the death of the individual due to starvation.
Carnassial teeth infections are common in domestic dogs. They can present as abscesses (a large swollen lump under the eye). Extraction or root canal procedure (with or without a crown) of the tooth is necessary to ensure that no further complications occur, as well as pain medication and antibiotics.
References
Types of teeth
Mammal anatomy
Carnivory
Articles containing video clips | Carnassial | [
"Biology"
] | 1,612 | [
"Eating behaviors",
"Carnivory"
] |
3,111,337 | https://en.wikipedia.org/wiki/Call%20super | Call super is a code smell or anti-pattern of some object-oriented programming languages. Call super is a design pattern in which a particular class stipulates that in a derived subclass, the user is required to override a method and call back the overridden function itself at a particular point. The overridden method may be intentionally incomplete, and reliant on the overriding method to augment its functionality in a prescribed manner. However, the fact that the language itself may not be able to enforce all conditions prescribed on this call is what makes this an anti-pattern.
Description
In object-oriented programming, users can inherit the properties and behaviour of a superclass in subclasses. A subclass can override methods of its superclass, substituting its own implementation of the method for the superclass's implementation. Sometimes the overriding method will completely replace the corresponding functionality in the superclass, while in other cases the superclass's method must still be called from the overriding method. Therefore, most programming languages require that an overriding method must explicitly call the overridden method on the superclass for it to be executed.
The call super anti-pattern relies on the users of an interface or framework to derive a subclass from a particular class, override a certain method and require the overridden method to call the original method from the overriding method:
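A minimal sketch of this convention in Python (class and method names are invented for illustration; calls are recorded in a list so the sequence is visible):

```python
class Base:
    def save(self, log):
        # The framework's setup work lives here; the documented contract
        # says every override must call back into this method.
        log.append("base-setup")

class Derived(Base):
    def save(self, log):
        log.append("derived-work")
        # The required call to the parent -- easy to forget, and the
        # language will not complain if it is omitted.
        super().save(log)

log = []
Derived().save(log)
# log is now ["derived-work", "base-setup"]
```

Nothing stops a subclass author from dropping the `super().save(log)` line, which is exactly why this convention is considered an anti-pattern.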
This is often required, since the superclass must perform some setup tasks for the class or framework to work correctly, or since the superclass's main task (which is performed by this method) is only augmented by the subclass.
The anti-pattern is the requirement of calling the parent. In much real code the overriding method may legitimately want the superclass's functionality, usually because it is only augmenting it. The anti-pattern is present when the subclass is required to call the parent method even where it fully replaces the functionality.
A better approach to solve these issues is instead to use the template method pattern, where the superclass includes a purely abstract method that must be implemented by the subclasses and have the original method call that method:
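The template method alternative can be sketched the same way (names again invented): the base class keeps control of the call sequence, so a subclass cannot forget the setup step.

```python
class Base:
    def save(self, log):
        log.append("base-setup")   # guaranteed to run first
        self.do_save(log)          # hook supplied by the subclass

    def do_save(self, log):
        raise NotImplementedError

class Derived(Base):
    def do_save(self, log):
        log.append("derived-work")

log = []
Derived().save(log)
# log is now ["base-setup", "derived-work"]
```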
Language variation
The appearance of this anti-pattern in programs usually arises because few programming languages provide a feature to contractually ensure that a super method is called from a derived class. One language that does have this feature, in a quite radical fashion, is BETA. The feature exists in a limited way in, for instance, Java and C++, where a subclass constructor always calls the superclass constructor.
Languages that support before and after methods, such as Common Lisp (specifically the Common Lisp Object System), provide a different way to avoid this anti-pattern. The subclass's programmer can, instead of overriding the superclass's method, supply an additional method which will be executed before or after the superclass's method. Also, the superclass's programmer can specify before, after, and around methods that are guaranteed to be executed in addition to the subclass's actions.
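A rough Python approximation of CLOS-style before/after methods (the hook names are invented; real CLOS method combination is richer than this sketch):

```python
class Report:
    def generate(self):
        # The base class guarantees the hooks run around the primary
        # method, loosely mimicking CLOS :before/:after combination.
        self.before_generate()
        result = self.do_generate()
        self.after_generate()
        return result

    def before_generate(self): pass
    def do_generate(self): return "base report"
    def after_generate(self): pass

class AuditedReport(Report):
    def __init__(self):
        self.log = []

    def before_generate(self):
        self.log.append("before")

    def after_generate(self):
        self.log.append("after")

r = AuditedReport()
result = r.generate()
# r.log is now ["before", "after"]; result is "base report"
```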
Example
Suppose there is a class for generating a report about the inventory of a video rental store. Each particular store has a different way of tabulating the videos currently available, but the algorithm for generating the final report is the same for all stores. A framework that uses the call super anti-pattern may provide the following abstract class (in C#):
abstract class ReportGenerator
{
public virtual Report CreateReport()
{
// Generate the general report object
// ...
return new Report(...);
}
}
A user of the class is expected to implement a subclass like this:
class ConcreteReportGenerator : ReportGenerator
{
public override Report CreateReport()
{
// Tabulate data in the store-specific way
// ...
// Design of this class requires the parent CreateReport() function to be called at the
// end of the overridden function. But note this line could easily be left out, or the
// returned report could be further modified after the call, violating the class design
// and possibly also the company-wide report format.
return base.CreateReport();
}
}
A preferable interface looks like this:
abstract class ReportGenerator
{
public Report CreateReport()
{
Tabulate();
// Generate the general report object
// ...
return new Report(...);
}
protected abstract void Tabulate();
}
An implementation would override this class like this:
class ConcreteReportGenerator : ReportGenerator
{
protected override void Tabulate()
{
// Tabulate data in the store-specific way
// ...
}
}
References
Anti-patterns
Object-oriented programming | Call super | [
"Technology"
] | 998 | [
"Anti-patterns"
] |
3,111,575 | https://en.wikipedia.org/wiki/Drainage%20density | Drainage density is a quantity used to describe physical parameters of a drainage basin. First described by Robert E. Horton, drainage density is defined as the total length of channel in a drainage basin divided by the total area, represented by the following equation:
D_d = L / A
where L is the total length of channels in the basin and A is the total basin area. The quantity represents the average length of channel per unit area of catchment and has units of length per area (for example km/km²), which is often reduced to an inverse length (km⁻¹).
Drainage density depends upon both climate and physical characteristics of the drainage basin. Soil permeability (infiltration difficulty) and underlying rock type affect the runoff in a watershed; impermeable ground or exposed bedrock will lead to an increase in surface water runoff and therefore to more frequent streams. Rugged regions or those with high relief will also have a higher drainage density than other drainage basins if the other characteristics of the basin are the same.
When determining the total length of streams in a basin, both perennial and ephemeral streams should be considered. If a drainage basin contained only ephemeral streams, a drainage density calculated from perennial streams alone would be zero by the equation above. Ignoring ephemeral streams in the calculation does not capture the behavior of the basin during flood events and is therefore not completely representative of the drainage characteristics of the basin.
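As a worked example (the channel lengths and basin area are invented), a calculation that includes both stream types:

```python
def drainage_density(channel_lengths_km, basin_area_km2):
    """Total channel length divided by total basin area (units: km^-1)."""
    return sum(channel_lengths_km) / basin_area_km2

perennial = [3.2, 1.8, 2.5]   # km of perennial channel
ephemeral = [0.9, 1.1]        # km of ephemeral channel

dd_all = drainage_density(perennial + ephemeral, basin_area_km2=4.75)
# 9.5 km / 4.75 km^2 = 2.0 km^-1

# Counting only perennial streams understates the density:
dd_perennial_only = drainage_density(perennial, basin_area_km2=4.75)
```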
Drainage density is indicative of the infiltration and permeability of a drainage basin, and relates to the shape of its hydrograph.
High drainage densities also mean a high bifurcation ratio.
Inverse of drainage density as a physical quantity
Drainage density can be used to approximate the average length of overland flow in a catchment. Horton (1945) used the following equation to describe the average length of overland flow as a function of drainage density:
l_o = 1 / (2 D_d)
where l_o is the length of overland flow, with units of length, and D_d is the drainage density of the catchment, expressed in units of inverse length.
Considering the geometry of channels on the hillslope, Horton also proposed the modified equation:
l_o = 1 / (2 D_d √(1 − (s_c / s_g)²))
where s_c is the channel slope and s_g is the average slope of the ground in the area.
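Both of Horton's relations are easy to evaluate numerically. In this sketch the drainage density and slope values are invented, and the slope-corrected form is the one assumed here:

```python
import math

def overland_flow_length(dd, channel_slope=None, ground_slope=None):
    """Horton's average overland-flow length.

    Plain form: 1 / (2 * dd).  With slopes given, the slope-corrected
    form 1 / (2 * dd * sqrt(1 - (s_c / s_g)**2)) is used (form assumed
    here from Horton's 1945 discussion).
    """
    if channel_slope is None:
        return 1.0 / (2.0 * dd)
    ratio = channel_slope / ground_slope
    return 1.0 / (2.0 * dd * math.sqrt(1.0 - ratio ** 2))

lo = overland_flow_length(2.0)  # dd = 2 km^-1 -> 0.25 km
lo_corrected = overland_flow_length(2.0, channel_slope=0.02, ground_slope=0.10)
# The slope correction lengthens the mean flow path slightly.
```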
Elementary components of drainage basins
A drainage basin can be defined by three elementary quantities: channels, the hillslope area associated with those channels, and the source areas. The channels are the well-defined segments that efficiently carry water through the catchment. Labeling these features as “channels” rather than “streams” indicates that there need not be a continuous flow of water to capture the behavior of this region as a conduit of water. According to Arthur Strahler’s stream ordering system, the channels are not defined to be any single order or range of orders. Channels of lower orders combine to form higher order channels. The associated hillslope areas are the hillslopes that slope directly into the channels. Precipitation that enters the system on the hillslopes areas and is not lost to infiltration or evapotranspiration enters the channels. The source areas are concave regions of hillslope that are associated with a single channel. Precipitation entering a source area that is not lost to infiltration or evapotranspiration flows through the source area and enters the channel at the channel’s head. Source areas and the hillslope areas associated with channels are differentiated by source areas draining through the channel head, while the associated hillslope areas drain into the rest of the stream. According to Strahler’s stream ordering system, all source areas drain into a primary channel, by the definition of a primary channel.
Bras et al. (1991) describe the conditions that are necessary for channel formation. Channel formation is a concept intimately tied to the formation and evolution of a drainage system and influences the drainage density of a catchment. The relation they propose determines the behavior of a given hillslope in response to a small perturbation. They propose an equation relating the source area, the source slope, and the sediment flux through the source area:
Where F is the sediment flux, S is the slope of the source area, and a is the source area. The right-hand side of this relation determines channel stability or instability. If the right-hand side of the equation is greater than zero, the hillslope is stable, and small perturbations such as minor erosive events do not develop into channels. Conversely, if the right-hand side of the equation is less than zero, Bras et al. determine the hillslope to be unstable, and small erosive structures, such as rills, will tend to grow into a channel and increase the drainage density of the basin. In this sense, "unstable" does not mean that the gradient of the hillslope exceeds the angle of repose and is therefore susceptible to mass wasting, but rather that fluvial erosive processes such as sheet flow or channel flow tend to incise and erode to form a single channel. The characteristics of the source area, or potential source area, therefore influence the drainage density and evolution of a drainage basin.
Relation to water balance
Drainage density is tied to the water balance equation:
ΔS = R − ET + G_i − G_o − G_s − Q_w
where ΔS is the change in reservoir storage, R is precipitation, ET is evapotranspiration, G_i and G_o are the respective groundwater fluxes into and out of the basin, G_s is the groundwater discharge into streams, and Q_w is groundwater discharge from the basin through wells. Drainage density relates to the storage and runoff terms, since it measures the efficiency with which water is carried over the landscape. Water is carried through channels much faster than over hillslopes, where saturated overland flow is slower because it is thinned out and obstructed by vegetation or pores in the ground. Consequently, a basin with a relatively higher drainage density will be drained more efficiently than a lower-density one. Because of the more extensive drainage system in a higher-density basin, precipitation entering the basin will, on average, travel a shorter distance over the slower hillslopes before reaching the faster-flowing channels, and will exit the basin through the channels in less time. Conversely, precipitation entering a lower drainage density basin will take longer to exit because it travels over the slower hillslopes for longer.
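A sketch of the bookkeeping, with all fluxes in the same units (e.g. mm of water over the basin per year) and the sign convention assumed here; the numbers are invented:

```python
def storage_change(R, ET, G_in, G_out, G_stream, Q_wells):
    # Inputs add to storage; evapotranspiration, groundwater outflow,
    # discharge to streams, and well pumping remove from it.
    return R - ET + G_in - G_out - G_stream - Q_wells

dS = storage_change(R=900, ET=500, G_in=40, G_out=30, G_stream=250, Q_wells=60)
# 900 - 500 + 40 - 30 - 250 - 60 = 100
```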
In his 1963 paper on drainage density and streamflow, Charles Carlston found that baseflow into streams is inversely related to the square of the drainage density of the drainage basin:
Q_b ∝ D_d⁻²
This relation reflects the effect of drainage density on infiltration. As drainage density increases, baseflow discharge into a stream decreases for a given basin, because there is less infiltration to contribute to baseflow. More of the water entering the drainage basin during a precipitation event exits quickly through the streams instead of infiltrating and contributing to baseflow discharge.
Gregory and Walling (1968) found that the average discharge through a drainage basin is proportional to the square of its drainage density:
Q ∝ D_d²
This relation illustrates that a higher drainage density environment transports water more efficiently through the basin. In a relatively low drainage density environment, the lower average discharge predicted by this relation results from surface runoff spending more time travelling over the hillslopes, which allows more time for infiltration to occur. The increased infiltration reduces surface runoff, in accordance with the water balance equation.
These two equations agree with each other and with the water balance equation. According to them, in a basin with high drainage density the contribution of surface runoff to stream discharge will be high, while that from baseflow will be low. Conversely, a stream in a low drainage density system will have a larger contribution from baseflow and a smaller one from overland flow.
Relation to hydrographs
The discharge through the central stream draining a catchment reflects the drainage density, which makes it a useful diagnostic for predicting the flooding behavior of a catchment following a storm event, since it is intimately tied to the hydrograph. The material that overland flow travels over is one factor that influences the speed at which water can flow out of a catchment. Water flows significantly slower over hillslopes than through the channels that form to carry water and other flowing material efficiently. Horton's interpretation of half the inverse of drainage density as the average length of overland flow implies that overland flow in a high-density environment reaches a fast-flowing channel after a shorter distance. On the hydrograph, the peak is therefore higher and occurs over a shorter interval. This more compact and higher peak is often described as "flashy".
The timing of the hydrograph in relation to the peak of the hyetograph is influenced by the drainage density. The water that enters a high-drainage watershed during a storm will reach a channel relatively fast and travel in the high-velocity channels to the outlet of the watershed in a relatively short time. Conversely, the water entering a low drainage density basin will, on average, have to travel a longer distance over the low velocity hillslope to reach the channels. As a result, the water will require more time to reach the exit of the catchment. The lag time between the peak of the hyetograph and the hydrograph is then inversely related to drainage density; as drainage density increases, water is more efficiently drained from the basin and the lag time decreases.
Another impact drainage density has on the hydrograph is a steeper falling limb following the storm event, through its effect on both overland flow and baseflow. The falling limb occurs after the peak of the hydrograph curve, when overland flow is decreasing back to ambient levels. In higher-density systems the overland flow reaches the channels quicker, resulting in a narrower spread in the falling limb. Baseflow is the other contributor to the hydrograph. The peak of baseflow to the channels occurs after the quick-flow peak because groundwater flow is much slower than quick-flow, so the baseflow peak influences the shape of the falling limb. According to the proportionality put forth by Gregory and Walling, as drainage density increases, the contribution of baseflow to the falling limb of the hydrograph diminishes. During a storm event in a high drainage density basin, little water infiltrates into the ground, because water spends less time flowing over the surface of the catchment before exiting through the central channel. Because little water enters the ground as infiltration, baseflow contributes only a small part to the falling limb, which is thus quite steep. Conversely, a low drainage density system will have a shallower falling limb: according to Gregory and Walling's relation, the decrease in drainage density results in an increased baseflow contribution to the channels and a more gradual decline in the hydrograph.
Formula for drainage density
Montgomery and Dietrich (1989)
Montgomery and Dietrich (1989) determined the following equation for drainage density by observing drainage basins in the Tennessee Valley, California:
Where ws is the mean source width, ρw is the density of water, R0 is the average precipitation rate, W* is the width of the channel head, ρs is the saturated bulk density of the soil, Kz is the vertical saturated hydraulic conductivity, θ is the slope at the channel head, and φ is the soil angle of internal friction.
R0, the average precipitation term, shows the dependence of drainage density on climate. With all other factors held constant, an increase in precipitation in the drainage basin results in an increase in drainage density; a decrease in precipitation, such as in an arid environment, results in a lower drainage density. The equation also shows the dependence on the physical characteristics and lithology of the drainage basin. Materials with a low hydraulic conductivity, such as clay or solid rock, result in a higher drainage density system: because of the low hydraulic conductivity, little water is lost to infiltration, and that water exits the system as runoff that can contribute to erosion. In a basin with a higher vertical hydraulic conductivity, water infiltrates into the ground more effectively and does not contribute to saturated overland flow erosion, resulting in a less developed channel system and therefore a lower drainage density.
Relation to the mean annual flood
Charles Carlston (1963) determined an equation expressing the mean annual flood runoff, Q2.33, for a given drainage basin as a function of drainage density. Plotting data from 15 drainage basins, Carlston found a correlation between the two quantities of the form:
Q2.33 ∝ D_d²
where Q is in units of cubic feet per second per square mile and D_d is in units of inverse miles. From this relation it is concluded that a drainage basin will adjust itself through erosion until the equation is satisfied.
Effect of vegetation on drainage density
The presence of vegetation in a drainage basin has multiple effects on the drainage density: vegetation prevents the landslides in the source areas of a basin that would result in channel formation, and it narrows the range of drainage density values regardless of soil composition.
Vegetation stabilizes the unstable source areas in a basin and prevents channel initiation. Plants stabilize the hillslopes they grow on, protecting them from physical erosion processes such as rain splash, dry ravel, and freeze–thaw cycles. While there is significant variation between species, plant roots grow in underground networks that hold the soil in place, making it less prone to erosion by those physical processes. Hillslope diffusion has been found to decrease exponentially with vegetation cover. By stabilizing the hillslopes in the source areas of a basin, vegetation makes channel initiation less likely: the erosional processes that could lead to channel initiation are prevented, and the increased soil strength also protects against surface runoff erosion, which hinders channel evolution once it has begun.
At the basin scale, there are fewer channels in a vegetated basin and the drainage density is lower than in an unvegetated system. The effect of vegetation in decreasing the drainage density is not unbounded, though: at high vegetative coverage, the effect of further increasing the coverage diminishes. This imposes an upper limit on the total reduction in drainage density that vegetation can produce.
Vegetation also narrows the range of drainage density values for basins of various soil compositions. Unvegetated basins can have a large range of drainage densities, from low to high, since drainage density is related to the ease with which channels can form. According to Montgomery and Dietrich's equation, drainage density is a function of vertical hydraulic conductivity: coarse-grained sediment like sand has a higher hydraulic conductivity and is predicted by the equation to form a relatively lower drainage density system than one formed in finer silt with a lower hydraulic conductivity.
Forest fires play an indirect role in a basin’s drainage density. Forest fires, both natural and unnatural, destroy some or all of the existing vegetation, which removes the stability that the plants and their roots provide. Newly destabilized hillslope in the basin is then susceptible to channel formation processes, and drainage density of the basin may increase until the vegetation grows back to the previous state. The type of plants and the associated depth and density of the plant roots determine how strongly the soil is held in place as well as the intensity of the forest fire in killing and removing the vegetation. Computer simulation experiments have validated that drainage density will be higher in regions that have more frequent forest fires.
Effect of climate change on drainage density
Drainage density may also be influenced by climate change. Langbein and Schumm (1958) proposed an equation for the rate of sediment discharge through a catchment as a function of precipitation rate:
P = a R^α / (1 + b R^γ)
where P is sediment yield, R is the average effective rainfall, α ≈ 2.3, γ ≈ 3.33, and a and b vary depending on units. The graph of this equation has a maximum between 10 and 14 inches of effective rainfall and declines sharply on either side of the peak. At lower effective rainfalls, sediment discharge is lower because there is less rainfall to erode the hillslope. At effective rainfalls greater than 10–14 inches, the decrease in sediment yield is interpreted as the result of increasing vegetation cover: increasing precipitation supports denser vegetation coverage, which prevents overland flow and other forms of physical erosion. This finding is consistent with Istanbulluoglu and Bras' findings on the effect of vegetation on erosion and channel formation.
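The Langbein–Schumm functional form P = aR^α / (1 + bR^γ) can be explored numerically. The constants below are invented for illustration (b is chosen so the peak lands in the 10–14 inch range the text describes), not the published fit:

```python
def sediment_yield(R, a=1.0, b=0.0007, alpha=2.3, gamma=3.33):
    # Langbein-Schumm functional form; a and b are illustrative only.
    return a * R ** alpha / (1.0 + b * R ** gamma)

# Scan effective rainfall (inches) and locate the peak of the curve.
rainfalls = [r / 10.0 for r in range(10, 401)]  # 1.0 .. 40.0 inches
peak_R = max(rainfalls, key=sediment_yield)
# With these constants the maximum falls between 10 and 14 inches,
# beyond which denser vegetation cover suppresses sediment yield.
```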
The Caineville Badlands
The badlands of Caineville, Utah are often cited as a region of extremely high drainage density. The region features steep slopes, high relief, an arid climate, and a complete absence of vegetation. Because the hillslopes are often steeper than the angle of repose, the dominant erosional process in the Caineville badlands is mass wasting; there is no vegetation to provide stability to the slopes, increase the angle of repose, and prevent mass wasting. The regions below the angle of repose, however, are still generally steep, and hillslope diffusion, according to the following relation, remains a significant source of erosion:
∂z/∂t = K_s ∂²z/∂x²
where K_s is a coefficient of diffusivity of the hillslope, z is the elevation of the hillslope, and x is horizontal distance.
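A minimal explicit finite-difference sketch of this diffusion relation (grid spacing, diffusivity, and timestep are invented; the explicit scheme is stable when Ks·dt/dx² ≤ 1/2):

```python
def diffuse(z, ks, dt, dx, steps):
    """Step dz/dt = ks * d2z/dx2 forward in time; endpoints held fixed."""
    z = list(z)
    for _ in range(steps):
        curv = [0.0] * len(z)
        for i in range(1, len(z) - 1):
            curv[i] = (z[i - 1] - 2.0 * z[i] + z[i + 1]) / dx ** 2
        for i in range(1, len(z) - 1):
            z[i] += ks * dt * curv[i]
    return z

# A sharp bump in elevation relaxes toward a smoother profile.
profile = [0.0, 0.0, 1.0, 0.0, 0.0]
smoothed = diffuse(profile, ks=0.01, dt=1.0, dx=1.0, steps=50)
# The peak lowers and material spreads symmetrically to its neighbors.
```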
The range of drainage densities in the Caineville Badlands illustrates the complicated nature of drainage density in low-precipitation environments. In a study of the region, Alan Howard (1996) found that increasing relief did not have a constant effect on the drainage density across basins. For regions of relatively low relief, drainage density and relief are positively correlated. This holds until a threshold is reached at a higher relief ratio, beyond which a further increase in the relief ratio is accompanied by a decrease in drainage density. Howard interprets this as a result of an increase in the critical source area needed to support a channel: at higher slopes, erosion is faster and is funneled more efficiently through fewer channels, and the smaller number of channels results in a smaller drainage density for the basin.
A qualitative topographic map of a section of the Caineville Badlands shows the extensive drainage network in this arid environment. In terms of Montgomery and Dietrich's elementary parts of a drainage basin, the source area for each of the channels is very small, resulting in a large number of channels forming. Imagery of the Caineville Badlands likewise displays the lack of vegetation and the numerous channels. The Caineville Badlands lie in an arid environment, receiving an average of 125 mm of precipitation per year. This low precipitation contrasts with Montgomery and Dietrich's equation for drainage density, which predicts that drainage density should be low where rainfall is low. The behavior is more consistent with Langbein and Schumm's expression for erosion rate as a function of rainfall: according to that equation, erosion increases with precipitation up to the point where the precipitation can support stabilizing vegetation. The absence of vegetation in the Caineville Badlands implies that the rainfall rate of this region is below the critical amount at which vegetation can be supported.
References
External links
Drainage Basin at the Learning Channel
Geomorphology
Density
Rivers
Hydrology | Drainage density | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering",
"Environmental_science"
] | 4,496 | [
"Hydrology",
"Physical quantities",
"Quantity",
"Mass",
"Density",
"Environmental engineering",
"Wikipedia categories named after physical quantities",
"Matter"
] |
3,111,681 | https://en.wikipedia.org/wiki/Theta%20Pegasi | θ Pegasi, Latinized as Theta Pegasi, is a single star in the equatorial constellation of Pegasus, lying about 7.5 degrees southwest of Enif. It has the traditional name Biham , and the Flamsteed designation 26 Pegasi. This object is visible to the naked eye as a white-hued point of light with an apparent visual magnitude of +3.52. The system is located 92 light years from the Sun based on parallax, but is drifting closer with a radial velocity of −8 km/s.
This object is an A-type main-sequence star with a stellar classification of A2V. It is 448 million years old with a high rate of spin, showing a projected rotational velocity of 136 km/s. This star has 2.09 times the mass of the Sun and 2.6 times the Sun's radius. It is radiating 25 times the luminosity of the Sun from its outer envelope at an effective temperature of 7,951 K. The star appears to display a slight infrared excess.
θ Pegasi was suspected of being a binary star due to an acceleration detected by Hipparcos. In 2021, a low-mass companion star associated with θ Pegasi was discovered. It is a red dwarf with a spectral type of M4 to M5.5 and a luminosity of 0.5% that of the Sun. Its orbit around the primary is estimated to be moderately eccentric, at 0.54, with a semimajor axis of 6.55 au.
Nomenclature
θ Pegasi (Latinised to Theta Pegasi) is the star's Bayer designation.
It bore the traditional name Biham or Baham, from the Arabic phrase sa'd al-biham, "Lucky Stars of the Young Beasts". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Biham for this star on 21 August 2016 and it is now so entered in the IAU Catalog of Star Names.
In Chinese, the asterism meaning Rooftop consists of Theta Pegasi, Alpha Aquarii and Epsilon Pegasi; the Chinese name for Theta Pegasi itself derives from its membership in this asterism.
References
A-type main-sequence stars
M-type main-sequence stars
Binary stars
Pegasus (constellation)
Pegasi, Theta
Durchmusterung objects
Pegasi, 26
210418
109427
8450
Biham | Theta Pegasi | [
"Astronomy"
] | 517 | [
"Pegasus (constellation)",
"Constellations"
] |
3,111,712 | https://en.wikipedia.org/wiki/Sigma%20Librae | Sigma Librae (σ Librae, abbreviated Sigma Lib, σ Lib) is a binary star in the constellation of Libra. The apparent visual magnitude is +3.29, making it visible to the naked eye. Based upon parallax measurements, this system is at a distance of roughly from the Sun, with a 2% margin of error. At that distance, the visual magnitude is diminished by 0.20 ± 0.17 from extinction caused by intervening gas and dust.
The two components are designated Sigma Librae A (officially named Brachium, the traditional name for the system) and B.
Nomenclature
σ Librae (Latinised to Sigma Librae) is the system's current Bayer designation (the star originally bore the designation Gamma Scorpii and did not receive its current designation until the new designation was agreed upon by Commission 3 of the International Astronomical Union (IAU) on July 31, 1930.) The designations of the two components as Sigma Librae A and B derives from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
It bore the traditional Latin names Brachium (arm) and Cornu (horn), and the non-unique minor Arabic names Zuben el Genubi (southern claw) (shared with Alpha Librae); Zuben Hakrabi (shared with Gamma Librae and Eta Librae, also rendered as Zuban Alakrab), and Ankaa (shared with Alpha Phoenicis). In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Brachium for the primary component Sigma Librae A on 5 September 2017. Ankaa had previously been approved as the name for Alpha Phoenicis on 29 July 2016. Both are now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Executions (asterism), refers to an asterism consisting of σ Librae, 50 Hydrae, 3 Librae, 4 Librae and 12 Librae. Consequently, the Chinese name for σ Librae is (, ).
Properties
The primary, Sigma Librae A, has a spectral class M2.5 III, which places it in the red giant stage of its evolution. This is a semi-regular variable star with a single pulsation period of 20 days. It shows small amplitude variations in magnitude of 0.10–0.15 on time scales as brief as 15–20 minutes, with cycles of repetition over intervals of 2.5–3.0 hours. This form of variability indicates that the star is on the asymptotic giant branch and is generating energy through the nuclear fusion of hydrogen and helium within concentric shells surrounding an inert core of carbon and oxygen.
The companion, Sigma Librae B, is of the 16th magnitude and over an arc minute away.
Notes
References
External links
Libra (constellation)
Librae, Sigma
M-type giants
Brachium
Librae, 20
Semiregular variable stars
073714
133216
5603
Durchmusterung objects | Sigma Librae | [
"Astronomy"
] | 698 | [
"Libra (constellation)",
"Constellations"
] |
3,111,753 | https://en.wikipedia.org/wiki/Theta%20Leonis | Theta Leonis, Latinized from θ Leonis, formally named Chertan, is a star in the constellation of Leo. With an apparent visual magnitude of +3.324 it is visible to the naked eye and forms one of the brighter stars in the constellation. The distance from the Sun can be directly determined from parallax measurements, yielding a value of about .
Description
This is a large star with 2.8 times the mass of the Sun and 4.3 times the Sun's radius. The spectrum matches a stellar classification of A2 V, making this a seemingly typical A-type main sequence star. However, the spectrum shows enhanced absorption lines of metals, marking this as a chemically peculiar Am star. The abundance of elements other than hydrogen and helium, what astronomers term the star's metallicity, appears around 12% higher than in the Sun. It is radiating 118 times the luminosity of the Sun from its outer atmosphere at an effective temperature of 9,480 K, giving it a white-hot glow.
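The quoted luminosity can be cross-checked against the Stefan–Boltzmann relation in solar units, L/L_sun = (R/R_sun)^2 (T/T_sun)^4. A minimal sketch of that check follows; the solar effective temperature of 5772 K is an assumed constant not stated in the article, and the rounded published figures reproduce the quoted 118 L_sun only approximately:

```python
# Stefan-Boltzmann cross-check: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4
# Radius and temperature are the article's values for Theta Leonis;
# T_SUN = 5772 K is an assumed nominal solar effective temperature.
T_SUN = 5772.0  # K

def luminosity_ratio(r_solar, t_eff):
    """Luminosity in solar units from radius (solar radii) and T_eff (K)."""
    return r_solar**2 * (t_eff / T_SUN)**4

print(round(luminosity_ratio(4.3, 9480)))  # ~135 with these rounded inputs
```

The rounded inputs give roughly 135 L_sun, the same order as (though somewhat above) the article's 118 L_sun, which is expected given the rounding in the quoted radius and temperature.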
Theta Leonis is much younger than the Sun, with an estimated age of around 550 million years. It has a moderately high rate of rotation, with a projected rotational velocity of . However, interferometric observations suggest that it is a rapidly rotating star being viewed nearly pole-on. Measurements in the infrared band show an excess of emission from the star and its surroundings, suggesting the presence of a circumstellar disk of dust. The temperature of this emission indicates the disk has an orbital radius of 36 AU.
Nomenclature
θ Leonis (Latinised to Theta Leonis) is the star's Bayer designation.
It bore the traditional names Chertan, Chort and Coxa. Chertan is derived from the Arabic 'two small ribs', originally referring to Delta Leonis and Theta Leonis; Chort from Arabic or 'small rib', and is Latin for 'hip'. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Chertan for this star.
In Chinese, (), meaning Right Wall of Supreme Palace Enclosure, refers to an asterism consisting of Theta Leonis, Beta Virginis, Sigma Leonis, Iota Leonis and Delta Leonis. Consequently, the Chinese name for Theta Leonis itself is (, .), representing (), meaning The Second Western Minister. 西次相 (Xīcìxiāng), spelled Tsze Seang by R.H. Allen, means "the Second Minister of State".
References
Leo (constellation)
Leonis, Theta
Chertan
A-type main-sequence stars
Leonis, 70
054879
4359
097633
Durchmusterung objects | Theta Leonis | [
"Astronomy"
] | 594 | [
"Leo (constellation)",
"Constellations"
] |
3,111,776 | https://en.wikipedia.org/wiki/Beta%20Piscium | Beta Piscium or β Piscium, formally named Fumalsamakah , is a blue-white hued star in the zodiac constellation of Pisces. Its apparent magnitude is 4.40, meaning it can be faintly seen with the naked eye. Based on parallax measurements taken during the Hipparcos mission, it is about 410 light-years (125 parsecs) distant from the Sun.
Nomenclature
β Piscium (Latinised to Beta Piscium) is the star's Bayer designation.
It bore the traditional name Fum al Samakah from the Arabic فم السمكة fum al-samakah "mouth of the fish" (compare Fomalhaut). In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Fumalsamakah for this star on 1 June 2018 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Thunderbolt, refers to an asterism consisting of Beta Piscium and Gamma, Theta, Iota and Omega Piscium. Consequently, the Chinese name for Beta Piscium itself is (, ).
Properties
Beta Piscium is a Be star, a special class of B-type stars with emission lines in their spectra. With a spectral type of B6Ve its mass is estimated to be about , and its radius is about . It is suspected to be a variable star. Beta Piscium is radiating 524 times the Sun's luminosity from its photosphere at an effective temperature of 15,500 K. The star has a high rate of spin, showing a projected rotational velocity of around 90 km/s. Beta Piscium does not appear to have companion stars.
References
B-type main-sequence stars
Be stars
Pisces (constellation)
Piscium, Beta
BD+03 4818
Piscium, 004
217891
113889
8773
Fumalsamakah | Beta Piscium | [
"Astronomy"
] | 438 | [
"Pisces (constellation)",
"Constellations"
] |
3,111,791 | https://en.wikipedia.org/wiki/Beta%20Canum%20Venaticorum | Beta Canum Venaticorum (β Canum Venaticorum, abbreviated Beta CVn, β CVn), also named Chara , is a G-type main-sequence star in the northern constellation of Canes Venatici. At an apparent visual magnitude of 4.25, it is the second-brightest star in the constellation. Based upon an annual parallax shift of , this star is distant from the Sun.
Along with the brighter star Cor Caroli, the pair form the "southern dog" in this constellation that represents hunting dogs.
Nomenclature
β Canum Venaticorum (Latinised to Beta Canum Venaticorum) is the star's Bayer designation.
The traditional name Chara was originally applied to the "southern dog", but it later became used specifically to refer to Beta Canum Venaticorum. Chara (χαρά) means 'joy' in Greek. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Chara for this star.
In Chinese, (), meaning Imperial Guards, refers to an asterism consisting of Beta Canum Venaticorum, Alpha Canum Venaticorum, 10 Canum Venaticorum, 6 Canum Venaticorum, 2 Canum Venaticorum, and 67 Ursae Majoris. Consequently, the Chinese name for Beta Canum Venaticorum itself is (, .)
Characteristics
Beta CVn has a stellar classification of G0 V, and so is a G-type main-sequence star. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. The spectrum of this star shows a very weak emission line of singly ionized calcium (Ca II) from the chromosphere, making it a useful reference star for a reference spectrum to compare with other stars in a similar spectral category. (The Ca-II emission lines are readily accessible and can be used to measure the level of activity in a star's chromosphere.)
Beta CVn is considered to be slightly metal-poor, which means it has a somewhat lower portion of elements heavier than helium when compared to the Sun. In terms of mass, age and evolutionary status, however, this star is very similar to the Sun. As a result, it has been called a solar analog. It is about 3% less massive than the Sun, with a radius 3% larger than the Sun's and 25% greater luminosity.
The components of this star's space velocity are = . In the past it was suggested that it may be a spectroscopic binary. However, further analysis of the data does not seem to bear that out. In addition, a 2005 search for a brown dwarf in orbit around this star failed to discover any such companion, at least down to the sensitivity limit of the instrument used.
Habitability
In 2006, astronomer Margaret Turnbull labeled Beta CVn as the top stellar system candidate to search for extraterrestrial life forms. Because of its solar-type properties, astrobiologists have listed it among the most astrobiologically interesting stars within 10 parsecs of the Sun. However, as of 2009, this star is not known to host planets.
See also
List of star systems within 25–30 light-years
References
External links
Canes Venatici
Canum Venaticorum, Beta
G-type main-sequence stars
Canum Venaticorum, Beta
Chara
Canum_Venaticorum, 08
061317
109358
4785
Durchmusterung objects
TIC objects | Beta Canum Venaticorum | [
"Astronomy"
] | 789 | [
"Canes Venatici",
"Constellations"
] |
3,111,822 | https://en.wikipedia.org/wiki/Gamma%20Gruis | Gamma Gruis or γ Gruis, formally named Aldhanab (), is a star in the southern constellation of Grus (it once belonged to the Ptolemaic constellation Piscis Austrinus). With an apparent visual magnitude of 3.0, it is the third-brightest star in Grus. Based upon parallax measurements, this star is located at a distance of roughly from the Sun.
Nomenclature
γ Gruis (Latinised to Gamma Gruis) is the system's Bayer designation.
It bore the traditional Arabic name Al Dhanab, from the Arabic الذنب al-dhanab "the tail" (of the Southern Fish) when it was still part of Piscis Austrinus with the Bayer designation κ Piscis Austrini (Kappa Piscis Austrini). In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Aldhanab for this star on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Decayed Mortar, refers to an asterism consisting of Gamma Gruis, Lambda Gruis, Gamma Piscis Austrini and 19 Piscis Austrini. Consequently, the Chinese name for Gamma Gruis itself is (, .)
Properties
Analysis of the spectrum by N. Houk in 1979 shows it to match a stellar classification of B8 III, with the luminosity class of III indicating this is a giant star that has exhausted the supply of hydrogen at its core and evolved away from the main sequence. R. O. Gray and R. F. Garrison in 1989 found a less evolved class of B8IV-Vs. The luminosity of Gamma Gruis is around 390 times that of the Sun, with a significant portion of the energy emission being in the ultraviolet. Its outer envelope has an effective temperature of 12,520 K, which gives the star a blue-white hue. Gamma Gruis is rotating relatively rapidly with a projected rotational velocity of . By way of comparison, the Sun has an azimuthal velocity along its equator of just .
Based upon analysis of data collected during the Hipparcos mission, this star may have a proper motion companion that is causing gravitational perturbation of Gamma Gruis.
See also
Traditional Chinese star names#Grus
References
B-type giants
Grus (constellation)
Gruis, Gamma
Durchmusterung objects
207971
108085
8353
Aldhanab | Gamma Gruis | [
"Astronomy"
] | 540 | [
"Grus (constellation)",
"Constellations"
] |
3,112,019 | https://en.wikipedia.org/wiki/Combined%20diesel%20or%20gas | Combined diesel or gas (CODOG) is a type of propulsion system for ships that need a maximum speed that is considerably faster than their cruise speed, particularly warships like modern frigates or corvettes.
System
For every propeller shaft there is one diesel engine for cruising speed and one geared gas turbine for high speed dashes. Both are connected to the shaft with clutches; only one system is driving the ship, in contrast to combined diesel and gas (CODAG) systems that can use the combined power output of both. The advantage of CODOG is a simpler gearing compared to CODAG, but it needs either more powerful or additional gas turbines to achieve the same maximum power output. The disadvantage of CODOG is that the fuel consumption at high speed is poor compared to CODAG.
CODOG vessels
MGB 2009, a modified Motor Gun Boat of the Royal Navy (1947), and
The two German torpedo boats Pfeil and Strahl (Vosper class, 1963-65)
The US Navy s (built 1966-1971)
The US Coast Guard s (from 1967)
s of the Royal Canadian Navy
s, and
s of the German Navy
s of the Philippine Navy
s of the Royal Australian Navy (RAN) and Royal New Zealand Navy (RNZN)
other MEKO type frigates or corvettes
s of the Royal Danish Navy
s of the South Korean Navy
s of the Swedish Navy
s of the Indian Navy
s of the Brazilian Navy
of the Bangladesh Navy
s of the Russian and Vietnamese Navies
118 WallyPower, luxury yacht
Type 052D destroyers of the Chinese Navy
Citations
Bibliography
Marine propulsion | Combined diesel or gas | [
"Engineering"
] | 322 | [
"Marine propulsion",
"Marine engineering"
] |
3,112,241 | https://en.wikipedia.org/wiki/Institut%20d%27Astrophysique%20de%20Paris | The Institut d'Astrophysique de Paris (translated: Paris Institute of Astrophysics) is a research institute in Paris, France. The Institute is part of the Sorbonne University and is associated with the CNRS Centre national de la recherche scientifique. It is located at 98bis, Boulevard Arago Il in the 14th arrondissement of Paris, adjacent to the Paris Observatory.
History
The IAP was created in 1936 by the French ministry of education under Jean Zay, initially for the purpose of processing data received from the Observatory of Haute-Provence, which was created at the same time. Construction of the building started on 6 January 1938. On 15 June 1939, Henri Mineur became the institute's first director. IAP scientists were at first located in Paris Observatory, then in the École normale supérieure de Paris, before arriving in 1944 in the current building, which was finally completed in 1952.
Current research
The IAP includes 160 researchers, engineers, technicians, and administrators and regularly welcomes many visitors and students.
The main areas of research at the IAP are:
General relativity and cosmology
Cosmological structure formation
High-energy astrophysics
Origin and evolution of galaxies
Stellar structure
Exoplanets
The IAP is one of five laboratories of AERA, the European association for research in astronomy. The laboratory is situated at the interface between two disciplines, astrophysics and theoretical physics. The International Astronomical Union has its seat at the IAP.
Directors
1936-1954 : Henri Mineur
1954-1960 : André Danjon
1960-1971 : André Lallemand
1972-1977 : Jean-Claude Pecker
1978-1989 : Jean Audouze
1990-1998 : Alain Omont
1998-2004 : Bernard Fort
2005-2013 : Laurent Vigroux
Since 2014 : Francis Bernardeau
References
External links
Official website
Scientific organizations established in 1936
1936 establishments in France
Astronomy institutes and departments
Research institutes in France
Astrophysics research institutes
French National Centre for Scientific Research
Sorbonne University
French UMR | Institut d'Astrophysique de Paris | [
"Physics",
"Astronomy"
] | 402 | [
"Astronomy organization stubs",
"Astronomy institutes and departments",
"Astronomy stubs",
"Astronomy organizations",
"Astrophysics",
"Astrophysics stubs",
"Astrophysics research institutes"
] |
3,112,392 | https://en.wikipedia.org/wiki/Nuclear%20reactor%20core | A nuclear reactor core is the portion of a nuclear reactor containing the nuclear fuel components where the nuclear reactions take place and the heat is generated. Typically, the fuel will be low-enriched uranium contained in thousands of individual fuel pins. The core also contains structural components, the means to both moderate the neutrons and control the reaction, and the means to transfer the heat from the fuel to where it is required, outside the core.
Water-moderated reactors
Inside the core of a typical pressurized water reactor or boiling water reactor are fuel rods about the diameter of a large gel-type ink pen, each about 4 m long, which are grouped by the hundreds in bundles called "fuel assemblies". Inside each fuel rod, pellets of uranium, or more commonly uranium oxide, are stacked end to end. Also inside the core are control rods, filled with pellets of substances like boron or hafnium or cadmium that readily capture neutrons. When the control rods are lowered into the core, they absorb neutrons, which thus cannot take part in the chain reaction. Conversely, when the control rods are lifted out of the way, more neutrons strike the fissile uranium-235 (U-235) or plutonium-239 (Pu-239) nuclei in nearby fuel rods, and the chain reaction intensifies. The core shroud, also located inside of the reactor, directs the water flow to cool the nuclear reactions inside of the core. The heat of the fission reaction is removed by the water, which also acts to moderate the neutron reactions.
Graphite-moderated reactors
There are also graphite moderated reactors in use.
One type uses solid nuclear graphite for the neutron moderator and ordinary water for the coolant. See the Soviet-made RBMK nuclear-power reactor. This was the type of reactor involved in the Chernobyl disaster.
In the Advanced Gas-cooled Reactor, a British design, the core is made of a graphite neutron moderator where the fuel assemblies are located. Carbon dioxide gas acts as a coolant and it circulates through the core, removing heat.
There have also been several experimental reactors that use graphite for moderation, such as the pebble bed reactor concepts and the molten-salt reactor experiment.
See also
Nuclear meltdown
Lists of nuclear disasters and radioactive incidents
Nuclear power
Nuclear reactor technology
References
Nuclear Reactor Analysis, John Wiley & Sons Canada, Ltd.
Nuclear power plant components
Nuclear technology | Nuclear reactor core | [
"Physics"
] | 500 | [
"Nuclear technology",
"Nuclear physics"
] |
3,112,544 | https://en.wikipedia.org/wiki/HD%20219134 | HD 219134 (also known as Gliese 892 or HR 8832) is a main-sequence star in the constellation of Cassiopeia. It is smaller and less luminous than the Sun, with a spectral class of K3V, which makes it an orange-hued star. HD 219134 is relatively close to our system, with an estimated distance of 21.34 light years. This star is close to the limit of apparent magnitude that can still be seen by the unaided eye. The limit is considered to be magnitude 6 for most observers. This star has a magnitude 9.4 optical companion at an angular separation of 106.6 arcseconds.
Planetary system
HD 219134 has a system of six known exoplanets. The innermost planet, HD 219134 b, is a rocky super-Earth based on its size (1.6 Earth radii) and density (6.4 grams per cubic cm). This and three additional exoplanets (one super-Earth, designated c and later found to be rocky as well; one Neptunian world, d; and one Jovian world, e) were deduced using HARPS-N radial velocity data by Motalebi et al. in 2015. Two months later, Vogt et al. published a paper on this system which found a six-planet solution, with planets b, c and d corresponding to those in Motalebi et al., f and g being new planets, and h corresponding to Motalebi's e but with different, and more accurate, estimated parameters.
A number of independent studies have been done regarding the planetary system of HD 219134, with some of their results conflicting with each other. As of March 2017, the star is known to have at least 5 planets, with two of them (HD 219134 b and c) known to be transiting, rocky super-Earths. While a 2016 study suggested that the radial velocity signal corresponding to planet f might be caused by stellar activity, it has been confirmed by subsequent studies in 2017 and 2021. Planet g has not been reported by subsequent studies, and a 2020 study did not find evidence of its claimed 94-day period, but instead found a period of 192 days.
Habitable zone
The conservative habitable zone (CHZ) of HD 219134 is estimated to extend from 0.516 to 0.948 AU. As of 2024, none of the planets orbiting the star are confirmed to orbit inside the habitable zone. The planet candidate HD 219134 g may orbit slightly interior to the inner edge of the habitable zone based on its initially published parameters, or may orbit within the habitable zone based on a more recent estimated orbital period of 192 days and semi-major axis of 0.603 AU. This planet is significantly more massive than Earth and therefore it likely retains a dense atmosphere, comparable to the Solar System's ice giants (see Mini-Neptune).
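The revised period (192 days) and semi-major axis (0.603 AU) quoted for candidate g can be cross-checked with Kepler's third law in solar units, P[yr]^2 = a[AU]^3 / M[M_sun]. A sketch of that check; the stellar mass of roughly 0.81 solar masses used here is an assumed value for this K dwarf, not a figure stated in the article:

```python
import math

# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[Msun].
# The 0.81 Msun stellar mass is an assumed (illustrative) value.
def orbital_period_days(a_au, m_star_solar):
    """Orbital period in days for a circular orbit of semi-major axis a_au."""
    return math.sqrt(a_au**3 / m_star_solar) * 365.25

print(round(orbital_period_days(0.603, 0.81)))  # ~190, close to the quoted 192 days
```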
References
External links
Cassiopeia (constellation)
0892
K-type main-sequence stars
Gliese, 0892
Suspected variables
219134
8832
114622
Durchmusterung objects
Planetary systems with six confirmed planets
Planetary transit variables | HD 219134 | [
"Astronomy"
] | 672 | [
"Cassiopeia (constellation)",
"Constellations"
] |
3,112,548 | https://en.wikipedia.org/wiki/Onion%20powder | Onion powder is dehydrated, ground onion used as a seasoning. It is a common ingredient in seasoned salt and spice mixes, such as beau monde seasoning. Some varieties are prepared using toasted onion. White, yellow, and red onions may be used. Onion powder is a commercially prepared food product that has several culinary uses. Onion powder can also be homemade.
Onion salt is a spice preparation using dried onion and salt as primary ingredients.
Commercial production
Commercial onion powders are prepared using dehydration, freeze-drying, vacuum-shelf drying and flow drying. Some commercial onion powders are irradiated as a treatment against potential microbial contamination. It readily absorbs water upon contact, so commercial varieties may be packaged in airtight containers with a liner atop the container. Onion powder with a moisture content of 4–5 percent is prone to caking when stored in warmer environments, with increased temperatures corresponding to a shorter time for the occurrence of caking. It is generally accepted that commercial onion powder is around ten times stronger in flavor compared to fresh onion.
Onion salt
Early commercial preparations of onion salt were simply a mixture of onion powder and salt. An example ratio for earlier commercial preparations is one part salt to every five parts of dehydrated onion. Contemporary versions typically utilize dried granulated onion and salt and usually include an anticaking agent. The salt may help prevent the loss of onion flavor in the mixture by reducing the evaporation of onion oil. The development of commercial onion salt preparations included formulating products that reduced the strong odor of onion in the product and on the breath of consumers who eat it.
Commercial preparation of onion salt involves the testing and sorting of onions by the degree of onion flavor they have, as flavor constituents can vary considerably among various onion varieties. This is done before mixing to produce a consistent final product. Some commercial onion salt preparations are never touched by human hands, as the stages of processing are all performed using automated processes.
Culinary uses
Onion powder may be used as a seasoning atop a variety of foods and dishes, such as pasta, pizza, and grilled chicken. It is a primary ingredient in beau monde seasoning, and is sometimes used as a meat rub. Onion powder is also an ingredient in some commercially prepared foods, such as sauces, soups, and salad dressings. Additionally, it can be used in various recipes like burgers or meatloaf.
Onion salt is used as a seasoning on finished dishes and as an ingredient in many types of dishes, such as meat and vegetable dishes, casseroles and soups.
See also
Celery powder
Celery salt
Chili powder
Garlic powder
Garlic salt
List of culinary herbs and spices
List of onion dishes
syn-Propanethial-S-oxide, the molecule that gives onions and onion powder their strong flavor.
References
Onion-based foods
Spices
Edible salt
Condiments
Food powders | Onion powder | [
"Chemistry"
] | 595 | [
"Edible salt",
"Salts"
] |
3,112,558 | https://en.wikipedia.org/wiki/Building%20Sites%20Bite | Building Sites Bite is a 1978 British short public information film produced by the Central Office of Information for the Health and Safety Executive and the Mighty Movie Company for British schools to warn children about the dangers of playing on building sites. It was written and directed by David Hughes and produced by Maggie Evans. The film is 28 minutes in duration.
Building Sites Bite was filmed as part of a national campaign responding to the deaths of more than 20 children in 1977 in building-site accidents. Most of the actors are children. Because of the style of filming and grim subject matter, Building Sites Bite is often compared to the earlier films Apaches (1975) and The Finishing Line (1975).
Plot
The film focuses entirely on the perspective of Ronald, a young boy who aspires to become a builder or surveyor when he grows up. His cousins Paul and Jane decide to test Ronald's know-it-all attitude by teleporting him to a building site, where he must avoid several hazards and obstacles without getting hurt. In each test Ronald disobeys various warning signs and ignores the dangers, resulting in him getting killed in each one.
Each time Ronald is about to die a heartbeat sound is played. In order, Ronald's deaths are displayed as him being buried alive in a trench collapse, electrocuted in a condemned building, run over by an earthmoving vehicle, breaking his skull against a metal retaining wall, crushed to death by a pile of bricks and finally drowning in a disused quarry.
Back in the real world, Ronald announces he intends to abandon his ambitions, and goes outside to play with Paul and Jane. Over the closing shot of the film, Paul reads out real-life stories of children who were killed in similar ways to those seen in the film.
Cast
Stephanie Coles as Auntie
Nigel Rhodes as Ronald
David McKail as dad
Jo Kendall as mum
Miranda Hunnisett as Jane
Terry Russell as Paul
Reception
In 2020 Bob Fischer wrote in Fortean Times: "Sensible Paul and Jane are visited by posh-but-dim cousin Ronald. 'I reckon he's a twit...' muses Paul, and employs comprehensively grim methods to prove it. Imagining himself and his sister as silver-suited cosmic overlords, he inflicts multiple imaginary deaths on his cravat-sporting nemesis by transporting him (via a garden shed TARDIS) to a succession of deserted building sites. Here, exposed electrical cables and collapsing walls repeatedly nudge Ronald from an increasingly thankless mortal coil."
Home media
Building Sites Bite was released by the BFI on the DVD COI Collection Vol 4: Stop! Look! Listen!, which also included other contemporary public information films such as Apaches. The film was also later re-released in a similar compilation, The Best of COI: Five Decades of Public Information Films (2020).
References
1978 films
Public information films
Child safety
Construction safety
British documentary films
1970s educational films
1970s British films
British educational films
Films about child death | Building Sites Bite | [
"Engineering"
] | 610 | [
"Construction",
"Construction safety"
] |
3,112,575 | https://en.wikipedia.org/wiki/Intertidal%20zone | The intertidal zone or foreshore is the area above water level at low tide and underwater at high tide; in other words, it is the part of the littoral zone within the tidal range. This area can include several types of habitats with various species of life, such as sea stars, sea urchins, and many species of coral with regional differences in biodiversity. Sometimes it is referred to as the littoral zone or seashore, although those can be defined as a wider region.
The intertidal zone also includes steep rocky cliffs, sandy beaches, bogs or wetlands (e.g., vast mudflats). This area can be a narrow strip, such as in Pacific islands that have only a narrow tidal range, or can include many meters of shoreline where shallow beach slopes interact with high tidal excursion. The peritidal zone is similar but somewhat wider, extending from above the highest tide level to below the lowest. Organisms in the intertidal zone are well-adapted to their environment, facing high levels of interspecific competition and the rapidly changing conditions that come with the tides. The intertidal zone is also home to several species from many different phyla (Porifera, Annelida, Coelenterata, Mollusca, Arthropoda, etc.).
The water that comes with the tides can vary from brackish waters, fresh with rain, to highly saline and dry salt, with drying between tidal inundations. Wave splash can dislodge residents from the littoral zone. With the intertidal zone's high exposure to sunlight, the temperature can range from very hot with full sunshine to near freezing in colder climates. Some microclimates in the littoral zone are moderated by local features and larger plants such as mangroves. Adaptations in the littoral zone allow the utilization of nutrients supplied in high volume on a regular basis from the sea, which is actively moved to the zone by tides. The edges of habitats, in this case the land and sea, are themselves often significant ecosystems, and the littoral zone is a prime example.
A typical rocky shore can be divided into a spray zone or splash zone (also known as the supratidal zone), which is above the spring high-tide line and is covered by water only during storms, and an intertidal zone, which lies between the high and low tidal extremes. Along most shores, the intertidal zone can be clearly separated into the following subzones: high tide zone, middle tide zone, and low tide zone. The intertidal zone is one of a number of marine biomes or habitats, including estuaries, the neritic zone, the photic zone, and deep zones.
Zonation
Marine biologists divide the intertidal region into three zones (low, middle, and high), based on the overall average exposure of the zone. The low intertidal zone, which borders on the shallow subtidal zone, is only exposed to air at the lowest of low tides and is primarily marine in character. The mid intertidal zone is regularly exposed and submerged by average tides. The high intertidal zone is only covered by the highest of the high tides, and spends much of its time as terrestrial habitat. The high intertidal zone borders on the splash zone (the region above the highest still-tide level, but which receives wave splash). On shores exposed to heavy wave action, the intertidal zone will be influenced by waves, as the spray from breaking waves will extend the intertidal zone.
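The zonation scheme above amounts to classifying a shore elevation by how it compares with the tide levels that cover or expose it. A minimal sketch of that classification; the numeric tide levels used as defaults are hypothetical placeholders, not values from the text:

```python
# Classify a shore elevation into the subzones described above.
# Tide levels (metres above an arbitrary datum) are illustrative placeholders.
def intertidal_zone(elevation, lowest_low=0.2, mean_low=0.8,
                    mean_high=2.4, highest_high=3.0):
    """Return the intertidal subzone for a given shore elevation."""
    if elevation < lowest_low:
        return "subtidal"          # always submerged
    if elevation < mean_low:
        return "low intertidal"    # exposed only at the lowest low tides
    if elevation < mean_high:
        return "mid intertidal"    # regularly exposed and submerged
    if elevation < highest_high:
        return "high intertidal"   # covered only by the highest high tides
    return "splash zone"           # above still-tide levels; wetted by spray

print(intertidal_zone(1.5))  # mid intertidal
```

On wave-exposed shores the splash boundary would sit higher than the highest still-tide level, as the text notes, so the `highest_high` cutoff is only a still-water approximation.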
Depending on the substratum and topography of the shore, additional features may be noticed. On rocky shores, tide pools form in depressions that fill with water as the tide rises. Under certain conditions, such as those at Morecambe Bay, quicksand may form.
Low tide zone (lower littoral)
This subregion is mostly submerged – it is only exposed at the point of low tide and for a longer period of time during extremely low tides. This area is teeming with life; the most notable difference between this subregion and the other three is that there is much more marine vegetation, especially seaweeds. There is also a great biodiversity. Organisms in this zone generally are not well adapted to periods of dryness and temperature extremes. Some of the organisms in this area are abalone, sea anemones, brown seaweed, chitons, crabs, green algae, hydroids, isopods, limpets, mussels, nudibranchs, sculpin, sea cucumber, sea lettuce, sea palms, starfish, sea urchins, shrimp, snails, sponges, surf grass, tube worms, and whelks. Creatures in this area can grow to larger sizes because there is more available energy in the localized ecosystem. Also, marine vegetation can grow to much greater sizes than in the other three intertidal subregions due to the better water coverage. The water is shallow enough to allow plenty of sunlight to reach the vegetation to allow substantial photosynthetic activity, and the salinity is at almost normal levels. This area is also protected from large predators such as fish because of the wave action and the relatively shallow water.
Ecology
The intertidal region is an important model system for the study of ecology, especially on wave-swept rocky shores. The region contains a high diversity of species, and the zonation created by the tides causes species ranges to be compressed into very narrow bands. This makes it relatively simple to study species across their entire cross-shore range, something that can be extremely difficult in, for instance, terrestrial habitats that can stretch thousands of kilometres. Communities on wave-swept shores also have high turnover due to disturbance, so it is possible to watch ecological succession over years rather than decades.
The burrowing invertebrates that make up large portions of sandy beach ecosystems are known to travel relatively great distances in cross-shore directions as beaches change on the order of days, semilunar cycles, seasons, or years. The distribution of some species has been found to correlate strongly with geomorphic datums such as the high tide strand and the water table outcrop.
Since the foreshore is alternately covered by the sea and exposed to the air, organisms living in this environment must be adapted to both wet and dry conditions. Intertidal zone biomass reduces the risk of shoreline erosion from high-intensity waves. Typical inhabitants of the intertidal rocky shore include sea urchins, sea anemones, barnacles, chitons, crabs, isopods, mussels, starfish, and many marine gastropod molluscs such as limpets and whelks. Modes of reproduction, both sexual and asexual, vary among inhabitants of the intertidal zones.
Humans have historically used intertidal zones as foraged food sources during low tide. Migratory birds also rely on intertidal zones as feeding areas, since their low-water habitats hold an abundance of mollusks and other marine species.
Legal issues
As with the dry sand part of a beach, legal and political disputes can arise over the ownership and use of the foreshore. One recent example is the New Zealand foreshore and seabed controversy. In legal discussions, the foreshore is often referred to as the wet-sand area.
For privately owned beaches in the United States, some states such as Massachusetts use the low-water mark as the dividing line between the property of the State and that of the beach owner; however the public still has fishing, fowling, and navigation rights to the zone between low and high water. Other states such as California use the high-water mark.
In the United Kingdom, the foreshore is generally deemed to be owned by the Crown, with exceptions for what are termed several fisheries, which can be historic deeds to title, dating back to King John's time or earlier, and the Udal Law, which applies generally in Orkney and Shetland.
In Greece, according to the L. 2971/01, the foreshore zone is defined as the area of the coast that might be reached by the maximum climbing of the waves on the coast (maximum wave run-up on the coast) in their maximum capacity (maximum referring to the "usually maximum winter waves" and of course not to exceptional cases, such as tsunamis). The foreshore zone, a part of the exceptions of the law, is public, and permanent constructions are not allowed on it. In Italy, about half the shoreline is owned by the government but leased to private beach clubs called lidos.
In the East African and West Indian Ocean region, intertidal zone management is often not treated as a priority because it is not seen as contributing to collective economic productivity. In workshop questionnaires, eighty-six percent of respondents attributed the mismanagement of mangrove and coastal ecosystems to a lack of knowledge of how to steward those ecosystems, while forty-four percent of respondents stated that a fair amount of knowledge is applied to fisheries in those regions.
Threats
Intertidal zones are sensitive habitats with an abundance of marine species that can experience ecological hazards associated with tourism and human-induced environmental impacts. A variety of other threats that have been summarized by scientists include nutrient pollution, overharvesting, habitat destruction, and climate change. Habitat destruction is advanced through activities including harvesting fisheries with drag nets and a neglect of the sensitivity of intertidal zones.
Gallery
See also
Ballantine Scale
Ecological forecasting
Littoral series
NaGISA
Shorezone
Tidelands
References
External links
Watch the online documentary The Intertidal Zone
Aquatic ecology
Tides
Marine biology
Habitats
Coastal geography
Coastal and oceanic landforms
Fisheries science
Physical oceanography
Oceanographical terminology
"Physics",
"Biology"
] | 2,009 | [
"Applied and interdisciplinary physics",
"Marine biology",
"Ecosystems",
"Physical oceanography",
"Aquatic ecology"
] |
3,112,664 | https://en.wikipedia.org/wiki/Avionics%20Full-Duplex%20Switched%20Ethernet | Avionics Full-Duplex Switched Ethernet (AFDX), also ARINC 664, is a data network, patented by international aircraft manufacturer Airbus, for safety-critical applications that utilizes dedicated bandwidth while providing deterministic quality of service (QoS). AFDX is a worldwide registered trademark by Airbus. The AFDX data network is based on Ethernet technology using commercial off-the-shelf (COTS) components. The AFDX data network is a specific implementation of ARINC Specification 664 Part 7, a profiled version of an IEEE 802.3 network per parts 1 & 2, which defines how commercial off-the-shelf networking components will be used for future generation Aircraft Data Networks (ADN). The six primary characteristics of an AFDX data network are full-duplex operation, redundancy, determinism, high-speed performance, a switched topology, and a profiled network.
History
Many commercial aircraft use the ARINC 429 standard developed in 1977 for safety-critical applications. ARINC 429 utilizes a unidirectional bus with a single transmitter and up to twenty receivers. A data word consists of 32 bits communicated over a twisted pair cable using the bipolar return-to-zero modulation. There are two speeds of transmission: high speed operates at 100 kbit/s and low speed operates at 12.5 kbit/s. ARINC 429 operates in such a way that its single transmitter communicates in a point-to-point connection, thus requiring a significant amount of wiring which amounts to added weight.
Another standard, ARINC 629, introduced by Boeing for the 777 provided increased data speeds of up to 2 Mbit/s and allowing a maximum of 120 data terminals. This ADN operates without the use of a bus controller thereby increasing the reliability of the network architecture. The drawback is that it requires custom hardware which can add significant cost to the aircraft. Because of this, other manufacturers did not openly accept the ARINC 629 standard.
AFDX was designed as the next-generation aircraft data network. Building on standards from the IEEE 802.3 committee (commonly known as Ethernet) allows the use of commercial off-the-shelf hardware, reducing costs and development time. AFDX is one implementation of deterministic Ethernet defined by ARINC Specification 664 Part 7. AFDX was developed by Airbus Industries for the A380, initially to address real-time issues for fly-by-wire system development. Multiple switches can be bridged together in a cascaded star topology. This type of network can significantly reduce wire runs, and thus the weight of the aircraft. In addition, AFDX can provide quality of service and dual link redundancy.
Building on the experience from the A380, the Airbus A350 also uses an AFDX network, with avionics and systems supplied by Rockwell Collins. AFDX using fiber optic rather than copper interconnections is used on the Boeing 787 Dreamliner.
Airbus and its EADS parent company have made AFDX licenses available through the EADS Technology Licensing initiative, including agreements with Selex ES and Vector Informatik GmbH.
Overview
AFDX adopted concepts such as the token bucket from the telecom standards, Asynchronous Transfer Mode (ATM), to fix the shortcomings of IEEE 802.3 Ethernet. By adding key elements from ATM to those already found in Ethernet, and constraining the specification of various options, a highly reliable full-duplex deterministic network is created providing guaranteed bandwidth and quality of service (QoS). Through the use of full-duplex Ethernet, the possibility of transmission collisions is eliminated. The network is designed in such a way that all critical traffic is prioritized using QoS policies so delivery, latency, and jitter are all guaranteed to be within set parameters. A highly intelligent switch, common to the AFDX network, is able to buffer transmission and reception packets. Through the use of twisted pair or fiber optic cables, full-duplex Ethernet uses two separate pairs or strands for transmitting and receiving the data. AFDX extends standard Ethernet to provide high data integrity and deterministic timing. Further a redundant pair of networks is used to improve the system integrity (although a virtual link may be configured to use one or the other network only). It specifies interoperable functional elements at the following OSI reference model layers:
Data link (MAC and virtual link addressing concept);
Network (IP and ICMP);
Transport (UDP and optionally TCP)
Application (network) (sampling, queuing, SAP, TFTP and SNMP).
The main elements of an AFDX network are:
AFDX end systems
AFDX switches
AFDX links
Virtual links
The central feature of an AFDX network is its virtual links (VL). In one abstraction, it is possible to visualise the VLs as an ARINC 429-style network, each with one source and one or more destinations. Virtual links are unidirectional logic paths from the source end-system to all of the destination end-systems. Unlike a traditional Ethernet switch, which switches frames based on the Ethernet destination or MAC address, AFDX routes packets using a virtual link ID, which is carried in the same position in an AFDX frame as the MAC destination address in an Ethernet frame. However, in the case of AFDX, this virtual link ID identifies the data carried rather than the physical destination. The virtual link ID is a 16-bit unsigned integer value that follows a constant 32-bit field. The switches are designed to route an incoming frame from one, and only one, end system to a predetermined set of end systems. There can be one or more receiving end systems connected within each virtual link. Each virtual link is allocated dedicated bandwidth [sum of all VL bandwidth allocation gap (BAG) rates x MTU] with the total amount of bandwidth defined by the system integrator. However, total bandwidth cannot exceed the maximum available bandwidth on the network. Bi-directional communication therefore requires the specification of a complementary VL.
Each VL is frozen in specification to ensure that the network has a designed maximum traffic, hence determinism. Also the switch, having a VL configuration table loaded, can reject any erroneous data transmission that may otherwise swamp other branches of the network. Additionally, there can be sub-virtual links (sub-VLs) that are designed to carry less critical data. Sub-virtual links are assigned to a particular virtual link. Data are read in a round-robin sequence among the virtual links with data to transmit. Also sub-virtual links do not provide guaranteed bandwidth or latency due to the buffering, but AFDX specifies that latency is measured from the traffic regulator function anyway.
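The switching behavior described above — forwarding by virtual-link ID to a preconfigured set of end systems, and rejecting traffic on unconfigured VLs — can be sketched minimally. The VL IDs and port names below are hypothetical, for illustration only:

```python
# Minimal sketch of AFDX-style switching: frames are forwarded by
# virtual-link ID to a static set of destination ports, rather than by
# learned MAC destination as in ordinary Ethernet switching.
from dataclasses import dataclass

@dataclass
class Frame:
    vl_id: int      # 16-bit virtual link identifier
    payload: bytes

# Static configuration table loaded into the switch: VL ID -> output ports.
# These IDs and port names are made up for this example.
VL_CONFIG = {
    100: {"port1", "port3"},   # one source, two receiving end systems
    101: {"port2"},
}

def forward(frame: Frame) -> set:
    """Forward a frame on all configured ports; drop unknown VLs."""
    ports = VL_CONFIG.get(frame.vl_id)
    if ports is None:
        return set()           # erroneous traffic is rejected, not flooded
    return ports

print(sorted(forward(Frame(100, b"data"))))  # ['port1', 'port3']
print(forward(Frame(999, b"bad")))           # set() -> dropped
```

The key contrast with a learning Ethernet switch is that the table is frozen at configuration time, which is what makes the traffic pattern deterministic.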
BAG rate
BAG stands for bandwidth allocation gap, one of the main features of the AFDX protocol. The BAG is the minimum interval between successive frames on a virtual link, and thus bounds the maximum rate at which data can be sent on that link; transmission at that interval is guaranteed. When setting the BAG for each VL, care must be taken to leave enough bandwidth for the other VLs, and the total allocated bandwidth cannot exceed 100 Mbit/s.
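The bandwidth constraint above can be sketched as a simple feasibility check: each VL's worst-case bandwidth is one MTU-sized frame per BAG, and the sum across all VLs must stay below the link rate. The VL table below is an illustrative assumption, not a real aircraft configuration:

```python
# Feasibility check for an AFDX bandwidth budget: each virtual link may
# send at most one MTU-sized frame per BAG, and the summed worst-case
# bandwidth of all VLs must fit within the 100 Mbit/s physical link.

def vl_bandwidth_bps(bag_ms: float, mtu_bytes: int) -> float:
    """Worst-case bandwidth of one VL: one MTU-sized frame per BAG."""
    return (mtu_bytes * 8) / (bag_ms / 1000.0)

def allocation_feasible(vls, link_rate_bps=100_000_000) -> bool:
    """True if the summed worst-case VL bandwidth fits the physical link."""
    total = sum(vl_bandwidth_bps(bag, mtu) for bag, mtu in vls)
    return total <= link_rate_bps

# Hypothetical VL table: (BAG in ms, MTU in bytes)
vls = [(2, 1518), (8, 512), (128, 256)]
print(allocation_feasible(vls))  # True: roughly 6.6 Mbit/s in total
```

In practice the system integrator performs this kind of budgeting over the whole VL configuration before it is frozen into the switch tables.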
Switching of virtual links
Each switch has filtering, policing, and forwarding functions that should be able to process at least 4096 VLs. Therefore, in a network with multiple switches (cascaded star topology), the total number of virtual links is nearly limitless. There is no specified limit to the number of virtual links that can be handled by each end system, although this will be determined by the BAG rates and maximum frame size specified for each VL versus the Ethernet data rate. However, the number of sub-VLs that may be created in a single virtual link is limited to four. The switch must also be non-blocking at the data rates that are specified by the system integrator, and in practice this may mean that the switch shall have a switching capacity that is the sum of all of its physical ports.
Since AFDX utilizes the Ethernet protocol at the MAC layer, it is possible to use high performance COTS switches with Layer 2 routing as AFDX switches for testing purposes as a cost-cutting measure. However, some features of a real AFDX switch may be missing, such as traffic policing and redundancy functions.
Usage
The AFDX bus is used in Airbus A380, Boeing 787, Airbus A400M, Airbus A350, Sukhoi Superjet 100, ATR 42, ATR 72 (-600), AgustaWestland AW101, AgustaWestland AW189, AgustaWestland AW169, Irkut MC-21, Bombardier Global Express, Airbus A220, Learjet 85, Comac ARJ21, Comac C919 and AgustaWestland AW149.
References
External links
AFDX/ARINC664P7 AIM Avionics Databus Solutions, Interface Boards for AFDX/ARINC-664
PBA.pro-AFDX AIM Avionics Databus Solutions, Analyzers for AFDX/ARINC-664 and more
AFDX Training by AIM Avionics Databus Solutions, Interface Boards for AFDX/ARINC-664
Goebel AFDX by Mercury Instruments, Inc. ARINC664/AFDX Simulation and Verification Solution.
ARINC-664 part 7(AFDX) Tutorial (video) from Excalibur Systems Inc.
Embvue AFDX | Arinc 664 by Embvue
AFDX/ARINC 664 Tutorial from GE Intelligent Platforms
AFDX Suite - AFDX Tools - software solution for an easy analyzing and simulation of AFDX systems (EC Comp GmbH)
Avionics Ethernet Data Xplorer ARINC-664P7 Simulyzer - Software for monitoring, simulating and testing ARINC-664P7 / AFDX systems (MHZ Solutions)
AFDX SID data frame structure (MHZ Solutions)
Industrial Ethernet
Avionics | Avionics Full-Duplex Switched Ethernet | [
"Technology",
"Engineering"
] | 2,098 | [
"Industrial Ethernet",
"Avionics",
"Aircraft instruments"
] |
3,112,767 | https://en.wikipedia.org/wiki/Zirconium%20hydride | Zirconium hydride describes an alloy made by combining zirconium and hydrogen. Hydrogen acts as a hardening agent, preventing dislocations in the zirconium atom crystal lattice from sliding past one another. Varying the amount of hydrogen and the form of its presence in the zirconium hydride (precipitated phase) controls qualities such as the hardness, ductility, and tensile strength of the resulting zirconium hydride. Zirconium hydride with increased hydrogen content can be made harder and stronger than zirconium, but such zirconium hydride is also less ductile than zirconium.
Material properties
Zirconium is found in the Earth's crust only in the form of an ore, usually a zirconium silicate, such as zircon. Zirconium is extracted from zirconium ore by removing the oxygen and silica. This process, known as the Kroll process, was first applied to titanium. The Kroll process results in an alloy containing hafnium. The hafnium and other impurities are removed in a subsequent step. Zirconium hydride is created by combining refined zirconium with hydrogen. Like titanium, solid zirconium dissolves hydrogen quite readily.
The density of zirconium hydride varies with its hydrogen content and ranges between 5.56 and 6.52 g cm−3.
Even in the narrow range of concentrations which make up zirconium hydride, mixtures of hydrogen and zirconium can form a number of different structures, with very different properties. Understanding such properties is essential to making quality zirconium hydride. At room temperature, the most stable form of zirconium is the hexagonal close-packed (HCP) structure α-zirconium. It is a fairly soft metallic material that can dissolve only a small concentration of hydrogen, no more than 0.069 wt% at 550 °C. If the mixture contains more than 0.069% hydrogen at elevated temperatures, the zirconium transforms into a body-centred cubic (BCC) structure called β-zirconium. It can dissolve considerably more hydrogen, more than 1.2% hydrogen above 900 °C.
When zirconium hydrides with less than 0.7% hydrogen, known as hypoeutectoid zirconium hydrides, are cooled from the β phase, the mixture attempts to revert to the α phase, leaving behind an excess of hydrogen.
Another polymorphic form, the γ phase, is generally accepted to be metastable.
Zirconium hydrides are odorless, dark gray to black metallic powders.
They behave as usual metals in terms of electrical conductivity and magnetic properties (paramagnetic, unless contaminated with ferromagnetic impurities). Their structure and composition is stable at ambient conditions. Similar to other metal hydrides, different crystalline phases of zirconium hydrides are conventionally labeled with Greek letters, and α is reserved for the metal. The known ZrHx phases are γ (x = 1), δ (x = 1.5–1.65) and ε (x = 1.75–2). Fractional x values often correspond to mixtures, so the compositions with x = 0.8–1.5 usually contain a mixture of α, γ and δ phases, and δ and ε phases coexist for x = 1.65–1.75. As a function of increasing x, the transition between δ-Zr and ε-Zr is observed as a gradual distortion of the face-centered cubic δ (fluorite-type) to face-centered tetragonal ε lattice. This distortion is accompanied by a rapid decrease in Vickers hardness, which is constant at 260 HV for x < 1.6, linearly decreases to 160 HV for 1.6 < x < 1.75 and stabilizes at about 160 HV for 1.75 < x < 2.0. This hardness decrease is accompanied by the decrease in magnetic susceptibility. The mass density behaves differently with the increasing hydrogen content: it decreases linearly from 6.52 to 5.66 g/cm3 for x = 0–1.6 and changes little for x = 1.6–2.0.
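The composition dependence quoted above lends itself to a small numerical sketch: density falls linearly from 6.52 to 5.66 g/cm³ over x = 0–1.6, while hardness stays near 260 HV below x = 1.6, drops linearly to about 160 HV by x = 1.75, and then stabilizes. The functions below are piecewise-linear interpolations of those reported trends, not a fitted materials model:

```python
def density_g_cm3(x: float) -> float:
    """Approximate mass density of ZrHx: linear from 6.52 (x = 0)
    to 5.66 g/cm3 (x = 1.6), then roughly constant up to x = 2.0."""
    if x <= 1.6:
        return 6.52 + (5.66 - 6.52) * x / 1.6
    return 5.66

def vickers_hardness_hv(x: float) -> float:
    """Approximate Vickers hardness of ZrHx from the reported trend:
    ~260 HV for x < 1.6, linear drop to ~160 HV at x = 1.75, then flat."""
    if x < 1.6:
        return 260.0
    if x < 1.75:
        return 260.0 + (160.0 - 260.0) * (x - 1.6) / 0.15
    return 160.0

print(round(density_g_cm3(0.8), 2))      # 6.09 g/cm3 midway through the ramp
print(round(vickers_hardness_hv(1.7), 1))  # hardness partway down the drop
```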
Preparation and chemical properties
Zirconium hydrides form upon interaction of the metal with hydrogen gas. Whereas this reaction occurs even at room temperature, homogeneous bulk hydrogenation is usually achieved by annealing at temperatures of 400–600 °C for a period between several hours and a few weeks. At room temperature, zirconium hydrides quickly oxidize in air, and even in high vacuum. The formed nanometer-thin layer of oxide stops further oxygen diffusion into the material, and thus the change in composition due to oxidation can usually be neglected. However, the oxidation proceeds deeper into the bulk with increasing temperature. The hydrogen is anionic due to the electronegativity difference between Zr and H. When prepared as thin films, the crystal structure can be improved and surface oxidation minimized.
Zirconium hydrides are soluble in hydrofluoric acid or alcohol; they react violently with water, acids, oxidizers or halogenated compounds.
Applications
Formation of zirconium hydrides is an important factor in the operation of several types of nuclear reactors, such as boiling water reactors Fukushima I and II, which suffered from a series of explosions caused by the 2011 Tōhoku earthquake and tsunami. Their uranium fuel pellets are enclosed in metal rods made from Zircaloy – an alloy of typically about 98.25% zirconium with 1.5% tin and minor amounts of other metals. Zircaloy is used because of its small absorption cross-section for thermal neutrons and superior mechanical and corrosion properties to those of most metals, including zirconium. The rods are cooled by streaming water which gradually oxidizes zirconium, liberating hydrogen. In Fukushima reactors, the reactor cooling system failed because of the tsunami. The resulting temperature increase accelerated chemical reactions and caused accumulation of significant amounts of hydrogen, which exploded upon reaction with oxygen when the gas was released to the atmosphere.
In regular operation, most hydrogen is safely neutralized in the reactor systems; however, a fraction of 5–20% diffuses into the Zircaloy rods forming zirconium hydrides. This process mechanically weakens the rods because the hydrides have lower hardness and ductility than metal. Only a few percent of hydrogen can dissolve in zirconium. Excess hydrogen forms voids that weaken Zircaloy. Among Zircaloys, Zircaloy-4 is the least susceptible to hydrogen blistering.
It is also used as a neutron moderator in thermal-spectrum nuclear reactors such as the TRIGA research reactor developed by General Atomics or the Soviet TOPAZ nuclear reactors. At neutron energies above 0.14 eV it is as effective at moderating a nuclear reactor as elemental hydrogen (the best known material), but far more dense, and therefore permits compact reactors with high power per unit volume. It has neutron resonances that prevent almost all moderation at energies below 0.14 eV. Zirconium deuteride is superior, because it has a lower neutron absorption cross-section than ordinary hydrogen (protium), decreasing parasitic neutron absorption in a reactor.
As a pure powder, zirconium hydrides are used as hydrogenation catalysts, in powder metallurgy, and as getters in the vacuum tube industry. In vacuum systems, zirconium hydrides help establish a seal between a metal and ceramic. In this method, a hydride powder is mixed with the sealing metal; heating the mixture results in decomposition of the hydride. The evolving hydrogen cleans up the surrounding area, and the produced metal flows and forms a seal even at temperatures as low as 300 °C.
ZrH2 is used in powder metallurgy, as a hydrogenation catalyst, and as a reducing agent, vacuum tube getter, and a foaming agent in production of metal foams. Other uses include acting as a fuel in pyrotechnic compositions, namely pyrotechnic initiators.
Safety
Powdered zirconium hydrides are flammable and can ignite and explode if exposed to heat, fire, or sparks. When heated above 300 °C, they decompose releasing hydrogen gas, which is also flammable.
References
External links
Google books search results for the dedicated conference named "Zirconium in the nuclear industry"
Metal hydrides
Zirconium alloys
Neutron moderators | Zirconium hydride | [
"Chemistry"
] | 1,858 | [
"Inorganic compounds",
"Reducing agents",
"Alloys",
"Metal hydrides",
"Zirconium alloys"
] |
3,112,875 | https://en.wikipedia.org/wiki/Computational%20immunology | In academia, computational immunology is a field of science that encompasses high-throughput genomic and bioinformatics approaches to immunology. The field's main aim is to convert immunological data into computational problems, solve these problems using mathematical and computational approaches and then convert these results into immunologically meaningful interpretations.
Introduction
The immune system is a complex system of the human body, and understanding it is one of the most challenging topics in biology. Immunology research is important for understanding the mechanisms underlying the body's defenses, for developing drugs for immunological diseases, and for maintaining health. Recent advances in genomic and proteomic technologies have transformed immunology research drastically. Sequencing of the human and other model organism genomes has produced increasingly large volumes of data relevant to immunology research; at the same time, huge amounts of functional and clinical data are being reported in the scientific literature and stored in clinical records. Recent advances in bioinformatics or computational biology helped to understand and organize these large-scale data and gave rise to a new area called computational immunology, or immunoinformatics.
Computational immunology is a branch of bioinformatics based on similar concepts and tools, such as sequence alignment and protein structure prediction. Immunomics, a discipline like genomics and proteomics, is a science that combines immunology with computer science, mathematics, chemistry, and biochemistry for large-scale analysis of immune system functions. It aims to study the complex protein–protein interactions and networks and allows a better understanding of immune responses and their role during normal, diseased, and reconstitution states. Computational immunology is the part of immunomics focused on analyzing large-scale experimental data.
History
Computational immunology began over 90 years ago with the theoretic modeling of malaria epidemiology. At that time, the emphasis was on the use of mathematics to guide the study of disease transmission. Since then, the field has expanded to cover all other aspects of immune system processes and diseases.
Immunological database
Recent advances in sequencing and proteomics technology have produced a manifold increase in the generation of molecular and immunological data. The data are diverse enough to be categorized in different databases according to their use in research. To date, a total of 31 immunological databases are noted in the Nucleic Acids Research (NAR) Database Collection; these are given in the following table, together with some further immune-related databases. The information given in the table is taken from the database descriptions in the NAR Database Collection.
Online resources for allergy information are also available on http://www.allergen.org. Such data is valuable for investigation of cross-reactivity between known allergens and analysis of potential allergenicity in proteins. The Structural Database of Allergen Proteins (SDAP) stores information of allergenic proteins. The Food Allergy Research and Resource Program (FARRP) Protein Allergen-Online Database contains sequences of known and putative allergens derived from scientific literature and public databases. Allergome emphasizes the annotation of allergens that result in an IgE-mediated disease.
Tools
A variety of computational, mathematical and statistical methods are available and reported. These tools are helpful for collection, analysis, and interpretation of immunological data. They include text mining, information management, sequence analysis, analysis of molecular interactions, and mathematical models that enable advanced simulations of immune system and immunological processes.
Attempts are being made for the extraction of interesting and complex patterns from non-structured text documents in the immunological domain, such as categorization of allergen cross-reactivity information, identification of cancer-associated gene variants and the classification of immune epitopes.
Immunoinformatics is using the basic bioinformatics tools such as ClustalW, BLAST, and TreeView, as well as specialized immunoinformatics tools, such as EpiMatrix, IMGT/V-QUEST for IG and TR sequence analysis, IMGT/ Collier-de-Perles and IMGT/StructuralQuery for IG variable domain structure analysis. Methods that rely on sequence comparison are diverse and have been applied to analyze HLA sequence conservation, help verify the origins of human immunodeficiency virus (HIV) sequences, and construct homology models for the analysis of hepatitis B virus polymerase resistance to lamivudine and emtricitabine.
There are also computational models that focus on protein–protein interactions and networks, as well as tools used for T- and B-cell epitope mapping, proteasomal cleavage site prediction, and TAP–peptide prediction. Experimental data are essential for designing and validating the models that predict various molecular targets; computational immunology is thus an interplay between experimental data and mathematically designed computational tools.
Applications
Allergies
Allergies, while a critical subject of immunology, also vary considerably among individuals and sometimes even among genetically similar individuals. The assessment of protein allergenic potential focuses on three main aspects: (i) immunogenicity; (ii) cross-reactivity; and (iii) clinical symptoms. Immunogenicity is due to responses of an IgE antibody-producing B cell and/or of a T cell to a particular allergen. Therefore, immunogenicity studies focus mainly on identifying recognition sites of B-cells and T-cells for allergens. The three-dimensional structural properties of allergens control their allergenicity.
Immunoinformatics tools can help predict protein allergenicity and will become increasingly important in the screening of novel foods before their wide-scale release for human use. Thus, major efforts are under way to build reliable, broad-based allergy databases and to combine them with well-validated prediction tools in order to enable the identification of potential allergens in genetically modified drugs and foods. Though these developments are at an early stage, the World Health Organization and the Food and Agriculture Organization have proposed guidelines for evaluating the allergenicity of genetically modified foods. According to the Codex Alimentarius, a protein is potentially allergenic if it possesses an identity of ≥6 contiguous amino acids or ≥35% sequence similarity over an 80 amino acid window with a known allergen. Though such rules exist, their inherent limitations have started to become apparent, and exceptions to the rules have been well reported.
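The Codex criterion quoted above lends itself to a direct sketch: flag a query protein if it shares a run of ≥6 identical contiguous residues with a known allergen, or ≥35% identity over any 80-amino-acid window. The code below uses exact, ungapped identity as a stand-in for the alignment-based similarity used in practice, so it illustrates the rule rather than providing a validated screening method:

```python
def has_contiguous_match(query: str, allergen: str, k: int = 6) -> bool:
    """Any shared run of k identical contiguous residues?"""
    kmers = {allergen[i:i + k] for i in range(len(allergen) - k + 1)}
    return any(query[i:i + k] in kmers for i in range(len(query) - k + 1))

def window_identity_flag(query: str, allergen: str,
                         window: int = 80, threshold: float = 0.35) -> bool:
    """>=35% identity over any 80-residue window of an ungapped overlay
    (a simplification of the alignment used in real screening tools)."""
    n = min(len(query), len(allergen))
    if n < window:
        return False
    for start in range(n - window + 1):
        matches = sum(query[start + j] == allergen[start + j]
                      for j in range(window))
        if matches / window >= threshold:
            return True
    return False

def potentially_allergenic(query: str, allergen: str) -> bool:
    return (has_contiguous_match(query, allergen)
            or window_identity_flag(query, allergen))

print(potentially_allergenic("MKTAYIAKQR", "XXTAYIAKXX"))  # True: shared TAYIAK
```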
Infectious diseases and host responses
In the study of infectious diseases and host responses, the mathematical and computer models are a great help. These models were very useful in characterizing the behavior and spread of infectious disease, by understanding the dynamics of the pathogen in the host and the mechanisms of host factors which aid pathogen persistence. Examples include Plasmodium falciparum and nematode infection in ruminants.
Much has been done in understanding immune responses to various pathogens by integrating genomics and proteomics with bioinformatics strategies. Many exciting developments in large-scale screening of pathogens are currently taking place. The National Institute of Allergy and Infectious Diseases (NIAID) has initiated an endeavor for the systematic mapping of B and T cell epitopes of category A–C pathogens. These pathogens include Bacillus anthracis (anthrax), Clostridium botulinum toxin (botulism), Variola major (smallpox), Francisella tularensis (tularemia), viral hemorrhagic fevers, Burkholderia pseudomallei, Staphylococcus enterotoxin B, yellow fever, influenza, rabies, Chikungunya virus, etc.
These efforts could lead to algorithms that help identify the conserved regions of pathogen sequences, which in turn would be useful for vaccine development and for limiting the spread of infectious disease. Examples include a method for identification of vaccine targets from protein regions of conserved HLA binding and computational assessment of cross-reactivity of broadly neutralizing antibodies against viral pathogens. These examples illustrate the power of immunoinformatics applications to help solve complex problems in public health. Immunoinformatics could accelerate the discovery process dramatically and potentially shorten the time required for vaccine development. Immunoinformatics tools have been used to design vaccines against SARS-CoV-2, Dengue virus and Leishmania.
Immune system function
Using this technology, it is possible to model the workings of the immune system. It has been used to model T-cell-mediated suppression, peripheral lymphocyte migration, T-cell memory, tolerance, thymic function, and antibody networks. Models help predict the dynamics of pathogen toxicity and T-cell memory in response to different stimuli. Several models are also helpful in understanding the nature of specificity in the immune network and immunogenicity.
For example, it was useful to examine the functional relationship between TAP peptide transport and HLA class I antigen presentation. TAP is a transmembrane protein responsible for the transport of antigenic peptides into the endoplasmic reticulum, where MHC class I molecules can bind them and present them to T cells. As TAP does not bind all peptides equally, TAP-binding affinity could influence the ability of a particular peptide to gain access to the MHC class I pathway. An artificial neural network (ANN), a type of computer model, was used to study peptide binding to human TAP and its relationship with MHC class I binding. Using this method, the affinity of HLA-binding peptides for TAP was found to differ according to the HLA supertype concerned. This research could have important implications for the design of peptide-based immunotherapeutic drugs and vaccines, and it shows the power of the modeling approach to understand complex immune interactions.
There also exist methods which integrate peptide prediction tools with computer simulations that can provide detailed information on the immune response dynamics specific to a given pathogen's peptides.
Cancer informatics
Cancer is the result of somatic mutations which provide cancer cells with a selective growth advantage, so identifying novel mutations has become very important. Genomics and proteomics techniques are used worldwide to identify mutations related to each specific cancer and its treatment. Computational tools are used to predict the growth and surface antigens of cancerous cells. There are publications describing a targeted approach for assessing mutations and cancer risk; the algorithm CanPredict was used to indicate how closely a specific gene resembles known cancer-causing genes. Cancer immunology has received so much attention that the data related to it are growing rapidly. Protein–protein interaction networks provide valuable information on tumorigenesis in humans. Cancer proteins exhibit a network topology that is different from that of normal proteins in the human interactome. Immunoinformatics has been useful in increasing the success of tumour vaccination. Recently, pioneering work has been conducted to analyse the dynamics of the host immune system in response to artificial immunity induced by vaccination strategies. Other simulation tools use predicted cancer peptides to forecast HLA-specific anticancer immune responses.
These resources are likely to grow significantly in the near future and immunoinformatics will be a major growth area in this domain.
See also
Computational biology
Immunology
Genetics
Cancer
Immunity
References
External links
Boston University Center for Computational Immunology
York Computational Immunology Lab
Immunoinformatics Immunological Software and Web Services from Gajendra Pal Singh Raghava group
VacTarBac A web based platform for predicted vaccine candidates against major pathogens.
Bioinformatics
Branches of immunology
Genomics
Computational fields of study | Computational immunology | [
"Technology",
"Engineering",
"Biology"
] | 2,421 | [
"Biological engineering",
"Computational fields of study",
"Branches of immunology",
"Bioinformatics",
"Computing and society"
] |
17,547,137 | https://en.wikipedia.org/wiki/Arctic%20sea%20ice%20ecology%20and%20history | The Arctic sea ice covers less area in the summer than in the winter. The multi-year (i.e. perennial) sea ice covers nearly all of the central deep basins. The Arctic sea ice and its related biota are unique, and the year-round persistence of the ice has allowed the development of ice endemic species, meaning species not found anywhere else.
There are differing scientific opinions about how long perennial sea ice has existed in the Arctic. Estimates range from 700,000 to 4 million years.
Endemic species
The specialized, sympagic (i.e. ice-associated) community within the sea ice is found in the tiny (mostly <1mm diameter) liquid filled network of pores and brine channels or at the ice-water interface. The organisms living within the sea ice are consequently small (<1mm), and dominated by bacteria, and unicellular plants and animals. Diatoms, a certain type of algae, are considered the most important primary producers inside the ice with more than 200 species occurring in Arctic sea ice. In addition, flagellates contribute substantially to biodiversity, but their species number is unknown.
Protozoan and metazoan ice meiofauna, in particular turbellarians, nematodes, crustaceans and rotifers, can be abundant in all ice types year-round. In spring, larvae and juveniles of benthic animals (e.g. polychaetes and molluscs) migrate into coastal fast ice to feed on the ice algae for a few weeks.
A partially endemic fauna, comprising mainly gammaridean amphipods, thrives at the underside of ice floes. Locally and seasonally occurring at several hundred individuals per square meter, they are important mediators for particulate organic matter from the sea ice to the water column. Ice-associated and pelagic crustaceans are the major food sources for polar cod (Boreogadus saida) that occurs in close association with sea ice and acts as the major link from the ice-related food web to seals and whales.
While previous studies of coastal and offshore sea ice provided a glimpse of the seasonal and regional abundances and the diversity of the ice-associated biota, biodiversity in these communities is virtually unknown for all groups, from bacteria to metazoans. Many taxa are likely still undiscovered due to the methodological problems in analyzing ice samples. The study of diversity of ice related environments is urgently required before they ultimately change with altering ice regimes and the likely loss of the multi-year ice cover.
Dating Arctic ice
Estimates of how long the Arctic Ocean has had perennial ice cover vary. Those estimates range from 700,000 years in the opinion of Worsley and Herman, to 4 million years in the opinion of Clark. Here is how Clark refuted the theory of Worsley and Herman:
Recently, a few coccoliths have been reported from late Pliocene and Pleistocene central Arctic sediment (Worsley and Herman, 1980). Although this is interpreted to indicate episodic ice-free conditions for the central Arctic, the occurrence of ice-rafted debris with the sparse coccoliths is more easily interpreted to represent transportation of coccoliths from ice-free continental seas marginal to the central Arctic. The sediment record as well as theoretical considerations make strong argument against alternating ice-covered and ice-free....The probable Middle Cenozoic development of an ice cover, accompanied by Antarctic ice development and a late shift of the Gulf Stream to its present position, were important events that led to the development of modern climates. The record suggests that altering the present ice cover would have profound effects on future climates.
More recently, Melnikov has noted that, "There is no common opinion on the age of the Arctic sea ice cover." Experts apparently agree that the age of the perennial ice cover exceeds 700,000 years but disagree about how much older it is.
However, some research indicates that a sea area north of Greenland may have been open during the Eemian interglacial 120,000 years ago. Evidence of subpolar foraminifers (Turborotalita quinqueloba) indicate open water conditions in that area. This is in contrast to Holocene sediments that only show polar species.
See also
Arctic amplification
Arctic Climate Impact Assessment
Arctic ecology
Arctic Ocean
Arctic sea ice decline
Climate of the Arctic
Further reading
Bluhm, B., Gradinger R. (2008) "Regional Variability In Food Availability For Arctic Marine Mammals." Ecological Applications 18: S77–96 (link to free PDF)
Gradinger, R.R., K. Meiners, G.Plumley, Q. Zhang, and B.A. Bluhm (2005) "Abundance and composition of the sea-ice meiofauna in off-shore pack ice of the Beaufort Gyre in summer 2002 and 2003." Polar Biology 28: 171 – 181
Melnikov I.A.; Kolosova E.G.; Welch H.E.; Zhitina L.S. (2002) "Sea ice biological communities and nutrient dynamics in the Canada Basin of the Arctic Ocean." Deep Sea Res 49: 1623–1649.
Christian Nozais, Michel Gosselin, Christine Michel, Guglielmo Tita (2001) "Abundance, biomass, composition and grazing impact of the sea-ice meiofauna in the North Water, northern Baffin Bay." Mar Ecol Progr Ser 217: 235–250
Bluhm BA, Gradinger R, Piraino S. 2007. "First record of sympagic hydroids (Hydrozoa, Cnidaria) in Arctic coastal fast ice." Polar Biology 30: 1557–1563.
Horner, R. (1985) Sea Ice Biota. CRC Press.
Melnikov, I. (1997) The Arctic Sea Ice Ecosystem. Gordon and Breach Science Publishers.
Thomas, D., Dieckmann, G. (2003) Sea Ice. An Introduction to its Physics, Chemistry, Biology and Geology. Blackwell.
Footnotes
Environment of the Arctic
Fauna of the Arctic
Sea ice | Arctic sea ice ecology and history | [
"Physics"
] | 1,266 | [
"Physical phenomena",
"Earth phenomena",
"Sea ice"
] |
17,548,151 | https://en.wikipedia.org/wiki/Comparison%20of%20real-time%20operating%20systems | This is a list of real-time operating systems (RTOSs). This is an operating system in which the time taken to process an input stimulus is less than the time lapsed until the next input stimulus of the same type.
References
External links
2024 RTOS Performance Report (FreeRTOS / ThreadX / PX5 / Zephyr) - Beningo Embedded Group
2013 RTOS Comparison (Nucleus / ThreadX / ucOS / Unison) - Embedded Magazine
Embedded operating systems
Real-time operating systems | Comparison of real-time operating systems | [
"Technology"
] | 105 | [
"Real-time computing",
"Real-time operating systems"
] |
17,548,688 | https://en.wikipedia.org/wiki/Chloroformic%20acid | Chloroformic acid is a chemical compound with the formula . It is the single acyl-halide derivative of carbonic acid (phosgene is the double acyl-halide derivative). Chloroformic acid is also structurally related to formic acid, in a way that the non-acidic hydrogen of formic acid is replaced by chlorine. Despite the similar name, it is very different from chloroform. It is described as unstable.
Chloroformic acid itself is too unstable to be handled for chemical reactions. However, many esters of this carboxylic acid are stable and these chloroformates are important reagents in organic chemistry. They are used to prepare mixed carboxylic acid anhydrides used in peptide synthesis.
Important chloroformate esters include 4-nitrophenyl chloroformate, fluorenylmethyloxycarbonylchloride, benzyl chloroformate and ethyl chloroformate.
See also
Chloroacetic acids
Dichloroacetic acid
Trichloroacetic acid
References
Carboxylic acids
Chloroformates | Chloroformic acid | [
"Chemistry"
] | 255 | [
"Carboxylic acids",
"Functional groups"
] |
17,549,172 | https://en.wikipedia.org/wiki/Burnishing%20%28metal%29 | Burnishing is the plastic deformation of a surface due to sliding contact with another object. It smooths the surface and makes it shinier. Burnishing may occur on any sliding surface if the contact stress locally exceeds the yield strength of the material. The phenomenon can occur both unintentionally as a failure mode, and intentionally as part of a metalworking or manufacturing process. It is a squeezing operation under cold working.
Failure mode (unintentionally)
The action of a hardened ball against a softer, flat plate illustrates the process of burnishing. If the ball is pushed directly into the plate, stresses develop in both objects around the area where they contact. As this normal force increases, both the ball and the plate's surfaces deform.
The deformation caused by the hardened ball increases with the magnitude of the force pressing against it. If the force on it is small, when the force is released both the ball and plate's surface will return to their original, undeformed shape. In that case, the stresses in the plate are always less than the yield strength of the material, so the deformation is purely elastic. Since it was given that the flat plate is softer than the ball, the plate's surface will always deform more.
If a larger force is used, there will also be plastic deformation and the plate's surface will be permanently altered. A bowl-shaped indentation will be left behind, surrounded by a ring of raised material that was displaced by the ball. The stresses between the ball and the plate are described in more detail by Hertzian stress theory.
Dragging the ball across the plate will have a different effect than pressing. In that case, the force on the ball can be decomposed into two component forces: one normal to the plate's surface, pressing it in, and the other tangential, dragging it along. As the tangential component is increased, the ball will start to slide along the plate. At the same time, the normal force will deform both objects, just as with the static situation. If the normal force is low, the ball will rub against the plate but not permanently alter its surface. The rubbing action will create friction and heat, but it will not leave a mark on the plate. However, as the normal force increases, eventually the stresses in the plate's surface will exceed its yield strength. When this happens the ball will plow through the surface and create a trough behind it. The plowing action of the ball is burnishing. Burnishing also occurs when the ball can rotate, as would happen in the above scenario if another flat plate was brought down from above to induce downwards loading, and at the same time to cause rotation and translation of the ball, as in the case of a ball bearing.
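The transition from elastic rubbing to plowing described above can be estimated with the classical Hertz formulas for a ball pressed on a flat. A minimal sketch follows; the material values and the "yield begins near p_max ≈ 1.6 × yield strength" rule of thumb are illustrative assumptions, not figures from this article:

```python
import math

def hertz_ball_on_flat(force_n, ball_radius_m, e1, nu1, e2, nu2):
    """Hertzian contact of a sphere pressed on a flat plate.

    Returns the contact-circle radius a and the peak contact pressure p_max.
    """
    # Combined (effective) elastic modulus of the two bodies
    e_star = 1.0 / ((1.0 - nu1**2) / e1 + (1.0 - nu2**2) / e2)
    # Contact radius: a = (3 F R / 4 E*)^(1/3)
    a = (3.0 * force_n * ball_radius_m / (4.0 * e_star)) ** (1.0 / 3.0)
    # Peak pressure at the centre of the contact: p_max = 3 F / (2 pi a^2)
    p_max = 3.0 * force_n / (2.0 * math.pi * a**2)
    return a, p_max

# Hypothetical case: a 5 mm radius steel ball pressed on an aluminium plate
a, p_max = hertz_ball_on_flat(50.0, 0.005, 210e9, 0.30, 70e9, 0.33)
# Rule of thumb (assumed): the softer body starts to yield, and burnishing
# begins, roughly when p_max exceeds about 1.6 times its yield strength.
plate_yields = p_max > 1.6 * 95e6  # 95 MPa: assumed aluminium yield strength
```

For these assumed values the peak pressure is on the order of 1 GPa, far above the assumed yield threshold, so the plate deforms plastically and the dragged ball leaves a burnished trough. Note also that p_max grows only as the cube root of the normal force, which is why light loads can stay elastic while heavier ones burnish.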
Burnishing also occurs on surfaces that conform to each other, such as between two flat plates, but it happens on a microscopic scale. Even the smoothest of surfaces will have imperfections if viewed at a high enough magnification. The imperfections that extend above the general form of a surface are called asperities, and they can plow material on another surface just like the ball dragging along the plate. The combined effect of many of these asperities produce the smeared texture that is associated with burnishing.
Effects on sliding contact
Burnishing is normally undesirable in mechanical components for a variety of reasons, sometimes simply because its effects are unpredictable. Even light burnishing will significantly alter the surface finish of a part. Initially the finish will be smoother, but with repetitive sliding action, grooves will develop on the surface along the sliding direction. The plastic deformation associated with burnishing will harden the surface and generate compressive residual stresses. Although these properties are usually advantageous, excessive burnishing leads to sub-surface cracks which cause spalling, a phenomenon where the upper layer of a surface flakes off of the bulk material.
Burnishing may also affect the performance of a machine. The plastic deformation associated with burnishing creates greater heat and friction than from rubbing alone. This reduces the efficiency of the machine and limits its speed. Furthermore, plastic deformation alters the form and geometry of the part. This reduces the precision and accuracy of the machine. The combination of higher friction and degraded form often leads to a runaway situation that continually worsens until the component fails.
To prevent destructive burnishing, sliding must be avoided, and in rolling situations, loads must be beneath the spalling threshold. In the areas of a machine that slide with respect to each other, roller bearings can be inserted so that the components are in rolling contact instead of sliding. If sliding cannot be avoided, then a lubricant should be added between the components. The purpose of the lubricant in this case is to separate the components with a lubricant film so they cannot contact. The lubricant also distributes the load over a larger area, so that the local contact forces are not as high. If there was already a lubricant, its film thickness must be increased; usually this can be accomplished by increasing the viscosity of the lubricant.
In manufacturing (intentionally)
Burnishing is not always unwanted. If it occurs in a controlled manner, it can have desirable effects. Burnishing processes are used in manufacturing to improve the size, shape, surface finish, or surface hardness of a workpiece. It is essentially a forming operation that occurs on a small scale. The benefits of burnishing often include combatting fatigue failure, preventing corrosion and stress corrosion, texturing surfaces to eliminate visual defects, closing porosity, and creating compressive residual stresses at the surface.
There are several forms of burnishing processes, the most common being roller burnishing and ball burnishing (a subset of which is also referred to as ballizing). In both cases, a burnishing tool runs against the workpiece and plastically deforms its surface. In some instances of the latter case (and always in ballizing) it rubs; in the former it generally rotates and rolls. The workpiece may be at ambient temperature, or heated to reduce the forces and wear on the tool. The tool is usually hardened and coated with special materials to increase its life.
Ball burnishing, or ballizing, is a replacement for other bore finishing operations such as grinding, honing, or polishing. A ballizing tool consists of one or more over-sized balls that are pushed through a hole. The tool is similar to a broach, but instead of cutting away material, it plows it out of the way.
Ball burnishing is also used as a deburring operation. It is especially useful for removing the burr in the middle of a through hole that was drilled from both sides.
Ball burnishing tools of another type are sometimes used in CNC milling centres to follow a ball-nosed milling operation: the hardened ball is applied along a zig-zag toolpath in a holder similar to a ball-point pen, except that the 'ink' is pressurised, recycled lubricant. This combines the productivity of a machined finish which is achieved by a 'semi-finishing' cut, with a better finish than obtainable with slow and time-consuming finish cuts. The feed rate for burnishing is that associated with 'rapid traverse' rather than finish machining.
Roller burnishing, or surface rolling, is used on cylindrical, conical, or disk shaped workpieces. The tool resembles a roller bearing, but the rollers are generally very slightly tapered so that their envelope diameter can be accurately adjusted. The rollers typically rotate within a cage, as in a roller bearing. Typical applications for roller burnishing include hydraulic system components, shaft fillets, and sealing surfaces.
Very close control of size can be exercised.
Burnishing also occurs to some extent in machining processes. In turning, burnishing occurs if the cutting tool is not sharp, if a large negative rake angle is used, if a very small depth of cut is used, or if the workpiece material is gummy. As a cutting tool wears, it becomes more blunt and the burnishing effect becomes more pronounced. In grinding, since the abrasive grains are randomly oriented and some are not sharp, there is always some amount of burnishing. This is one reason the grinding is less efficient and generates more heat than turning. In drilling, burnishing occurs with drills that have lands to burnish the material as it drills into it. Regular twist drills or straight fluted drills have 2 lands to guide them through the hole. On burnishing drills there are 4 or more lands, similar to reamers.
Burnish setting, also known as flush, gypsy, or shot setting, is a setting technique used in stonesetting. A space is drilled, into which a stone is inserted such that the girdle of the stone, the point of maximum diameter, is just below the surface of the metal. A burnishing tool is used to push metal all around the stone to hold the stone and give a flush appearance, with a burnished edge around it. This type of setting has a long history but is gaining a resurgence in contemporary jewelry.
See also
Low plasticity burnishing
References
External links
Metal Burnishing (Cutlery, Pewter, Silver) Spons' Workshop
Mechanical engineering
Mechanical failure modes
Metalworking | Burnishing (metal) | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 1,911 | [
"Structural engineering",
"Mechanical failure modes",
"Applied and interdisciplinary physics",
"Technological failures",
"Mechanical engineering",
"Mechanical failure"
] |
17,550,278 | https://en.wikipedia.org/wiki/SN%202008D | SN 2008D is a supernova detected with NASA's Swift X-ray telescope. The explosion of the supernova precursor star, in the spiral galaxy NGC 2770 (88 million light years away (27 Mpc), was detected on January 9, 2008, by Carnegie-Princeton fellows Alicia Soderberg and Edo Berger, and Albert Kong and Tom Maccarone independently using Swift. They alerted eight other orbiting and ground-based observatories to record the event. This was the first time that astronomers have ever observed a supernova as it occurred.
The supernova was determined to be of Type Ibc. The velocities measured from SN2008D indicated expansion rates of more than 10,000 kilometers per second. The explosion was off-center, with gas on one side of the explosion moving outward faster than on the other. This was the first time the X-ray emission pattern of a supernova (which only lasted about five minutes) was captured at the moment of its birth. Now that it is known what X-ray pattern to look for, the next generation of X-ray satellites is expected to find hundreds of supernovae every year exactly when they explode, which will allow searches for neutrino and gravitational wave bursts that are predicted to accompany the collapse of stellar cores and the birth of neutron stars.
See also
List of supernovae
History of supernova observation
List of supernova remnants
List of supernova candidates
References
External links
Light curves and spectra on the Open Supernova Catalog
20080109
Supernovae
Lynx (constellation) | SN 2008D | [
"Chemistry",
"Astronomy"
] | 322 | [
"Supernovae",
"Lynx (constellation)",
"Astronomical events",
"Constellations",
"Explosions"
] |
17,550,418 | https://en.wikipedia.org/wiki/Immigration%20%28European%20Economic%20Area%29%20Regulations%202006 | The Immigration (European Economic Area) Regulations 2006 (or EEA Regulations for short), amended by SI 2009/1117, SI 2011/1247 and SI 2015/694 and which have now been mostly repealed and superseded by the Immigration (European Economic Area) Regulations 2016, was a piece of British legislation which implemented the right of free movement of European Economic Area (EEA) nationals and their family members in the United Kingdom. It is based on Directive 2004/38/EC. It allows EEA citizens and their family members to live and work in the UK without explicit permission. Although Swiss citizens are covered by a separate bilateral agreement; they are treated basically the same as EEA nationals. Family members may need a special entry clearance (the EEA family permit) to enter the UK.
Legal context
The basis of the Immigration (EEA) Regulations 2006 is Directive 2004/38/EC. Member states are bound by the EC treaties to implement Directives into national law. However, a significant body of case law (or precedent), much of it predating the Directive, and the historical development (see Freedom of movement for workers) must also be taken into account to correctly interpret EU law. Still, ambiguities in the Directive and misinterpretations by the member states exist, which may require further clarification through national courts and the European Court of Justice.
Terminology and applications
The EEA Regulations define a number of terms in addition to the terms in the Directive 2004/38.
Core and extended family members
The definition of a core family member (of an EEA national) includes only a spouse or civil partner, children under 21, dependent children of any age, and dependent parents. A person outside of this definition (especially an unmarried partner) may fall under the category of an extended family member. These include dependants of the EU citizen, members of the household, and a partner in a "durable relationship". While the Directive 2004/38 requires member states to "facilitate entry" for extended family members, the details are not defined. The Directive does not appear to grant any rights to extended family members.
In the EEA Regulations, the acceptance of extended family members is not explicit. UK regulations have specific criteria for extended family members, including unmarried and same sex partners. Once an extended family member has been issued with an EEA family permit, Residence Card, or Residence Certificate, they are regarded under UK regulations as family members.
Accession states
Workers from recent accession states had the right to move to the UK, but their access to the labour market was limited. The details were defined in the Worker Registration Scheme (WRS). Nationals of A8 countries ceased to be subject to the Worker Registration Scheme, and could benefit from free movement under the Directive, once 12 months of employment with a single employer had been completed. The transitional WRS became obsolete in 2011, when A8 workers gained the same rights as all other EEA nationals.
Completeness
The implementation is reasonably complete, although there are areas where the Directive has not been fully implemented. One example is a failure to correctly implement the Surinder Singh ruling of the European Court of Justice.
Another issue with the UK implementation of the Directive is that the UK has kept national immigration law (the "Immigration Rules") separate from the implementation of the European law (the "EEA Regulations"). While it is possible to switch from the UK law to the European law, this does reset the clock for acquiring permanent residence. The legal situation of extended family members during this switch is uncertain, because they have to conform to both laws. Switching from European law to UK law is possible only after the EEA citizen became settled in the UK. The EEA national is considered settled when having attained permanent residence.
References
External links
Immigration (European Economic Area) Regulations 2006 – UK Government
Immigration (European Economic Area) Regulations Latest Consolidated Version – EEARegulations.co.uk
Image of an EEA Registration Certificate (pre 2011), the Residence Card looks the same but with different text.
http://www.legislation.gov.uk/uksi/2006/1003/contents/made
European Union law
Schengen, Luxembourg
Law enforcement in Europe
Boundary treaties
Borders of the United Kingdom
International transport
Immigration law in the United Kingdom
Statutory instruments of the United Kingdom
2006 in British law
Immigration to the European Union
European Economic Area
Visa policy of the United Kingdom | Immigration (European Economic Area) Regulations 2006 | [
"Physics"
] | 896 | [
"Physical systems",
"Transport",
"International transport"
] |
17,550,575 | https://en.wikipedia.org/wiki/Earthquake%20simulation | Earthquake simulation applies a real or simulated vibrational input to a structure that possesses the essential features of a real seismic event. Earthquake simulations are generally performed to study the effects of earthquakes on man-made engineered structures, or on natural features which may present a hazard during an earthquake.
Dynamic experiments on building and non-building structures may be physical – as with shake-table testing – or virtual (based on computer simulation). In all cases, to verify a structure's expected seismic performance, researchers prefer to deal with so-called 'real time-histories', though these cannot be 'real' for a hypothetical earthquake specified by either a building code or by particular research requirements.
Shake-table testing
Studying a building's response to an earthquake is performed by putting a model of the structure on a shake-table that simulates the seismic loading. The earliest such experiments were performed more than a century ago.
Computational approaches
Another way is to evaluate the earthquake performance analytically.
The very first earthquake simulations were performed by statically applying some horizontal inertia forces, based on scaled peak ground accelerations, to a mathematical model of a building. With the further development of computational technologies, static approaches began to give way to dynamic ones.
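The static approach described above — applying horizontal inertia forces scaled from peak ground acceleration — can be sketched as the kind of "equivalent lateral force" calculation found in many building codes: a base shear proportional to the building weight, distributed over the floors. The seismic coefficient and building data below are illustrative assumptions:

```python
def equivalent_lateral_forces(floor_weights_n, floor_heights_m, seismic_coeff):
    """Static 'equivalent lateral force' sketch: base shear V = C * W,
    distributed over floors in proportion to weight * height."""
    total_weight = sum(floor_weights_n)
    base_shear = seismic_coeff * total_weight        # V = C * W
    wh = [w * h for w, h in zip(floor_weights_n, floor_heights_m)]
    total_wh = sum(wh)
    # F_i = V * (w_i * h_i) / sum_j (w_j * h_j)
    return [base_shear * x / total_wh for x in wh]

# Hypothetical 3-storey frame: equal 1000 kN floor weights at 3, 6 and 9 m,
# with an assumed seismic coefficient of 0.2 (i.e. a scaled PGA of 0.2 g)
forces = equivalent_lateral_forces([1e6, 1e6, 1e6], [3.0, 6.0, 9.0], 0.2)
```

The story forces sum to the base shear, and upper floors receive larger forces, reflecting the greater inertia forces that develop higher in a swaying building — the simplification that dynamic analyses later replaced.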
Traditionally, numerical simulation and physical tests have been uncoupled and performed separately. So-called hybrid testing systems employ rapid, parallel analyses using both physical and computational tests.
See also
Seismic analysis
References
External links
Network for Earthquake Engineering Simulation (NEES)
AEM Earthquake Simulation
Building
Earthquake engineering | Earthquake simulation | [
"Engineering"
] | 307 | [
"Structural engineering",
"Building",
"Construction",
"Civil engineering",
"Earthquake engineering"
] |
17,551,338 | https://en.wikipedia.org/wiki/CoRoT-3b | CoRoT-3b (formerly known as CoRoT-Exo-3b) is a brown dwarf or massive extrasolar planet with a mass 21.66 times that of Jupiter. The object orbits an F-type star in the constellation of Aquila. The orbit is circular and takes 4.2568 days to complete. It was discovered by the French-led CoRoT mission which detected the dimming of the parent star's light as CoRoT-3b passes in front of it (a situation called a transit).
Physical properties
The mass of CoRoT-3b was determined by the radial velocity method, which involves detecting the Doppler shift of the parent star's spectrum as it moves towards and away from Earth as a result of the orbiting companion. This method usually gives only a lower limit on the object's true mass: the measured quantity is the true mass multiplied by the sine of the inclination angle between the normal vector to the orbital plane of the companion and the line of sight between Earth and the star, an angle which in general is unknown. In the case of CoRoT-3b, however, the transits reveal the inclination angle, and thus the true mass can be determined: 21.66 times the mass of the planet Jupiter.
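The sin i degeneracy can be illustrated numerically: radial velocities alone give only m·sin i, and a transit pins the inclination near 90°, making the correction negligible. A small sketch (the inclination angles are assumed purely for illustration, not measured values):

```python
import math

def true_mass(m_sin_i, inclination_deg):
    """Recover a companion's true mass from the RV 'minimum mass' m*sin(i)."""
    return m_sin_i / math.sin(math.radians(inclination_deg))

# A transiting companion must orbit nearly edge-on (i close to 90 degrees),
# so the sin(i) correction is tiny: the minimum mass is essentially the
# true mass. The 86-degree value here is an assumed illustration.
m_transiting = true_mass(21.66, 86.0)
# Without a transit, the same RV signal could hide a heavier companion:
m_low_inclination = true_mass(21.66, 30.0)  # twice the minimum mass
```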
As CoRoT-3b is a transiting object, its radius can be calculated from the amount of light blocked when it passes in front of the star and an estimate of the stellar radius. When CoRoT-3b was originally discovered, it was believed to have a radius significantly smaller than that of Jupiter. This would have implied it had properties intermediate between those of planets and brown dwarfs. Later more detailed analysis revealed that the object's radius is similar to that of Jupiter, which fits with the expected properties of a brown dwarf with the mass of CoRoT-3b.
The mean density of CoRoT-3b is 26,400 kg/m3, greater than that of osmium under standard conditions. This high density is reached because of the extreme compression of matter in the object's interior: in fact, the radius of CoRoT-3b is in agreement with predictions for an object composed mainly of hydrogen. The surface gravity is correspondingly high, over 50 times the gravity felt at the surface of the Earth.
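The density and surface-gravity figures above follow directly from the mass and a roughly Jupiter-sized radius. A quick consistency check (the exact radius, taken here as 1.0 Jupiter radii, is an assumption based on the article's "radius similar to that of Jupiter"):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_JUP = 1.898e27   # Jupiter's mass, kg
R_JUP = 6.9911e7   # Jupiter's mean radius, m

mass = 21.66 * M_JUP   # CoRoT-3b's mass, from the article
radius = 1.0 * R_JUP   # assumed: "radius similar to that of Jupiter"

density = mass / ((4.0 / 3.0) * math.pi * radius**3)  # mean density, kg/m^3
gravity = G * mass / radius**2                        # surface gravity, m/s^2
g_ratio = gravity / 9.81                              # in Earth gravities
```

With a radius of exactly one Jupiter radius this gives a density near 29,000 kg/m³ and a surface gravity of roughly 57 g; the published 26,400 kg/m³ corresponds to a radius a few per cent larger than Jupiter's. Either way the object comes out denser than osmium and with a surface gravity above 50 times Earth's, consistent with the article.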
A later study called this density into question using data from Gaia data release 2, arriving at a lower density of kg/m3, but finding the exoplanet KELT-1b to be denser at kg/m3.
A 2012 study, utilizing the Rossiter–McLaughlin effect, determined that the planetary orbit is mildly misaligned with the rotational axis of the star, with a misalignment of 37.6°.
Classification
The issue of whether CoRoT-3b is a planet or a brown dwarf depends on the definition chosen for these terms. According to one definition, a brown dwarf is an object capable of fusing deuterium, a process which occurs in objects more massive than 13 times Jupiter's mass. According to this definition, which is the one adopted by the International Astronomical Union's Working Group on Extrasolar Planets, CoRoT-3b is a brown dwarf. However, some models of planet formation predict that planets with masses up to 25–30 Jupiter masses can form via core accretion. If this formation-based distinction between brown dwarfs and planets is used, the status of CoRoT-3b becomes less clear as the method of formation for this object is not known. The issue is clouded further by the orbital properties of the object: brown dwarfs located close to their stars are rare (a phenomenon known as the brown-dwarf desert), while the majority of the known massive close-in planets (for example XO-3b, HAT-P-2b and WASP-14b) are in highly eccentric orbits, in contrast to the circular orbit of CoRoT-3b.
References
External links
Brown dwarfs
Aquila (constellation)
Transiting exoplanets
Hot Jupiters
Exoplanets discovered in 2008
3b | CoRoT-3b | [
"Astronomy"
] | 845 | [
"Aquila (constellation)",
"Constellations"
] |
17,551,727 | https://en.wikipedia.org/wiki/NGC%202770 | NGC 2770 is a spiral galaxy in the northern constellation of Lynx, near the northern constellation border with Cancer. It was discovered by German-born astronomer William Herschel on December 7, 1785. J. L. E. Dreyer described it as, "faint, large, much extended 150°, mottled but not resolved, 2 stars to north". NGC 2770 was the target for the first binocular image produced by the Large Binocular Telescope.
The morphological classification of SBc indicates a barred spiral with moderately-wound arms. The physical properties of this galaxy are similar to those of the Milky Way. The combined mass of stars in the galaxy is estimated at , and it has a star formation rate of yr−1. There are no apparent perturbations of the galaxy due to suspected interaction with the companion galaxy, NGC 2770B.
Supernovae
Four supernovae have been observed in NGC 2770:
SN 1999eh (type Ib, mag. 17.5) was discovered by Mark Armstrong on 12 October 1999.
SN 2007uy (type Ib, mag. 17.2) was discovered by Yoji Hirose on 31 December 2007.
SN 2008D (type Ib, mag. 17.5) was discovered by NASA's Swift X-ray telescope on 9 January 2008, while observing SN 2007uy. It was the first supernova detected by the X-rays released very early on in its formation, rather than by the optical light emitted during the later stages, which allowed the first moments of the outburst to be observed. It is possible that NGC 2770's interactions with a suspected companion galaxy may have created the massive stars causing this activity.
SN 2015bh was discovered by the Catalina Real-time Transient Survey and Stan Howerton on 7 February 2015, and was either a Type II supernova or the hyper-eruption of a luminous blue variable.
See also
List of NGC objects (2001–3000)
References
Further reading
External links
Astronomers Witness Supernova's First Moments
Barred spiral galaxies
2770
Lynx (constellation)
025806
+06-20-038
04806
09065+3319
17851207
Discoveries by William Herschel | NGC 2770 | [
"Astronomy"
] | 452 | [
"Lynx (constellation)",
"Constellations"
] |
17,553,405 | https://en.wikipedia.org/wiki/Material%20failure%20theory | Material failure theory is an interdisciplinary field of materials science and solid mechanics which attempts to predict the conditions under which solid materials fail under the action of external loads. The failure of a material is usually classified into brittle failure (fracture) or ductile failure (yield). Depending on the conditions (such as temperature, state of stress, loading rate) most materials can fail in a brittle or ductile manner or both. However, for most practical situations, a material may be classified as either brittle or ductile.
In mathematical terms, failure theory is expressed in the form of various failure criteria which are valid for specific materials. Failure criteria are functions in stress or strain space which separate "failed" states from "unfailed" states. A precise physical definition of a "failed" state is not easily quantified and several working definitions are in use in the engineering community. Quite often, phenomenological failure criteria of the same form are used to predict both brittle failure and ductile yield.
Material failure
In materials science, material failure is the loss of load-carrying capacity of a material unit. This definition implies that material failure can be examined at different scales, from microscopic to macroscopic. In structural problems, where the structural response may be beyond the initiation of nonlinear material behaviour, material failure is of profound importance for the determination of the integrity of the structure. On the other hand, due to the lack of globally accepted fracture criteria, the determination of structural damage due to material failure is still under intensive research.
Types of material failure
Material failure can be distinguished into two broad categories depending on the scale at which the material is examined:
Microscopic failure
Microscopic material failure is defined in terms of crack initiation and propagation. Such methodologies are useful for gaining insight into the cracking of specimens and simple structures under well-defined global load distributions. Failure criteria in this case are related to microscopic fracture. Some of the most popular failure models in this area are the micromechanical failure models, which combine the advantages of continuum mechanics and classical fracture mechanics. Such models are based on the concept that during plastic deformation, microvoids nucleate and grow until a local plastic neck or fracture of the intervoid matrix occurs, which causes the coalescence of neighbouring voids. Such a model, proposed by Gurson and extended by Tvergaard and Needleman, is known as GTN. Another approach, proposed by Rousselier, is based on continuum damage mechanics (CDM) and thermodynamics. Both models form a modification of the von Mises yield potential by introducing a scalar damage quantity, which represents the void volume fraction of cavities, the porosity f.
Macroscopic failure
Macroscopic material failure is defined in terms of load carrying capacity or energy storage capacity, equivalently. Li presents a classification of macroscopic failure criteria in four categories:
Stress or strain failure
Energy type failure (S-criterion, T-criterion)
Damage failure
Empirical failure
Five general levels are considered, at which the meaning of deformation and failure is interpreted differently: the structural element scale, the macroscopic scale where macroscopic stress and strain are defined, the mesoscale which is represented by a typical void, the microscale and the atomic scale. The material behavior at one level is considered as a collective of its behavior at a sub-level. An efficient deformation and failure model should be consistent at every level.
Brittle material failure criteria
Failure of brittle materials can be determined using several approaches:
Phenomenological failure criteria
Linear elastic fracture mechanics
Elastic-plastic fracture mechanics
Energy-based methods
Cohesive zone methods
Phenomenological failure criteria
The failure criteria that were developed for brittle solids were the maximum stress/strain criteria. The maximum stress criterion assumes that a material fails when the maximum principal stress in a material element exceeds the uniaxial tensile strength of the material. Alternatively, the material will fail if the minimum principal stress is less than the uniaxial compressive strength of the material. If the uniaxial tensile strength of the material is and the uniaxial compressive strength is , then the safe region for the material is assumed to be
Note that the convention that tension is positive has been used in the above expression.
The maximum strain criterion has a similar form except that the principal strains are compared with experimentally determined uniaxial strains at failure, i.e.,
The maximum principal stress and strain criteria continue to be widely used in spite of severe shortcomings.
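As a minimal sketch of the maximum principal stress criterion described above, using the tension-positive convention and hypothetical strength values:

```python
def max_principal_stress_safe(principal_stresses, sigma_t, sigma_c):
    """Safe if every principal stress lies between the uniaxial
    compressive strength (entered as a positive magnitude) and the
    uniaxial tensile strength; tension is taken as positive."""
    return all(-sigma_c <= s <= sigma_t for s in principal_stresses)

# Hypothetical brittle material: 60 MPa tensile, 200 MPa compressive strength
assert max_principal_stress_safe((50, 10, -150), 60, 200)       # safe
assert not max_principal_stress_safe((70, 0, 0), 60, 200)       # tensile failure
assert not max_principal_stress_safe((0, 0, -210), 60, 200)     # compressive failure
```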
Numerous other phenomenological failure criteria can be found in the engineering literature. The degree of success of these criteria in predicting failure has been limited. Some popular failure criteria for various type of materials are:
criteria based on invariants of the Cauchy stress tensor
the Tresca or maximum shear stress failure criterion
the von Mises or maximum elastic distortional energy criterion
the Mohr-Coulomb failure criterion for cohesive-frictional solids
the Drucker-Prager failure criterion for pressure-dependent solids
the Bresler-Pister failure criterion for concrete
the Willam-Warnke failure criterion for concrete
the Hankinson criterion, an empirical failure criterion that is used for orthotropic materials such as wood
the Hill yield criteria for anisotropic solids
the Tsai-Wu failure criterion for anisotropic composites
the Johnson–Holmquist damage model for high-rate deformations of isotropic solids
the Hoek-Brown failure criterion for rock masses
the Cam-Clay failure theory for soil
Linear elastic fracture mechanics
The approach taken in linear elastic fracture mechanics is to estimate the amount of energy needed to grow a preexisting crack in a brittle material. The earliest fracture mechanics approach for unstable crack growth is Griffiths' theory. When applied to the mode I opening of a crack, Griffiths' theory predicts that the critical stress () needed to propagate the crack is given by
where is the Young's modulus of the material, is the surface energy per unit area of the crack, and is the crack length for edge cracks or is the crack length for plane cracks. The quantity is postulated as a material parameter called the fracture toughness. The mode I fracture toughness for plane strain is defined as
where is a critical value of the far field stress and is a dimensionless factor that depends on the geometry, material properties, and loading condition. The quantity is related to the stress intensity factor and is determined experimentally. Similar quantities and can be determined for mode II and model III loading conditions.
The state of stress around cracks of various shapes can be expressed in terms of their stress intensity factors. Linear elastic fracture mechanics predicts that a crack will extend when the stress intensity factor at the crack tip is greater than the fracture toughness of the material. Therefore, the critical applied stress can also be determined once the stress intensity factor at a crack tip is known.
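Griffith's relation for unstable growth of a through crack of half-length a in plane stress is commonly written sigma_c = sqrt(2·E·gamma_s / (pi·a)). The sketch below evaluates it with illustrative glass-like values, which are assumptions, not data from this article:

```python
import math

def griffith_critical_stress(E, gamma_s, a):
    """Critical far-field stress to propagate a through crack of
    half-length a in plane stress (Griffith's theory)."""
    return math.sqrt(2.0 * E * gamma_s / (math.pi * a))

# Illustrative glass-like values (assumed):
E = 70e9        # Young's modulus, Pa
gamma_s = 1.0   # surface energy per unit area, J/m^2
a = 1e-3        # crack half-length, m

sigma_c = griffith_critical_stress(E, gamma_s, a)

# Quadrupling the crack length halves the critical stress:
assert math.isclose(griffith_critical_stress(E, gamma_s, 4 * a),
                    sigma_c / 2, rel_tol=1e-9)
```

The inverse square-root dependence on crack length is the key qualitative prediction: longer preexisting flaws fail at lower applied stress.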
Energy-based methods
The linear elastic fracture mechanics method is difficult to apply for anisotropic materials (such as composites) or for situations where the loading or the geometry are complex. The strain energy release rate approach has proved quite useful for such situations. The strain energy release rate for a mode I crack which runs through the thickness of a plate is defined as
where is the applied load, is the thickness of the plate, is the displacement at the point of application of the load due to crack growth, and is the crack length for edge cracks or is the crack length for plane cracks. The crack is expected to propagate when the strain energy release rate exceeds a critical value - called the critical strain energy release rate.
The fracture toughness and the critical strain energy release rate for plane stress are related by
where is the Young's modulus. If an initial crack size is known, then a critical stress can be determined using the strain energy release rate criterion.
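For a centre crack in plane stress, combining the stress intensity factor K_I = sigma·sqrt(pi·a) with the plane-stress relation K_Ic² = E·G_c gives a critical stress sigma_c = sqrt(E·G_c / (pi·a)). A sketch with assumed aluminium-like values:

```python
import math

def critical_stress_from_G(E, G_c, a):
    """Critical stress from the plane-stress relation K_Ic^2 = E * G_c,
    with K_I = sigma * sqrt(pi * a) for a centre crack of half-length a."""
    return math.sqrt(E * G_c / (math.pi * a))

# Assumed aluminium-like values:
E = 70e9     # Young's modulus, Pa
G_c = 20e3   # critical strain energy release rate, J/m^2
a = 5e-3     # initial crack half-length, m

sigma_c = critical_stress_from_G(E, G_c, a)
print(f"critical stress ~ {sigma_c / 1e6:.0f} MPa")
```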
Ductile material failure (yield) criteria
A yield criterion, often expressed as a yield surface or yield locus, is a hypothesis concerning the limit of elasticity under any combination of stresses. There are two interpretations of a yield criterion: one is purely mathematical, taking a statistical approach, while other models attempt to provide a justification based on established physical principles. Since stress and strain are tensor quantities they can be described on the basis of three principal directions; in the case of stress these are denoted by , , and .
The following represent the most common yield criterion as applied to an isotropic material (uniform properties in all directions). Other equations have been proposed or are used in specialist situations.
Isotropic yield criteria
Maximum principal stress theory – by William Rankine (1850). Yield occurs when the largest principal stress exceeds the uniaxial tensile yield strength. Although this criterion allows for a quick and easy comparison with experimental data it is rarely suitable for design purposes. This theory gives good predictions for brittle materials.
Maximum principal strain theory – by St. Venant. Yield occurs when the maximum principal strain reaches the strain corresponding to the yield point during a simple tensile test. In terms of the principal stresses this is determined by the equation:
Maximum shear stress theory – Also known as the Tresca yield criterion, after the French scientist Henri Tresca. This assumes that yield occurs when the shear stress exceeds the shear yield strength :
Total strain energy theory – This theory assumes that the stored energy associated with elastic deformation at the point of yield is independent of the specific stress tensor. Thus yield occurs when the strain energy per unit volume is greater than the strain energy at the elastic limit in simple tension. For a 3-dimensional stress state this is given by:
Maximum distortion energy theory (von Mises yield criterion), also referred to as octahedral shear stress theory – This theory proposes that the total strain energy can be separated into two components: the volumetric (hydrostatic) strain energy and the shape (distortion or shear) strain energy. Yield is predicted to occur when the distortion component exceeds that at the yield point for a simple tensile test.
The yield surfaces corresponding to these criteria have a range of forms. However, most isotropic yield criteria correspond to convex yield surfaces.
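The Tresca and von Mises criteria above can be sketched directly from the principal stresses; the pure-shear example below illustrates the well-known result that Tresca is the more conservative of the two (the strength value is hypothetical):

```python
import math

def von_mises_stress(s1, s2, s3):
    """Equivalent (von Mises) stress from the three principal stresses."""
    return math.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))

def tresca_yields(s1, s2, s3, yield_strength):
    """Tresca: yield when the maximum shear stress exceeds half the
    uniaxial tensile yield strength."""
    max_shear = (max(s1, s2, s3) - min(s1, s2, s3)) / 2.0
    return max_shear > yield_strength / 2.0

def von_mises_yields(s1, s2, s3, yield_strength):
    """von Mises: yield when the equivalent stress exceeds the
    uniaxial tensile yield strength."""
    return von_mises_stress(s1, s2, s3) > yield_strength

# Pure shear (s1 = -s3 = tau): Tresca predicts yield at tau = Sy/2,
# von Mises at tau = Sy/sqrt(3), so Tresca triggers first.
tau, Sy = 140.0, 250.0
assert tresca_yields(tau, 0.0, -tau, Sy)
assert not von_mises_yields(tau, 0.0, -tau, Sy)
```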
Anisotropic yield criteria
When a metal is subjected to large plastic deformations the grain sizes and orientations change in the direction of deformation. As a result, the plastic yield behavior of the material shows directional dependency. Under such circumstances, the isotropic yield criteria such as the von Mises yield criterion are unable to predict the yield behavior accurately. Several anisotropic yield criteria have been developed to deal with such situations.
Some of the more popular anisotropic yield criteria are:
Hill's quadratic yield criterion
Generalized Hill yield criterion
Hosford yield criterion
Yield surface
The yield surface of a ductile material usually changes as the material experiences increased deformation. Models for the evolution of the yield surface with increasing strain, temperature, and strain rate are used in conjunction with the above failure criteria for isotropic hardening, kinematic hardening, and viscoplasticity. Some such models are:
the Johnson-Cook model
the Steinberg-Guinan model
the Zerilli-Armstrong model
the Mechanical threshold stress model
the Preston-Tonks-Wallace model
There is another important aspect to ductile materials - the prediction of the ultimate failure strength of a ductile material. Several models for predicting the ultimate strength have been used by the engineering community with varying levels of success. For metals, such failure criteria are usually expressed in terms of a combination of porosity and strain to failure or in terms of a damage parameter.
See also
Fracture mechanics
Fracture
Stress intensity factor
Yield (engineering)
Yield surface
Plasticity (physics)
Structural failure
Strength of materials
Ultimate failure
Damage mechanics
Size effect on structural strength
Concrete fracture analysis
References
Mechanical failure
Plasticity (physics)
Solid mechanics
Mechanics
Materials science
Materials degradation
Fracture mechanics | Material failure theory | [
"Physics",
"Materials_science",
"Engineering"
] | 2,421 | [
"Structural engineering",
"Solid mechanics",
"Applied and interdisciplinary physics",
"Fracture mechanics",
"Deformation (mechanics)",
"Materials science",
"Plasticity (physics)",
"Mechanics",
"nan",
"Mechanical engineering",
"Materials degradation",
"Mechanical failure"
] |
17,553,728 | https://en.wikipedia.org/wiki/China%20Aerodynamics%20Research%20and%20Development%20Center | China Aerodynamics Research and Development Center (CARDC) () was founded in 1968. It is the largest research and testing institute of aerodynamics in China, involved in the development of hypersonic missile technology. The center is located in Mianyang City, Sichuan Province. Currently there are more than 1,600 scientists and technicians working there.
The center has been on the United States Department of Commerce's Entity List since 1999.
References
External links
Official website (English language version)
Research institutes in China
Research institutes established in 1968 | China Aerodynamics Research and Development Center | [
"Astronomy"
] | 108 | [
"Outer space stubs",
"Outer space",
"Astronomy stubs"
] |
17,554,860 | https://en.wikipedia.org/wiki/Eagle%20Test%20Systems | Eagle Test Systems is a supplier of automatic test equipment (ATE) and operates as a business unit within the Teradyne Semiconductor Test Division. Eagle's test equipment was designed to address volume production. Customers, including semiconductor manufacturers and assembly and test subcontractors, use the products to test analog, a combination of digital and analog, known as mixed-signal, and radio frequency (RF) semiconductors.
History
Eagle Test Systems was founded by Len Foxman and began providing test solutions in 1976. Since October 1, 2003, it has delivered over 600 test systems to more than 60 customers worldwide. Prior to the acquisition by Teradyne, its global headquarters were located at its manufacturing facility in Buffalo Grove, Illinois. From there, Eagle Test Systems operated sales, service and engineering support facilities in the United States through regional offices and globally through offices in Korea, Singapore, Taiwan, Italy, Germany, China, Malaysia and the Philippines.
Eagle Test Systems completed their initial public offering on March 14, 2006 (original stock ticker EGLT). On November 14, 2008, Eagle Test was acquired by Teradyne.
Competition
Teradyne's principal competitors in the automatic test equipment business are:
Advantest
SPEA (company)
LTX - Credence Systems Corporation (including its NPTest acquisition which was formerly Schlumberger Limited, formerly Fairchild Test Systems Group, a division of Fairchild Camera and Instrument)
Verigy (formerly a division of Hewlett-Packard and then Agilent Technologies)
References
Buffalo Grove, Illinois
Companies based in Lake County, Illinois
Equipment semiconductor companies
1976 establishments in Massachusetts | Eagle Test Systems | [
"Engineering"
] | 326 | [
"Equipment semiconductor companies",
"Semiconductor fabrication equipment"
] |
17,555,165 | https://en.wikipedia.org/wiki/Bs%20space | In the mathematical field of functional analysis, the space bs consists of all infinite sequences (xi) of real numbers or complex numbers such that
is finite. The set of such sequences forms a normed space with the vector space operations defined componentwise, and the norm given by
Furthermore, with respect to the metric induced by this norm, bs is complete: it is a Banach space.
The space of all sequences such that the series
is convergent (possibly conditionally) is denoted by cs. This is a closed vector subspace of bs, and so is also a Banach space with the same norm.
The space bs is isometrically isomorphic to the space of bounded sequences via the mapping
Furthermore, the space of convergent sequences c is the image of cs under
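For a finitely supported sequence, the bs norm is simply the largest absolute partial sum, which a short sketch makes concrete (the function name is illustrative):

```python
from itertools import accumulate

def bs_norm(x):
    """sup_n |x_1 + ... + x_n| for a finite sequence -- the bs norm
    restricted to finitely supported sequences."""
    return max(abs(s) for s in accumulate(x))

# The alternating sequence 1, -1, 1, -1, ... has partial sums 1, 0, 1, 0, ...
# which are bounded, so it lies in bs even though the series does not
# converge absolutely:
x = [(-1) ** k for k in range(100)]
assert bs_norm(x) == 1

# The constant sequence 1, 1, 1, ... has partial sums 1, 2, ..., n, so
# its truncations have ever-growing norm -- it does not belong to bs:
assert bs_norm([1] * 50) == 50
```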
See also
References
Banach spaces
Functional analysis | Bs space | [
"Mathematics"
] | 165 | [
"Mathematical analysis",
"Functions and mappings",
"Functional analysis",
"Mathematical analysis stubs",
"Mathematical objects",
"Mathematical relations"
] |
17,555,222 | https://en.wikipedia.org/wiki/Cyamemazine | Cyamemazine (Tercian), also known as cyamepromazine, is a typical antipsychotic drug of the phenothiazine class which was introduced by Theraplix in France in 1972 and later in Portugal as well.
Medical use
It is used for the treatment of schizophrenia and, especially, for psychosis-associated anxiety, due to its unique anxiolytic efficacy.
It is also used to reduce anxiety associated with benzodiazepine withdrawal syndrome and anxiety in depression with suicidal tendency.
Side effects
Here are some of the most common side effects and related incidence:
Sedation (20%)
Vertigo (7.9%)
Constipation (4%)
Dyskinesia (4.4%)
Dryness of mouth (5.9%)
Hypotension (7.4%)
Tachycardia (3.2%)
Mechanism
Cyamemazine differs from other phenothiazine neuroleptics in that aside from the usual profile of dopamine, α1-adrenergic, H1, and mACh receptor antagonism, it additionally produces potent blockade of several serotonin receptors, including 5-HT2A, 5-HT2C, and 5-HT7. These actions have been implicated in cyamemazine's anxiolytic effects (5-HT2C) and lack of extrapyramidal side effects (5-HT2A), and despite being classified as a typical antipsychotic, it actually behaves like an atypical antipsychotic.
Synthesis
2-Cyanophenothiazine [38642-74-9] (1)
3-Chloro-2-methylpropyl(dimethyl)amine [23349-86-2] (2)
References
Alpha-1 blockers
Dimethylamino compounds
Dopamine antagonists
H1 receptor antagonists
M1 receptor antagonists
M2 receptor antagonists
M3 receptor antagonists
M4 receptor antagonists
M5 receptor antagonists
Nitriles
Phenothiazines
Serotonin receptor antagonists
Typical antipsychotics | Cyamemazine | [
"Chemistry"
] | 453 | [
"Nitriles",
"Functional groups"
] |
17,555,258 | https://en.wikipedia.org/wiki/Bakery%20mix | Bakery mix is an add water only pre-mixed baking product consisting of flour, dry milk, shortening, salt, and baking powder (a leavening agent). A bakery mix can be used to make a wide variety of baked goods from pizza dough to dumplings to pretzels. The typical flavor profile of bakery mix differs from that of pancake mix. Bakery mixes do not require refrigeration.
History
Chris Rutt and Charles Underwood of the Pearl Milling Company developed Aunt Jemima, the first "ready mix." The baking mix was designed so that cooks needed only to add water to make pancake batter.
Carl Smith, a sales executive at General Mills, got the idea of selling a pre-mixed blend of flour, salt, baking powder, and lard to create biscuits from a chef on a train in 1930. After Smith pitched the idea for a biscuit mix to sell, head chemist of General Mills, Charlie Kress, created Bisquick. Bisquick entered the market in 1931. In the 1940s, Bisquick began using "a world of baking in a box," and printed recipes for other baked goods such as dumplings, muffins, and coffee cake.
In 1933, Pittsburgh molasses company, P. Duff and Sons, patented the first cake mix after blending dehydrated molasses with dehydrated flour, sugar, eggs, and other ingredients. P. Duff and Sons created the cake mix to move surplus molasses, requiring 100 pounds of molasses for every 100 pounds of wheat flour.
After World War Two, flour companies such as General Mills and Pillsbury began selling cake mixes due to the surplus of flour. By the 1950s, there were hundreds of cake mix companies. In 1948, Pillsbury introduced the first chocolate cake mix.
Use of Eggs
The Duff company patented a baking mixture requiring fresh eggs in 1935, writing in the patent application, "The housewife and the purchasing public in general seem to prefer fresh eggs and hence the use of dried or powdered eggs is somewhat of a handicap from a psychological standpoint."
Other companies continued to use powdered eggs in their bakery mixes, and it was not until sales flattened between 1956 and 1960 that major food companies revised their formulas to incorporate fresh eggs. Ernest Dichter, an analyst for General Mills, interviewed women who used the cake mixes and reported that the simplicity of the mixes made women feel too self-indulgent because there was not enough work involved.
While some say that the difference made by using fresh eggs was purely psychological, others argue that their inclusion simply produced better products. Cake mixes made with dried eggs frequently tasted of egg, stuck to the pan, and had poorer texture.
References
Food ingredients | Bakery mix | [
"Technology"
] | 569 | [
"Food ingredients",
"Components"
] |
17,555,375 | https://en.wikipedia.org/wiki/Tsai%E2%80%93Wu%20failure%20criterion | The Tsai–Wu failure criterion is a phenomenological material failure theory which is widely used for anisotropic composite materials which have different strengths in tension and compression. The Tsai-Wu criterion predicts failure when the failure index in a laminate reaches 1. This failure criterion is a specialization of the general quadratic failure criterion proposed by Gol'denblat and Kopnov and can be expressed in the form
where and repeated indices indicate summation, and are experimentally determined material strength parameters. The stresses are expressed in Voigt notation. If the failure surface is to be closed and convex, the interaction terms must satisfy
which implies that all the terms must be positive.
Tsai–Wu failure criterion for orthotropic materials
For orthotropic materials with three planes of symmetry oriented with the coordinate directions, if we assume that and that there is no coupling between the normal and shear stress terms (and between the shear terms), the general form of the Tsai–Wu failure criterion reduces to
Let the failure strength in uniaxial tension and compression in the three directions of anisotropy be . Also, let us assume that the shear strengths in the three planes of symmetry are (and have the same magnitude on a plane even if the signs are different). Then the coefficients of the orthotropic Tsai–Wu failure criterion are
The coefficients can be determined using equibiaxial tests. If the failure strengths in equibiaxial tension are then
The near impossibility of performing these equibiaxial tests has led to there being a severe lack of experimental data on the parameters .
It can be shown that the Tsai-Wu criterion is a particular case of the generalized Hill yield criterion.
Tsai-Wu failure criterion for transversely isotropic materials
For a transversely isotropic material, if the plane of isotropy is 1–2, then
Then the Tsai–Wu failure criterion reduces to
where . This theory is applicable to a unidirectional composite lamina where the fiber direction is in the '3'-direction.
In order to maintain closed and ellipsoidal failure surfaces for all stress states, Tsai and Wu also proposed stability conditions which take the following form for transversely isotropic materials
Tsai–Wu failure criterion in plane stress
For the case of plane stress with , the Tsai–Wu failure criterion reduces to
The strengths in the expressions for may be interpreted, in the case of a lamina, as
= transverse compressive strength, = transverse tensile strength, = longitudinal compressive strength, = longitudinal tensile strength, = longitudinal shear strength, = transverse shear strength.
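A sketch of the plane-stress failure index using the standard coefficient forms (F1 = 1/Xt − 1/Xc, F11 = 1/(Xt·Xc), and so on); the interaction term F12 here uses the common estimate −0.5·sqrt(F11·F22), which is an assumption rather than part of this article, and the lamina strengths are hypothetical:

```python
import math

def tsai_wu_index(s1, s2, s6, Xt, Xc, Yt, Yc, S):
    """Tsai-Wu failure index in plane stress; failure is predicted
    when the index reaches 1. Compressive strengths Xc, Yc are
    entered as positive magnitudes."""
    F1 = 1.0 / Xt - 1.0 / Xc
    F2 = 1.0 / Yt - 1.0 / Yc
    F11 = 1.0 / (Xt * Xc)
    F22 = 1.0 / (Yt * Yc)
    F66 = 1.0 / S ** 2
    F12 = -0.5 * math.sqrt(F11 * F22)   # common estimate (assumption)
    return (F1 * s1 + F2 * s2 + F11 * s1 ** 2 + F22 * s2 ** 2
            + F66 * s6 ** 2 + 2.0 * F12 * s1 * s2)

# Hypothetical carbon/epoxy lamina strengths, MPa:
Xt, Xc, Yt, Yc, S = 1500.0, 1200.0, 50.0, 250.0, 70.0
assert tsai_wu_index(100.0, 10.0, 10.0, Xt, Xc, Yt, Yc, S) < 1.0  # safe
assert tsai_wu_index(0.0, 60.0, 0.0, Xt, Xc, Yt, Yc, S) >= 1.0    # transverse failure
```

Note how the linear terms let the criterion distinguish tension from compression, which symmetric quadratic criteria such as von Mises cannot do.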
Tsai–Wu criterion for foams
The Tsai–Wu criterion for closed cell PVC foams under plane strain conditions may be expressed as
where
For DIAB Divinycell H250 PVC foam (density 250 kg/cu.m.), the values of the strengths are MPa, MPa, MPa, MPa.
For aluminum foams in plane stress, a simplified form of the Tsai–Wu criterion may be used if we assume that the tensile and compressive failure strengths are the same and that there are no shear effects on the failure strength. This criterion may be written as
where
Tsai–Wu criterion for bone
The Tsai–Wu failure criterion has also been applied to trabecular bone/cancellous bone with varying degrees of success. The quantity has been shown to have a nonlinear dependence on the density of the bone.
See also
Material failure theory
Yield (engineering)
References
Engineering failures
Plasticity (physics)
Solid mechanics
Mechanics | Tsai–Wu failure criterion | [
"Materials_science",
"Technology",
"Engineering"
] | 747 | [
"Systems engineering",
"Reliability engineering",
"Deformation (mechanics)",
"Technological failures",
"Plasticity (physics)",
"Engineering failures",
"Civil engineering"
] |
17,556,082 | https://en.wikipedia.org/wiki/Niosome | Niosomes are vesicles composed of non-ionic surfactants, incorporating cholesterol as an excipient. Niosomes are utilized for drug delivery to specific sites to achieve desired therapeutic effects. Structurally, niosomes are similar to liposomes as both consist of a lipid bilayer. However, niosomes are more stable than liposomes during formation processes and storage. Niosomes trap hydrophilic and lipophilic drugs, either in an aqueous compartment (for hydrophilic drugs) or in a vesicular membrane compartment composed of lipid material (for lipophilic drugs).
Structure
Niosomes are microscopic lamellar structures formed by non-ionic surfactants and cholesterol. They exhibit a bilayer structure, with hydrophilic ends facing outward and hydrophobic ends facing inward. This structure suits them to drug delivery applications: they can encapsulate both hydrophilic and hydrophobic drugs, enhancing drug stability and bioavailability. They can be adapted for tailored drug release and have attracted interest in pharmaceuticals, cosmetics, and agriculture for their biocompatibility and versatile properties.
Methods of preparation
Various methods used to prepare liposomes are also suitable for niosome preparation, such as the ether injection method, the handshaking method, the reverse phase evaporation method, the trans-membrane pH gradient method, the "bubble" method, the microfluidization method, formation from proteasomes, the thin-film hydration method, the heating method, the freeze and thaw method, and the dehydration-rehydration method.
Uses
Niosomes are used as biodegradable and non-immunogenic drug delivery compounds, as they have a low toxicity risk in biological systems. They can also be used to entrap hydrophilic pharmaceuticals within aqueous compartments or lipophilic drugs into vesicular bilayer membranes. Niosomes shield drug molecules from the biological environment, which can be utilized to improve the therapeutic performance of various drug molecules. Additionally, they can be used in a sustained drug delivery system to more directly affect target cells and delay clearance from circulation. Niosomes are used in a variety of applications, including gene delivery, drug targeting, antineoplastic treatment, delivery of peptide drugs, carriers for hemoglobin, transdermal drug delivery systems, and cosmetics. They are also being studied for their potential use as a treatment for different forms of leishmaniasis.
References
Membrane biology | Niosome | [
"Chemistry"
] | 533 | [
"Membrane biology",
"Molecular biology"
] |
17,556,569 | https://en.wikipedia.org/wiki/SN%202007uy | SN 2007uy was a supernova that occurred in the spiral galaxy NGC 2770. It was discovered by Yoji Hirose on December 31, 2007 from Chigasaki city in Japan, approximately four days after the explosion. The position of the supernova was offset east and south of the galaxy's nucleus, near a star-forming region. It was identified as a Type Ib supernova from its spectrum a week before reaching maximum, and appeared the most similar to SN 2004gq.
Emissions from SN 2007uy were detected from the X-ray to the radio band. The light from this event was heavily reddened due to intervening dust in the host galaxy. This energetic explosion released in energy and ejected a mass of . The progenitor was likely a massive star that had been stripped of its hydrogen envelope by a binary companion. There is no radio evidence of a relativistic jet of the type that would be associated with a gamma-ray burst.
While interesting in its own right, SN 2007uy was overshadowed by SN 2008D, a supernova whose burst was observed serendipitously while SN 2007uy was being studied by Swift, something unprecedented in astronomy. This second supernova occurred within ten days of the first.
References
External links
Light curves and spectra on the Open Supernova Catalog
IAUC 8908 IAU Circular announcing the discovery.
Supernovae
Lynx (constellation)
20071231 | SN 2007uy | [
"Chemistry",
"Astronomy"
] | 293 | [
"Supernovae",
"Lynx (constellation)",
"Astronomical events",
"Constellations",
"Explosions"
] |
17,557,798 | https://en.wikipedia.org/wiki/Heat%20illness | Heat illness is a spectrum of disorders due to increased body temperature. It can be caused by either environmental conditions or by exertion. It includes minor conditions such as heat cramps, heat syncope, and heat exhaustion as well as the more severe condition known as heat stroke. It can affect any or all anatomical systems. Heat illnesses include: heat stroke, heat exhaustion, heat syncope, heat edema, heat cramps, heat rash, heat tetany.
Prevention includes avoiding medications that can increase the risk of heat illness, gradual adjustment to heat, and sufficient fluids and electrolytes.
Classification
A number of heat illnesses exist including:
Heat stroke - Defined by a body temperature of greater than due to environmental heat exposure with lack of thermoregulation. Symptoms include dry skin, rapid, strong pulse and dizziness.
Heat exhaustion - Can be a precursor of heatstroke; the symptoms include heavy sweating, rapid breathing and a fast, weak pulse.
Heat syncope - Fainting or dizziness as a result of overheating.
Heat edema - Swelling of extremities due to water retention following dilation of blood vessels in response to heat.
Heat cramps - Muscle pains that happen during heavy exercise in hot weather.
Heat rash - Skin irritation from excessive sweating.
Heat tetany - Usually results from short periods of stress in intense heat. Symptoms may include hyperventilation, respiratory problems, numbness or tingling, or muscle spasms.
Overview of diseases
Hyperthermia, also known as heat stroke, becomes commonplace during periods of sustained high temperature and humidity. Older adults, very young children, and those who are sick or overweight are at a higher risk for heat-related illness. The chronically ill and elderly are often taking prescription medications (e.g., diuretics, anticholinergics, antipsychotics, and antihypertensives) that interfere with the body's ability to dissipate heat.
Heat edema presents as a transient swelling of the hands, feet, and ankles and is generally secondary to increased aldosterone secretion, which enhances water retention. When combined with peripheral vasodilation and venous stasis, the excess fluid accumulates in the dependent areas of the extremities. The heat edema usually resolves within several days after the patient becomes acclimated to the warmer environment. No treatment is required, although wearing support stockings and elevating the affected legs will help minimize the edema.
Heat rash, also known as prickly heat, is a maculopapular rash accompanied by acute inflammation and blocked sweat ducts. The sweat ducts may become dilated and may eventually rupture, producing small pruritic vesicles on an erythematous base. Heat rash affects areas of the body covered by tight clothing. If this continues for a duration of time it can lead to the development of chronic dermatitis or a secondary bacterial infection. Prevention is the best therapy. It is also advised to wear loose-fitting clothing in the heat. Once heat rash has developed, the initial treatment involves the application of chlorhexidine lotion to remove any desquamated skin. The associated itching may be treated with topical or systemic antihistamines. If infection occurs a regimen of antibiotics is required.
Heat cramps are painful, often severe, involuntary spasms of the large muscle groups used in strenuous exercise. Heat cramps tend to occur after intense exertion. They usually develop in people performing heavy exercise while sweating profusely and replenishing fluid loss with non-electrolyte-containing water. This is believed to lead to hyponatremia that induces cramping in stressed muscles. Rehydration with salt-containing fluids provides rapid relief. Patients with mild cramps can be given oral 0.2% salt solutions, while those with severe cramps require IV isotonic fluids. The many sport drinks on the market are a good source of electrolytes and are readily accessible.
Heat syncope is related to heat exposure that produces orthostatic hypotension. This hypotension can precipitate a near-syncopal episode. Heat syncope is believed to result from intense sweating, which leads to dehydration, followed by peripheral vasodilation and reduced venous blood return in the face of decreased vasomotor control. Management of heat syncope consists of cooling and rehydration of the patient using oral rehydration therapy (sport drinks) or isotonic IV fluids. People who experience heat syncope should avoid standing in the heat for long periods of time. They should move to a cooler environment and lie down if they recognize the initial symptoms. Wearing support stockings and engaging in deep knee-bending movements can help promote venous blood return.
Heat exhaustion is considered by experts to be the forerunner of heat stroke (hyperthermia). It may even resemble heat stroke, with the difference being that the neurologic function remains intact. Heat exhaustion is marked by excessive dehydration and electrolyte depletion. Symptoms may include diarrhea, headache, nausea and vomiting, dizziness, tachycardia, malaise, and myalgia. Definitive therapy includes removing patients from the heat and replenishing their fluids. Most patients will require fluid replacement with IV isotonic fluids at first. The salt content is adjusted as necessary once the electrolyte levels are known. After discharge from the hospital, patients are instructed to rest, drink plenty of fluids for 2–3 hours, and avoid the heat for several days. If this advice is not followed it may then lead to heat stroke.
Symptoms
Increased temperatures have been reported to cause heat stroke, heat exhaustion, heat syncope, and heat cramps. Some studies have also looked at how severe heat stroke can lead to permanent damage to organ systems. This damage can increase the risk of early mortality because the damage can cause severe impairment in organ function. Other complications of heat stroke include respiratory distress syndrome in adults and disseminated intravascular coagulation. Some researchers have noted that any compromise to the human body's ability to thermoregulate would in theory increase risk of mortality. This includes illnesses that may affect a person's mobility, awareness, or behavior.
Prevention
Prevention includes avoiding medications that can increase the risk of heat illness (e.g. antihypertensives, diuretics, and anticholinergics), gradual adjustment to heat, and sufficient fluids and electrolytes.
Some common medications that have an effect on thermoregulation can also increase the risk of mortality. Specific examples include anticholinergics, diuretics, phenothiazines and barbiturates.
Epidemiology
Heat stroke is relatively common in sports. About 2 percent of sports-related deaths that occurred in the United States between 1980 and 2006 were caused by exertional heat stroke. Football in the United States has the highest rates. The month of August, which is associated with pre-season football camps across the country, accounts for 66.3% of exertional heat-related illness time-loss events. Heat illness is not limited geographically and is widely distributed throughout the United States. An average of 5,946 persons were treated annually in US hospital emergency departments (2 visits per 100,000 population), with a hospitalization rate of 7.1%. Males account for 72.5% of cases, and persons 15–19 years of age for 35.6%. Among all high school athletes, heat illness occurs at a rate of 1.2 per 100,000. Comparing risk by sport, football players were 11.4 times more likely than athletes in all other sports combined to experience an exertional heat illness.
Between 1999 and 2003, the US had a total of 3,442 deaths from heat illness. Those who work outdoors are at particular risk for heat illness, though those who work in poorly-cooled indoor spaces are also at risk. Between 1992 and 2006, 423 workers died from heat illness in the US. Exposure to environmental heat led to 37 work-related deaths, and in 2015 there were 2,830 nonfatal occupational injuries and illnesses involving days away from work. Kansas had the highest rate of on-the-job heat-related injury, at 1.3 per 10,000 workers, while Texas had the most cases overall; owing to Texas's much larger population, its rate was only 0.4 per 10,000 (4 per 100,000). Of the 37 reported heat-illness deaths, 33 occurred during the summer months of June through September. The most affected occupation documented was transportation and material moving, which accounted for 720 of the 2,830 reported nonfatal occupational injuries, or 25.4 percent. Production placed second, followed by protective services; installation, maintenance, and repair; and construction.
Effects of climate change
A 2016 U.S. government report said that climate change could result in "tens of thousands of additional premature deaths per year across the United States by the end of this century." Indeed, between 2014 and 2017, heat exposure deaths tripled in Arizona (76 deaths in 2014; 235 deaths in 2017) and increased fivefold in Nevada (29 deaths in 2014; 139 deaths in 2017).
History
Heat illness used to be blamed on a tropical fever named calenture.
See also
Occupational heat stress
References
External links
"Heat Exhaustion" on Medicine.net
Emergency medicine
Effects of external causes
Thermoregulation | Heat illness | [
"Biology"
] | 1,977 | [
"Thermoregulation",
"Homeostasis"
] |
17,558,897 | https://en.wikipedia.org/wiki/Ultra-low-voltage%20processor | Ultra-low-voltage processors (ULV processors) are a class of microprocessor that are deliberately underclocked to consume less power (typically 17 W or below), at the expense of performance.
These processors are commonly used in subnotebooks, netbooks, ultraportables and embedded devices, where low heat dissipation and long battery life are required.
Notable examples
Intel Atom – Up to 2.0 GHz at 2.4 W (Z550)
Intel Pentium M – Up to 1.3 GHz at 5 W (ULV 773)
Intel Core 2 Solo – Up to 1.4 GHz at 5.5 W (SU3500)
Intel Core Solo – Up to 1.3 GHz at 5.5 W (U1500)
Intel Celeron M – Up to 1.2 GHz at 5.5 W (ULV 722)
VIA Eden – Up to 1.5 GHz at 7.5 W
VIA C7 – Up to 1.6 GHz at 8 W (C7-M ULV)
VIA Nano – Up to 1.3 GHz at 8 W (U2250)
AMD Athlon Neo – Up to 1 GHz at 8 W (Sempron 200U)
AMD Geode – Up to 1 GHz at 9 W (NX 1500)
Intel Core 2 Duo – Up to 1.3 GHz at 10 W (U7700)
Intel Core i3/i5/i7 – Up to 1.5 GHz at 13 W (Core i7 3689Y)
AMD A Series – Up to 3.2 GHz at 15 W (A10-7300P)
See also
Consumer Ultra-Low Voltage – a low power platform developed by Intel
References
Embedded systems
Microprocessors | Ultra-low-voltage processor | [
"Technology",
"Engineering"
] | 370 | [
"Computer engineering",
"Embedded systems",
"Computer hardware stubs",
"Computer systems",
"Computer science",
"Computing stubs"
] |
17,560,083 | https://en.wikipedia.org/wiki/CoRoT-4b | CoRoT-4b (formerly known as CoRoT-Exo-4b) is an extrasolar planet orbiting the star CoRoT-4. It is probably in synchronous orbit with stellar rotation. It was discovered by the French CoRoT mission in 2008.
References
External links
Hot Jupiters
Transiting exoplanets
Exoplanets discovered in 2008
Giant planets
Monoceros
4b
| CoRoT-4b | [
"Astronomy"
] | 96 | [
"Monoceros",
"Constellations"
] |
17,560,128 | https://en.wikipedia.org/wiki/Chloroformate | Chloroformates are a class of organic compounds with the formula ROC(O)Cl. They are formally esters of chloroformic acid. Most are colorless, volatile liquids that degrade in moist air. A simple example is methyl chloroformate, which is commercially available.
Chloroformates are used as reagents in organic chemistry. For example, benzyl chloroformate is used to introduce the Cbz (carboxybenzyl) protecting group and fluorenylmethyloxycarbonyl chloride is used to introduce the FMOC protecting group. Chloroformates are popular in the field of chromatography as derivatization agents. They convert polar compounds into less polar more volatile derivatives. In this way, chloroformates enable relatively simple transformation of large array of metabolites (aminoacids, amines, carboxylic acids, phenols) for analysis by gas chromatography / mass spectrometry.
Reactions
The reactivity of chloroformates and acyl chlorides are similar. Representative reactions are:
Reaction with amines to form carbamates:
ROC(O)Cl + H2NR' → ROC(O)-N(H)R' + HCl
Reaction with alcohols to form carbonate esters:
ROC(O)Cl + HOR' → ROC(O)-OR' + HCl
Reaction with carboxylic acids to form mixed anhydrides:
Typically these reactions would be conducted in the presence of a base which serves to absorb the HCl.
Alkyl chloroformate esters degrade to give the alkyl chloride, with retention of configuration:
The reaction is proposed to proceed via a substitution nucleophilic internal mechanism.
References
Functional groups | Chloroformate | [
"Chemistry"
] | 382 | [
"Functional groups"
] |
17,560,161 | https://en.wikipedia.org/wiki/CoRoT-5b | CoRoT-5b (previously named CoRoT-Exo-5b) is an extrasolar planet orbiting the F-type star CoRoT-5. It was first reported by the CoRoT mission team in 2008 using the transit method.
This planet has been confirmed by a Doppler follow-up study.
Properties and location
This planetary object is reported to be about half the mass of Jupiter but slightly larger in radius, at 0.467 Jupiter masses and 1.388 Jupiter radii.
See also
CoRoT-6b
References
External links
ESA Portal - Exoplanet hunt update
Hot Jupiters
Transiting exoplanets
Exoplanets discovered in 2008
Giant planets
5b
Monoceros | CoRoT-5b | [
"Astronomy"
] | 143 | [
"Monoceros",
"Constellations"
] |
17,560,201 | https://en.wikipedia.org/wiki/Ajmalan | Ajmalan is a parent hydride used in the IUPAC nomenclature of natural products and also in CAS nomenclature. It is a 20-carbon alkaloid with six rings and seven chiral centres.
The name is derived from ajmaline, an antiarrhythmic alkaloid isolated from the roots of Rauvolfia serpentina which is formally a dihydroxy-derivative of ajmalan. The –an ending indicates that ajmalan is partially saturated. Ajmaline itself is named after Hakim Ajmal Khan, a distinguished practitioner of the Unani school of traditional medicine in South Asia.
The absolute configuration of the seven chiral carbon atoms in ajmalan is defined by convention, as is the numbering system. The stereochemistry is the same as that in naturally occurring ajmaline, and corresponds to (2R,3S,5S,7S,15S,16R,20S) using conventional numbering.
Ajmalan can be systematically named as
or as
.
Note that the numbering of the atoms in the systematic names is different from the conventional numbering of ajmalan.
The ajmalan skeleton is similar to those of certain other alkaloids, and ajmalan could also be given the following semisystematic names:
(2β,5β,7β,16R,20β)-1-methyl-2,7-dihydro-5,16:7,17-dicyclocorynan;
(2β,7β,16R,20β)-1-methyl-2,7,19,20-tetrahydro-7,17-cyclosarpagan;
(2β,3α,7β,20β)-1-methyl-2,7,19,20-tetrahydro-3,4:7,17-dicyclo-22-norvobasan;
(2β,5β,7β,16R,20β)-1-methyl-2,7-dihydro-5,16:7,17-dicyclo-17-secoyohimban.
However, the relative complexity even of these names justifies the use of ajmalan as a defined parent hydride in alkaloid nomenclature.
References
Alkaloids found in Apocynaceae
Chemical nomenclature
Heterocyclic compounds with 6 rings | Ajmalan | [
"Chemistry"
] | 504 | [
"nan"
] |
17,560,674 | https://en.wikipedia.org/wiki/Continuous%20functions%20on%20a%20compact%20Hausdorff%20space | In mathematical analysis, and especially functional analysis, a fundamental role is played by the space of continuous functions on a compact Hausdorff space with values in the real or complex numbers. This space, denoted by is a vector space with respect to the pointwise addition of functions and scalar multiplication by constants. It is, moreover, a normed space with norm defined by
the uniform norm. The uniform norm defines the topology of uniform convergence of functions on The space is a Banach algebra with respect to this norm.
Properties
By Urysohn's lemma, C(X) separates points of X: if x, y ∈ X are distinct points, then there is an f ∈ C(X) such that f(x) ≠ f(y).
The space C(X) is infinite-dimensional whenever X is an infinite space (since it separates points). Hence, in particular, it is generally not locally compact.
The Riesz–Markov–Kakutani representation theorem gives a characterization of the continuous dual space of C(X). Specifically, this dual space is the space of Radon measures on X (regular Borel measures). This space, with the norm given by the total variation of a measure, is also a Banach space belonging to the class of ba spaces.
Positive linear functionals on C(X) correspond to (positive) regular Borel measures on X, by a different form of the Riesz representation theorem.
If X is infinite, then C(X) is not reflexive, nor is it weakly complete.
The Arzelà–Ascoli theorem holds: A subset of C(X) is relatively compact if and only if it is bounded in the norm of C(X) and equicontinuous.
The Stone–Weierstrass theorem holds for C(X). In the case of real functions, if A is a subring of C(X) that contains all constants and separates points, then the closure of A is C(X). In the case of complex functions, the statement holds with the additional hypothesis that A is closed under complex conjugation.
If X and Y are two compact Hausdorff spaces, and F : C(X) → C(Y) is a homomorphism of algebras which commutes with complex conjugation, then F is continuous. Furthermore, F has the form F(f) = f ∘ ψ for some continuous function ψ : Y → X. In particular, if C(X) and C(Y) are isomorphic as algebras, then X and Y are homeomorphic topological spaces.
Let Δ be the space of maximal ideals in C(X). Then there is a one-to-one correspondence between Δ and the points of X. Furthermore, Δ can be identified with the collection of all complex homomorphisms C(X) → C. Equip Δ with the initial topology with respect to this pairing with C(X) (that is, the Gelfand transform). Then X is homeomorphic to Δ equipped with this topology.
A sequence in C(X) is weakly Cauchy if and only if it is (uniformly) bounded in C(X) and pointwise convergent. In particular, C(X) is only weakly complete for finite X.
The vague topology is the weak* topology on the dual of C(X).
The Banach–Alaoglu theorem implies that any normed space is isometrically isomorphic to a subspace of C(X) for some X.
Generalizations
The space C(X) of real or complex-valued continuous functions can be defined on any topological space X. In the non-compact case, however, C(X) is not in general a Banach space with respect to the uniform norm, since it may contain unbounded functions. Hence it is more typical to consider the space, denoted here C_B(X), of bounded continuous functions on X. This is a Banach space (in fact a commutative Banach algebra with identity) with respect to the uniform norm.
It is sometimes desirable, particularly in measure theory, to further refine this general definition by considering the special case when X is a locally compact Hausdorff space. In this case, it is possible to identify a pair of distinguished subsets of C_B(X):
C_00(X), the subset of C_B(X) consisting of functions with compact support. This is called the space of functions vanishing in a neighborhood of infinity.
C_0(X), the subset of C_B(X) consisting of functions f such that for every ε > 0 there is a compact set K ⊆ X with |f(x)| < ε for all x ∈ X \ K. This is called the space of functions vanishing at infinity.
The closure of C_00(X) is precisely C_0(X). In particular, the latter is a Banach space.
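In symbols, the relationship between these spaces can be summarized as follows (the notation C_00, C_0, C_B is a common convention, assumed here since the article's original symbols did not survive extraction):

```latex
C_{00}(X) \;\subseteq\; C_0(X) \;\subseteq\; C_B(X),
\qquad
\overline{C_{00}(X)}^{\,\|\cdot\|_\infty} = C_0(X),
```

where

```latex
C_0(X) = \bigl\{\, f \in C_B(X) : \forall \varepsilon > 0 \;\; \exists K \subseteq X \text{ compact such that } |f(x)| < \varepsilon \text{ for all } x \in X \setminus K \,\bigr\}.
```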
References
Banach spaces
Complex analysis
Theory of continuous functions
Functional analysis
Real analysis
Types of functions | Continuous functions on a compact Hausdorff space | [
"Mathematics"
] | 816 | [
"Functions and mappings",
"Functional analysis",
"Theory of continuous functions",
"Mathematical objects",
"Topology",
"Mathematical relations",
"Types of functions"
] |
17,561,319 | https://en.wikipedia.org/wiki/Electrochemical%20gas%20sensor | Electrochemical gas sensors are gas detectors that measure the concentration of a target gas by oxidizing or reducing the target gas at an electrode and measuring the resulting current.
History
Beginning his research in 1962, Naoyoshi Taguchi became the first person in the world to develop a semiconductor device that could detect low concentrations of combustible and reducing gases when used with a simple electrical circuit. Devices based on this technology are often called "TGS" (Taguchi Gas Sensors).
Construction
The sensors contain two or three electrodes, occasionally four, in contact with an electrolyte. The electrodes are typically fabricated by fixing a high surface area of precious metal onto the porous hydrophobic membrane. The working electrode contacts both the electrolyte and the ambient air to be monitored, usually via a porous membrane. The electrolyte most commonly used is a mineral acid, but organic electrolytes are also used for some sensors. The electrodes and housing are usually in a plastic housing which contains a gas entry hole for the gas and electrical contacts.
Theory of operation
The gas diffuses into the sensor, through the back of the porous membrane to the working electrode, where it is oxidized or reduced. This electrochemical reaction results in an electric current that passes through the external circuit. In addition to measuring, amplifying, and performing other signal processing functions, the external circuit maintains the voltage across the sensor between the working and counter electrodes for a two-electrode sensor or between the working and reference electrodes for a three-electrode cell. At the counter electrode, an equal and opposite reaction occurs, such that if the working electrode is an oxidation, then the counter electrode is a reduction.
Diffusion controlled response
The magnitude of the current is controlled by how much of the target gas is oxidized at the working electrode. Sensors are usually designed so that the gas supply is limited by diffusion, and thus the output from the sensor is linearly proportional to the gas concentration. This linear output is one of the advantages of electrochemical sensors over other sensor technologies (e.g. infrared), whose output must be linearized before they can be used. A linear output allows for more precise measurement of low concentrations and much simpler calibration (only a baseline and one point are needed).
Diffusion control offers another advantage. Changing the diffusion barrier allows the sensor manufacturer to tailor the sensor to a particular target gas concentration range. In addition, since the diffusion barrier is primarily mechanical, the calibration of electrochemical sensors tends to be more stable over time and so electrochemical sensor-based instruments require much less maintenance than some other detection technologies. In principle, the sensitivity can be calculated based on the diffusion properties of the gas path into the sensor, though experimental errors in the measurement of the diffusion properties make the calculation less accurate than calibrating with test gas.
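Because the diffusion-limited response is linear, calibration reduces to a baseline (zero-gas) reading plus a single span point, as described above. A minimal sketch in Python — the function name and the CO sensor numbers are illustrative assumptions, not values from any real datasheet:

```python
def calibrate_linear(baseline_current, span_current, span_concentration):
    """Build a current-to-concentration converter for a diffusion-limited
    (linear-response) electrochemical sensor from two calibration points."""
    # Sensitivity in (current units) per (concentration unit), e.g. uA/ppm
    sensitivity = (span_current - baseline_current) / span_concentration

    def concentration(current):
        # Invert the linear response: subtract baseline, divide by slope
        return (current - baseline_current) / sensitivity

    return concentration

# Hypothetical CO sensor: 0.1 uA in clean air, 5.1 uA in 100 ppm test gas
to_ppm = calibrate_linear(0.1, 5.1, 100.0)
print(round(to_ppm(2.6), 1))  # a 2.6 uA reading maps to 50.0 ppm
```

Nonlinear technologies (such as infrared absorption) would instead need a multi-point fit before readings could be interpreted, which is the practical advantage of the linear output noted above.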
Cross sensitivity
For some gases, such as ethylene oxide, cross-sensitivity can be a problem because ethylene oxide requires a very active working electrode catalyst and high operating potential for its oxidation. Therefore, gases that are more easily oxidized, such as alcohols and carbon monoxide will also give a response. Cross-sensitivity problems can be eliminated through the use of a chemical filter, for example, filters that allow the target gas to pass through unimpeded but which reacts with and removes common interferences.
While electrochemical sensors offer many advantages, they are not suitable for every gas. Since the detection mechanism involves the oxidation or reduction of the gas, electrochemical sensors are usually only suitable for electrochemically active gases, though it is possible to detect electrochemically inert gases indirectly if the gas interacts with another species in the sensor that then produces a response. Sensors for carbon dioxide are an example of this approach and they have been commercially available for several years.
Cross-sensitivity of electronic chemical sensors may also be utilized to design chemical sensor arrays, which utilize a variety of specific sensors that are cross-reactive for fingerprint detection of target gases in complex mixtures.
See also
Carbon monoxide detector
Karlsruhe Institute of Technology (KIT) - Forschungsstelle für Brandschutztechnik: KAMINA - gas sensor microarrays for rapid smoke analysis
Gas diffusion electrode
References
Gas sensors
Measuring instruments
Safety equipment | Electrochemical gas sensor | [
"Technology",
"Engineering"
] | 872 | [
"Measuring instruments"
] |
17,561,485 | https://en.wikipedia.org/wiki/Heinz%20A.%20Lowenstam | Heinz Adolf Lowenstam (October 9, 1912 – June 7, 1993) was a German-born, Jewish-American paleoecologist celebrated for his discoveries in biomineralization: that living organisms manufacture substances such as the iron-containing mineral magnetite within their bodies. He is also renowned for his pioneering research on coral reefs and their influence on biologic processes in the geologic record.
Early life and education
Heinz Adolf Lowenstam was born in 1912 in Upper Silesia, which was then southeastern Germany but was ceded to Poland following World War I. His father, Kurt (1883–1965), was the younger brother of Rabbi Arthur Löwenstamm. His mother was Frieda Sternberg (b. 1889). He had a younger sister, Hildegard (Hilda), who married Kurt Weissenberg and had a daughter, Doris.
Heinz's hometown of Siemjanowicz was located in a mining district, and his fascination with geology began as a child playing on the piles of mine tailings, against the backdrop of Germany's great economic depression of the 1920s. His scientific interests were encouraged by his family and fostered through his attendance at an experimental hochschule that focused on mathematics, physics, and chemistry. It was here that Heinz started his first fossil collection and shaped his desire to become a paleontologist.
Professional career
Lowenstam began his collegiate studies in the vertebrate paleontology program at the University of Frankfurt, but arrived to find the program collapsing due to the recent death of the university's leading paleontologist. He transferred to the University of Munich in the fall of 1933, studying under Professors Broili, Edgar Dacqué, and the biologist Karl von Frisch. Lowenstam's studies in Munich coincided with Adolf Hitler's rise to power and the deterioration of conditions for German Jews. According to his biographer, Joseph L. Kirschvink, "In 1935, he declared his intention of conducting his Ph.D. field research in Palestine, to the dismay of his pro-Nazi department chairman". After spending 18 months studying the geology of the Eastern Nazareth Mountains, he returned to Germany in 1936 to learn that a new law, passed one week prior to his thesis defense, prohibited the awarding of doctorates to Jews. Left with no choice but to leave, Heinz and his wife Ilse emigrated to the United States, arriving in Chicago in June 1937. His parents and sister were able to escape to Brazil, but most of Heinz's relatives on his mother's side were murdered in the Holocaust.
Lowenstam discussed his situation with the geology faculty at the University of Chicago, and was accepted to complete his degree, on the merit of recommendations from his mentors Broili and Dacqué. He received his Ph.D. in 1939, whereupon he immediately enlisted in the U.S. Army to fight the Nazis. The U.S. military decided that his skills would be of more use in civilian work, developing coal and oil reserves with the Illinois Geological Survey. Subsequently, Lowenstam worked for a small oil company, then moved on to become a curator of invertebrate paleontology at the Illinois State Museum. There, Lowenstam conducted field research on the paleoecology of coral reef environments via the Stony Island line of the Chicago street-car system, which dead-ended at an area rich with fossilized coral reefs. This work ultimately resulted in Lowenstam's discovery of a "massive system of Silurian reefs that stretched from the edge of the Ozark Mountains to Greenland". Lowenstam was aware that the structure of the buried reef complex was an ideal trap for oil and gas; but, instead of exploiting his discovery for financial gain, he published his findings in the open scientific literature where all could reap the benefits.
During this time, the University of Chicago had emerged as the birthplace of isotope geochemistry, and Harold Urey's research group was making significant advancements in the use of deviations in stable isotopes to measure ancient ocean temperatures. While working as a geologist in Illinois State Geological Survey's Coal, and Stratigraphy and Paleontology Divisions, Lowenstam was invited to join Harold Urey's group to aid in acquiring fossil materials. He accepted a position as a research associate in geochemistry at the University of Chicago and by 1950, was convinced to accept a faculty position. This position allowed Lowenstam to "continue his research on Silurian reefs, as well as to extend his search for pristine fossil shell materials, an interest that later paved the way for his studies on biomineralization". In the early 1950s, Caltech and the University of California began the building of their isotope geochemistry programs and their recruitment of young scientists from Urey's group and the geochemists of the Chicago "mafia" to form the core of their departments. By the time Lowenstam accepted his faculty position at Caltech in 1954, many of his colleagues including Harrison Brown, Sam Epstein, Clair Patterson, and even Harold Urey had already made the migration. Under his chosen title as a "paleoecologist" Lowenstam continued to collaborate with his former research group (Brown, Patterson, and Epstein were all at Caltech), but he also used the opportunity to explore more comprehensive geochemical analyses of fossil formation.
Lowenstam sought to develop geochemical methods to gain insight into the biological processes through which organisms control mineralization as well as derive information about ancient ecosystems, such as salinity and barometric pressure. For these studies, he turned to the environments of modern coral reef systems in Bermuda. In his early work in the region, Lowenstam discovered that the aragonite (a CaCO3 mineral produce by reef organisms) "needles forming most of the sedimentary mass in Bermuda's back-reef lagoons of Bermuda were produced by microscopic algae; using carbon and oxygen isotopes to prove their biological origin". But it was Lowenstam's 1961 discovery of "biochemically-precipitated magnetite (Fe3O4) as a capping material in the radula (tongue plate) teeth of chitons (marine mollusks)" that was to shape the future of biomineralization. "Prior to this discovery, magnetite was thought to form only in igneous or metamorphic rocks under high temperatures and pressures". In his 1962 paper Lowenstam noted the implications of his discovery with his observation that the chitons were known for their local homing instinct, implying that they may be using a magnetite compass to aid in navigation. Subsequent researchers building upon this work have "confirmed the central role of magnetite as the biophysical transducer of the magnetic field in living organisms spanning the evolutionary spectrum from the magnetotactic bacteria to mammals, with a fossil record extending back at least 2 billion years on Earth and perhaps 4 billion years on Mars". Lowenstam left implications of biomagnetism for others to explore and continued to pursue answers to how organisms control mineral formation. Over the next two decades Lowenstam continued to discover and catalog biologically precipitated minerals and document their phyletic distribution, as well as attempt to track their evolutionary origin.
He remained at Caltech as a revered professor until his death in 1993.
Honours and awards
Heinz A. Lowenstam was elected to the National Academy of Sciences in 1980 and travelled to Germany in 1981 to receive an honorary Ph.D. from the University of Munich. He received the Paleontological Society Medal in 1986.
Personal life
Heinz A. Lowenstam married Ilse Weil (1912–2011) in Munich on 10 January 1937; they divorced in the 1960s. They had three children together: Ruth, Michael and Steven. Steven (1945–2003) was a professor of classics at the University of Oregon. Ruth's daughter, Lisa Goldstein, is a rabbi in New York City.
Legacy
Lowenstam's papers are held at the California Institute of Technology.
Every five years, the European Association of Geochemistry awards a Science Innovation Award medal named in Lowenstam's honour for work in biogeochemistry.
References
External links
National Academy of Sciences Biography
Lowenstam, Heinz A. (1991) Interview with Heinz A. Lowenstam. Oral History Project, California Institute of Technology Archives, Pasadena, California
Illinois Geological Survey Memorial
1912 births
1993 deaths
American ecologists
American people of German-Jewish descent
Biogeochemists
California Institute of Technology alumni
Emigrants from Nazi Germany to the United States
Geobiologists
Ludwig Maximilian University of Munich alumni
Members of the United States National Academy of Sciences
Silesian Jews
University of Chicago faculty
University of Chicago alumni | Heinz A. Lowenstam | [
"Chemistry"
] | 1,783 | [
"Geochemists",
"Biogeochemistry",
"Biogeochemists"
] |
17,561,681 | https://en.wikipedia.org/wiki/SAGEM%20Sigma%2030 | The Sigma 30 is an inertial navigation system produced by SAGEM for use with artillery applications including howitzers, multiple rocket launchers, mortars and light guns. It is currently produced for more than 40 international programs, including France (CAESAR, 2R2M, M270 MLRS), Serbia (Nora B 52), Sweden (FH77 BD, Archer), Germany (PzH2000, M270 MLRS), Italy (M270 MLRS), India (Pinaka MBRL), the Polish PT-91M tank (built for Malaysia), and the United States (topographic survey).
The Sigma 30 can also be integrated into more complex systems (Positioning and Azimuth Determination System).
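An inertial navigation system such as the Sigma 30 works by integrating the outputs of its gyroscopes and accelerometers into attitude, velocity, and position without any external reference. The following one-dimensional dead-reckoning sketch is illustrative only — the actual Sigma 30 algorithms are proprietary and not described here:

```python
def dead_reckon(accel_samples, dt, v0=0.0, x0=0.0):
    """Integrate 1-D accelerometer samples (m/s^2) into velocity and position.

    Simple Euler integration: each acceleration sample updates velocity,
    which in turn updates position over the sample interval dt (seconds).
    """
    v, x = v0, x0
    for a in accel_samples:
        v += a * dt  # first integration: acceleration -> velocity
        x += v * dt  # second integration: velocity -> position
    return v, x

# Constant 1 m/s^2 for 10 s sampled at 100 Hz:
# analytic result is v = 10 m/s and x = 0.5*a*t^2 = 50 m
# (Euler integration overshoots position slightly, to about 50.05 m).
v, x = dead_reckon([1.0] * 1000, dt=0.01)
```

Because position comes from a double integration, small sensor biases grow roughly quadratically with time, which is why high-accuracy gyros and accelerometers matter so much in artillery-pointing applications.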
References
External links
Jane's
Sagem Défense Sécurité Navigation Unit website
Avionics
Missile guidance
Navigational equipment | SAGEM Sigma 30 | [
"Technology"
] | 176 | [
"Avionics",
"Aircraft instruments"
] |