Gaylussite
Gaylussite is a carbonate mineral, a hydrated sodium calcium carbonate, formula Na2Ca(CO3)2·5H2O. It occurs as translucent, vitreous white to grey to yellow monoclinic prismatic crystals. It is an unstable mineral which dehydrates in dry air and decomposes in water.
Discovery and occurrence
It is formed as an evaporite from alkali lacustrine waters. It also occurs rarely as veinlets in alkalic igneous rocks. It was first described in 1826 for an occurrence in Lagunillas, Mérida, Venezuela. It was named for French chemist Joseph Louis Gay-Lussac (1778–1850).
The mineral was reported in 2014 from a drill core in Lonar Lake in Buldhana district, Maharashtra, India. Lonar Lake was created by a meteorite impact during the Pleistocene Epoch and is one of only four known hyper-velocity impact craters in basaltic rock anywhere on Earth.
References
Sodium minerals
Calcium minerals
Carbonate minerals
Monoclinic minerals
Minerals in space group 15
Evaporite
Luminescent minerals
Minerals described in 1826

Cream (pharmacy)
A cream is a preparation usually for application to the skin. Creams for application to mucous membranes such as those of the rectum or vagina are also used. Creams may be considered pharmaceutical products, since even cosmetic creams are manufactured using techniques developed by pharmacy, and unmedicated creams are widely used in a variety of skin conditions (dermatoses). The finger tip unit concept may be helpful in guiding how much topical cream is required to cover different areas.
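As a rough, illustrative calculation of the finger tip unit (FTU) idea, the sketch below estimates the quantity of cream needed for a course of treatment. It assumes the commonly cited values of about 0.5 g per FTU and typical adult FTU counts per body region; these numbers, the region table, and the function name are assumptions for illustration, not dosing guidance from this article.

```python
# Hypothetical FTU calculator; 0.5 g per FTU and the per-region counts are
# commonly cited approximate adult values, used here purely for illustration.
FTU_GRAMS = 0.5
FTUS_PER_REGION = {
    "face and neck": 2.5,
    "one hand (both sides)": 1.0,
    "one arm": 3.0,
    "one leg": 6.0,
    "trunk, front": 7.0,
}

def grams_needed(region, applications_per_day, days):
    # Total grams = FTUs per application x 0.5 g x applications x days.
    return FTUS_PER_REGION[region] * FTU_GRAMS * applications_per_day * days

print(grams_needed("one arm", 2, 14))  # grams for a two-week, twice-daily course
```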
Creams are semi-solid emulsions of oil and water. They are divided into two types: oil-in-water (O/W) creams which are composed of small droplets of oil dispersed in a continuous water phase, and water-in-oil (W/O) creams which are composed of small droplets of water dispersed in a continuous oily phase. Oil-in-water creams are more comfortable and cosmetically acceptable as they are less greasy and more easily washed off using water. Water-in-oil creams are more difficult to handle but many drugs which are incorporated into creams are hydrophobic and will be released more readily from a water-in-oil cream than an oil-in-water cream. Water-in-oil creams are also more moisturising as they provide an oily barrier which reduces water loss from the stratum corneum, the outermost layer of the skin.
Uses
The provision of a barrier to protect the skin
This may be a physical barrier or a chemical barrier as with sunscreens
To aid in the retention of moisture (especially water-in-oil creams)
Cleansing
Emollient effects
As a vehicle for drug substances such as local anaesthetics, anti-inflammatories (NSAIDs or corticosteroids), hormones, antibiotics, antifungals or counter-irritants.
Creams are semisolid dosage forms containing more than 20% water or volatile components and typically less than 50% hydrocarbons, waxes, or polyols as vehicles. They may also contain one or more drug substances dissolved or dispersed in a suitable cream base. This term has traditionally been applied to semisolids that possess a relatively fluid consistency formulated as either water-in-oil (e.g., cold cream) or oil-in-water (e.g., fluocinolone acetonide cream) emulsions. However, more recently the term has been restricted to products consisting of oil-in-water emulsions or aqueous microcrystalline dispersions of long-chain fatty acids or alcohols that are water washable and more cosmetically and aesthetically acceptable. Creams can be used for administering drugs via the vaginal route (e.g., Triple Sulfa vaginal cream). Creams are also used to treat sunburn.
Composition
There are four main ingredients of the cold cream:
Water
Oil
Emulsifier
Thickening agent
Topical medication forms
There are many types of preparations applied to a body surface, such as:
ointments – consist of a single-phase in which solids or liquids may be dispersed. There are hydrophobic, water-emulsifying, and hydrophilic ointments.
creams – consist of a lipophilic phase and an aqueous phase. There are lipophilic (W/O) and hydrophilic (O/W) creams, depending on the continuous phase.
gels – consist of liquids gelled by suitable gelling agents. There are lipophilic gels (oleogels) and hydrophilic gels (hydrogels).
pastes – contain large proportions of solids finely dispersed in the basis.
poultices – consist of a hydrophilic heat-retentive basis in which solids or liquids are dispersed. They are usually spread thickly on a suitable dressing and heated before application to the skin.
topical powders – consist of solid, loose, dry particles of varying degrees of fineness.
medicated plasters – consist of an adhesive basis spread as a uniform layer on an appropriate support made of natural or synthetic material.
See also
Lotion topical
References
External links
Dosage forms
Drug delivery devices

Section (botany)
In botany, a section (Latin: sectio) is a taxonomic rank below the genus, but above the species. The subgenus, if present, is higher than the section, and the rank of series, if present, is below the section. Sections may in turn be divided into subsections.
Sections are typically used to help organise very large genera, which may have hundreds of species. A botanist wanting to distinguish groups of species may prefer to create a taxon at the rank of section or series to avoid making new combinations, i.e. many new binomial names for the species involved.
Examples:
Lilium sectio Martagon Rchb. are the Turks' cap lilies
Plagiochila aerea Taylor is the type species of Plagiochila sect. Bursatae
See also
Section (biology)
References
Section
Plant taxonomy
Fungus sections

Enumerative geometry
In mathematics, enumerative geometry is the branch of algebraic geometry concerned with counting numbers of solutions to geometric questions, mainly by means of intersection theory.
History
The problem of Apollonius is one of the earliest examples of enumerative geometry. This problem asks for the number and construction of circles that are tangent to three given circles, points or lines. In general, the problem for three given circles has eight solutions, which can be seen as 2³ = 8, each tangency condition imposing a quadratic condition on the space of circles. However, for special arrangements of the given circles, the number of solutions may also be any integer from 0 (no solutions) to six; there is no arrangement for which there are seven solutions to Apollonius' problem.
Key tools
A number of tools, ranging from the elementary to the more advanced, include:
Dimension counting
Bézout's theorem
Schubert calculus, and more generally characteristic classes in cohomology
The connection of counting intersections with cohomology is Poincaré duality
The study of moduli spaces of curves, maps and other geometric objects, sometimes via the theory of quantum cohomology. The study of quantum cohomology, Gromov–Witten invariants and mirror symmetry led to significant progress on the Clemens conjecture.
Enumerative geometry is very closely tied to intersection theory.
Schubert calculus
Enumerative geometry saw spectacular development towards the end of the nineteenth century, at the hands of Hermann Schubert. He introduced it for the purpose of Schubert calculus, which has proved of fundamental geometrical and topological value in broader areas. The specific needs of enumerative geometry were not addressed until some further attention was paid to them in the 1960s and 1970s (as pointed out for example by Steven Kleiman). Intersection numbers had been rigorously defined (by André Weil as part of his foundational programme 1942–6, and again subsequently), but this did not exhaust the proper domain of enumerative questions.
Fudge factors and Hilbert's fifteenth problem
Naïve application of dimension counting and Bézout's theorem yields incorrect results, as the following example shows. In response to these problems, algebraic geometers introduced vague "fudge factors", which were only rigorously justified decades later.
As an example, count the conic sections tangent to five given lines in the projective plane. The conics constitute a projective space of dimension 5, taking their six coefficients as homogeneous coordinates, and five points determine a conic, if the points are in general linear position, as passing through a given point imposes a linear condition. Similarly, tangency to a given line L (tangency is intersection with multiplicity two) is one quadratic condition, and so determines a quadric in P⁵. However, the linear system of divisors consisting of all such quadrics is not without a base locus. In fact each such quadric contains the Veronese surface, which parametrizes the conics
(aX + bY + cZ)² = 0
called 'double lines'. This is because a double line intersects every line in the plane (since any two lines in the projective plane intersect), with multiplicity two because it is doubled, and thus satisfies the same intersection condition (intersection of multiplicity two) as a nondegenerate conic that is tangent to the line.
The general Bézout theorem says that 5 general quadrics in 5-space intersect in 2⁵ = 32 points. But the relevant quadrics here are not in general position. From 32, 31 must be subtracted and attributed to the Veronese, to leave the correct answer (from the point of view of geometry), namely 1. This process of attributing intersections to 'degenerate' cases is a typical geometric introduction of a 'fudge factor'.
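The dual form of this count can be checked directly: a conic tangent to five general lines corresponds, in the dual projective plane, to a conic through five general points, and each point imposes one linear condition on the six homogeneous coefficients of the conic. The following sympy sketch (an illustration added here, with arbitrarily chosen sample points) verifies that five such conditions leave a one-dimensional coefficient space, i.e. a single conic up to scale, matching the answer 1 above.

```python
import sympy as sp

a, b, c, d, e, f = sp.symbols('a b c d e f')

def conic(x, y, z):
    # Value of the general conic at the (dual) point (x : y : z).
    return a*x**2 + b*y**2 + c*z**2 + d*x*y + e*x*z + f*y*z

# Five points in general position in the dual plane (homogeneous coordinates).
points = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1), (1, 2, 3)]

# Each point gives one linear condition on the six coefficients.
rows = [[conic(*p).coeff(v) for v in (a, b, c, d, e, f)] for p in points]
nullspace = sp.Matrix(rows).nullspace()

print(len(nullspace))  # 1: a unique conic up to scale
```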
Hilbert's fifteenth problem was to overcome the apparently arbitrary nature of these interventions; this aspect goes beyond the foundational question of the Schubert calculus itself.
Clemens conjecture
In 1984 H. Clemens studied the counting of the number of rational curves on a quintic threefold and reached the following conjecture.
Let V be a general quintic threefold and d a positive integer; then there are only a finite number of rational curves of degree d on V.
This conjecture has been resolved in the case d ≤ 11, but is still open for higher d.
In 1991 the paper about mirror symmetry on the quintic threefold from the string-theoretical viewpoint gave the numbers of degree d rational curves on V for all d > 0. Prior to this, algebraic geometers could calculate these numbers only for d ≤ 5.
Examples
Some of the historically important examples of enumerations in algebraic geometry include:
2 The number of lines meeting 4 general lines in space
8 The number of circles tangent to 3 general circles (the problem of Apollonius).
27 The number of lines on a smooth cubic surface (Salmon and Cayley)
2875 The number of lines on a general quintic threefold
3264 The number of conics tangent to 5 plane conics in general position (Chasles)
609250 The number of conics on a general quintic threefold
4407296 The number of conics tangent to 8 general quadric surfaces
666841088 The number of quadric surfaces tangent to 9 given quadric surfaces in general position in 3-space
5819539783680 The number of twisted cubic curves tangent to 12 given quadric surfaces in general position in 3-space
References
Bibliography
External links
Intersection theory
Algebraic geometry

Andrey Zaliznyak
Andrey Anatolyevich Zaliznyak (Russian: Андрей Анатольевич Зализняк; 29 April 1935 – 24 December 2017) was a Soviet and Russian linguist, an expert in historical linguistics, accentology, dialectology and grammar. He was awarded the degree of Doctor of Philological Sciences in 1965 upon the defence of what was formally his Candidate thesis. In his later years he paid much attention to the popularization of linguistics and the struggle against pseudoscience.
Biography
Zaliznyak was born in Moscow and studied in the Moscow University before moving to the Sorbonne to further his studies with André Martinet. He was married to the linguist Elena V. Paducheva, with whom he also co-authored scientific publications. He was admitted into the Soviet Academy of Sciences as a corresponding member in 1987. Ten years later, he was elected a full academician.
Zaliznyak's first monograph, Russian Nominal Inflection (1967), remains a definitive study in the field. Ten years later, he published a highly authoritative Grammatical Dictionary of the Russian Language, which went through several reprints and provided a basis for Russian grammar software.
In 1982, Zaliznyak turned his interests towards the birch bark scrolls which had been unearthed in Novgorod since the 1950s. He co-edited all publications of newly discovered birch scrolls from 1986 onwards. As the number of these ancient documents exceeded 700, Zaliznyak summed up his findings in the monograph Old Novgorod Dialect (1995), which comprised the texts of, and commentary on, every birch scroll discovered. In particular, he demonstrated how the phonetics of the Old Novgorod dialect can be reconstructed from the spelling errors in the birch scrolls.
In 2003, Zaliznyak published the first comprehensive study of the Novgorod Codex, the earliest extant East Slavic book, which had been sensationally discovered three years earlier.
In 2004, he published a study of The Tale of Igor's Campaign which examined all the significant linguistic arguments concerning its authenticity. Zaliznyak contends that no 20th-century (let alone 18th-century) forger could have reproduced the grammatical subtleties of the 12th-century Old East Slavic language.
Zaliznyak lectured in the Moscow University, University of Geneva, and University of Paris. For more data on his work, see Old Novgorod dialect, Novgorod Codex, and The Tale of Igor's Campaign.
Honors
1997: Demidov Prize
2007: State Prize of the Russian Federation
2007: Solzhenitsyn Prize
2007: Lomonosov Gold Medal
2015: Shakhmatov Prize of the Russian Academy of Sciences, for his work Древнерусское ударение: общие сведения и словарь
Major works
Andrey Zaliznyak. Russkoe imennoe slovoizmenenie. Moskva, 1967.
Andrey Zaliznyak. Grammaticheskij slovar' russkogo jazyka. Moskva, 1977, (further editions are 1980, 1987, 2003).
Andrey Zaliznyak. Grammaticheskij ocherk sanskrita. Appendix to the Russian-Sanskrit dictionary, ed. by V.A. Kochergina, Moskva, 1978.
Andrey Zaliznyak. Drevnenovgorodskij dialekt. Jazyki slavjanskoj kul'tury: Moskva. 2004.
Andrey Zaliznyak. About Faux Linguistics and Quasihistory
References
External links
А. А. Зализняк на сайте Института славяноведения РАН (A. A. Zaliznyak at the website of the Institute of Slavic Studies of the Russian Academy of Sciences)
Андрей Анатольевич Зализняк. Биографическая справка (Andrey Anatolyevich Zaliznyak: a biographical note)
Pavel Iosad, Maria Koptjevskaja-Tamm, Alexander Piperski and Dmitri Sitchinava (2018) "Depth, brilliance, clarity: Andrey Anatolyevich Zaliznyak (1935–2017)" (Obituary). Linguistic Typology 2018; 22(1): 175–184.
1935 births
2017 deaths
Writers from Moscow
Russian people of Ukrainian descent
Linguists of Slavic languages
Linguists from Russia
Linguists from the Soviet Union
20th-century linguists
Grammarians from Russia
Linguists of Russian
21st-century linguists
University of Paris alumni
Corresponding Members of the USSR Academy of Sciences
Full Members of the Russian Academy of Sciences
State Prize of the Russian Federation laureates
Demidov Prize laureates
Solzhenitsyn Prize winners
Recipients of the Lomonosov Gold Medal
Russian scientists

Dynamic recrystallization
Dynamic recrystallization (DRX) is a type of recrystallization process, found within the fields of metallurgy and geology. In dynamic recrystallization, as opposed to static recrystallization, the nucleation and growth of new grains occurs during deformation rather than afterwards as part of a separate heat treatment. The reduction of grain size increases the risk of grain boundary sliding at elevated temperatures, while also decreasing dislocation mobility within the material. The new grains are less strained, causing a decrease in the hardening of a material. Dynamic recrystallization allows for new grain sizes and orientations, which can prevent crack propagation. Rather than strain causing the material to fracture, strain can initiate the growth of a new grain, consuming atoms from neighboring pre-existing grains. After dynamic recrystallization, the ductility of the material increases.
In a stress–strain curve, the onset of dynamic recrystallization can be recognized by a distinct peak in the flow stress in hot working data, due to the softening effect of recrystallization. However, not all materials display well-defined peaks when tested under hot working conditions. The onset of DRX can also be detected from an inflection point in plots of the strain hardening rate against stress. It has been shown that this technique can be used to establish the occurrence of DRX when it cannot be determined unambiguously from the shape of the flow curve.
If stress oscillations appear before reaching the steady state, then several recrystallization and grain growth cycles occur and the stress behavior is said to be of the cyclic or multiple peak type. The particular stress behavior before reaching the steady state depends on the initial grain size, temperature, and strain rate.
DRX can occur in various forms, including:
Geometric dynamic recrystallization
Discontinuous dynamic recrystallization
Continuous dynamic recrystallization
Dynamic recrystallization is dependent on the rate of dislocation creation and movement. It is also dependent on the recovery rate (the rate at which dislocations annihilate). The interplay between work hardening and dynamic recovery determines grain structure. It also determines the susceptibility of grains to various types of dynamic recrystallization. Regardless of the mechanism, for dynamic crystallization to occur, the material must have experienced a critical deformation. The final grain size increases with increased stress. To achieve very fine-grained structures the stresses have to be high.
Some authors have used the term 'postdynamic' or 'metadynamic' to describe recrystallization that occurs during the cooling phase of a hot-working process or between successive passes. This emphasises the fact that the recrystallization is directly linked to the process in question, while acknowledging that there is no concurrent deformation.
Geometric Dynamic Recrystallization (GDRX)
Geometric dynamic recrystallization occurs in grains with local serrations. Upon deformation, grains undergoing GDRX elongate until the thickness of the grain falls below a threshold (below which the serration boundaries intersect and small grains pinch off into equiaxed grains). The serrations may predate stresses being exerted on the material, or may result from the material’s deformation.
Geometric Dynamic Recrystallization has 6 main characteristics:
It generally occurs with deformation at elevated temperatures, in materials with high stacking fault energy
Stress increases and then declines to a steady state
Subgrain formation requires a critical deformation
Subgrain misorientation peaks at 2°
There is little texture change
Pinning of grain boundaries causes an increase in the required strain
While GDRX is primarily affected by the initial grain size and strain (geometry-dependent), other factors that occur during the hot working process complicate the development of predictive modeling (which tend to oversimplify the process) and can lead to incomplete recrystallization. The equiaxed grain formation does not occur immediately and uniformly along the entire grain once the threshold stress is reached, as individual regions are subjected to different strains/stresses. In practice, a generally sinusoidal edge (as predicted by Martorano et al.) gradually forms as the grains begin to pinch off as they each reach the threshold. More sophisticated models consider complex initial grain geometries, local pressures along grain boundaries, and hot working temperature, but the models are unable to make accurate predictions throughout the entire stress regime and the evolution of the overall microstructure. Additionally, grain boundaries may migrate during GDRX at high temperatures and GB curvatures, dragging along subgrain boundaries and resulting in unwanted growth of the original grain. This new, larger grain will require far more deformation for GDRX to occur, and the local area will be weaker rather than strengthened. Lastly, recrystallization can be accelerated as grains are shifted and stretched, causing subgrain boundaries to become grain boundaries (angle increases). The affected grains are thinner and longer, and thus more easily undergo deformation.
Discontinuous Dynamic Recrystallization
Discontinuous recrystallization is heterogeneous; there are distinct nucleation and growth stages. It is common in materials with low stacking-fault energy. Nucleation generates new strain-free grains, which absorb the pre-existing strained grains. Nucleation occurs more easily at grain boundaries; this decreases the grain size and thereby increases the number of nucleation sites, which further increases the rate of discontinuous dynamic recrystallization.
Discontinuous Dynamic Recrystallization has 5 main characteristics:
Recrystallization does not occur until the threshold strain has been reached
The stress-strain curve may have several peaks – there is not a universal equation
Nucleation generally occurs along pre-existing grain boundaries
Recrystallization rates increase as the initial grain size decreases
There is a steady grain size which is approached as recrystallization proceeds
Discontinuous dynamic recrystallization is caused by the interplay of work hardening and recovery. If the annihilation of dislocations is slow relative to the rate at which they are generated, dislocations accumulate. Once critical dislocation density is achieved, nucleation occurs on grain boundaries. Grain boundary migration, or the atoms transfer from a large pre-existing grain to a smaller nucleus, allows the growth of the new nuclei at the expense of the pre-existing grains. The nucleation can occur through the bulging of existing grain boundaries. A bulge forms if the subgrains abutting a grain boundary are of different sizes, causing a disparity in energy from the two subgrains. If the bulge achieves a critical radius, it will successfully transition to a stable nucleus and continue its growth. This can be modeled using Cahn’s theories pertaining to nucleation and growth.
Discontinuous dynamic recrystallization commonly produces a ‘necklace’ microstructure. Since new grain growth is energetically favorable along grain boundaries, new grain formation and bulging preferentially occurs along pre-existing grain boundaries. This generates layers of new, very fine grains along the grain boundary initially leaving the interior of the pre-existing grain unaffected. As the dynamic recrystallization continues, it consumes the unrecrystallized region. As deformation continues, the recrystallization does not maintain coherency between layers of new nuclei, producing a random texture.
Continuous Dynamic Recrystallization
Continuous dynamic recrystallization is common in materials with high stacking-fault energies. It occurs when low angle grain boundaries form and evolve into high angle boundaries, forming new grains in the process. For continuous dynamic recrystallization there is no clear distinction between nucleation and growth phases of the new grains.
Continuous Dynamic Recrystallization has 4 main characteristics:
As strain increases, stress increases
As strain increases, subgrain boundary misorientation increases
As low angle grain boundaries evolve into high angle grain boundaries, the misorientation increases homogeneously
As deformation increases, crystallite size decreases
There are three main mechanisms of continuous dynamic recrystallization:
First, continuous dynamic recrystallization can occur when low angle grain boundaries are assembled from dislocations formed within the grain. When the material is subjected to continued stress, the misorientation angle increases until the critical angle is achieved, creating a high angle grain boundary. This evolution can be promoted by the pinning of subgrain boundaries.
Second, continuous dynamic recrystallization can occur through subgrain rotation recrystallization; subgrains rotate increasing the misorientation angle. Once the misorientation angle exceeds the critical angle, the former subgrains qualify as independent grains.
Third, continuous dynamic recrystallization can occur due to deformation caused by microshear bands. Subgrains are assembled by dislocations within the grain formed during work hardening. If microshear bands are formed within the grain, the stress they introduce rapidly increases the misorientation of low angle grain boundaries, transforming them into high angle grain boundaries. However, the impact of microshear bands is localized, so this mechanism preferentially impacts regions which deform heterogeneously, such as microshear bands or areas near pre-existing grain boundaries. As recrystallization proceeds, it spreads out from these zones, generating a homogenous, equiaxed microstructure.
Mathematical Formulas
Based on the method developed by Poliak and Jonas, a few models have been developed to describe the critical strain for the onset of DRX as a function of the peak strain of the stress–strain curve. The models are derived for systems with a single peak, i.e. for materials with medium to low stacking fault energy values. The models can be found in the following papers:
Determination of flow stress and the critical strain for the onset of dynamic recrystallization using a sine function
Determination of flow stress and the critical strain for the onset of dynamic recrystallization using a hyperbolic tangent function
Determination of critical strain for initiation of dynamic recrystallization
Characteristic points of stress–strain curve at high temperature
The DRX behavior for systems with multiple peaks (and a single peak as well) can be modeled by considering the interaction of multiple grains during deformation. That is, the ensemble model describes the transition between single- and multi-peak behavior based on the initial grain size. It can also describe the effect of transient changes of the strain rate on the shape of the flow curve. The model can be found in the following paper:
A new unified approach for modeling recrystallization during hot rolling of steel
Literature
A one-parameter approach to determining the critical conditions for the initiation of dynamic recrystallization, onset of DRX
Flow Curve Analysis of 17–4 PH Stainless Steel under Hot Compression Test, comprehensive study of DRX
Constitutive relations to model the hot flow of commercial purity copper, chapter 6, doctoral thesis by V.G. García, UPC (2004)
A review of dynamic recrystallization phenomena in metallic materials, Latest review paper on DRX
A Cellular Automaton Model of Dynamic Recrystallization: Introduction & Source Code, Software simulating DRX by CA: Introduction, Video of software run
References
Metallurgy
Geology

Structural health monitoring
Structural health monitoring (SHM) involves the observation and analysis of a system over time using periodically sampled response measurements to monitor changes to the material and geometric properties of engineering structures such as bridges and buildings.
In an operational environment, structures degrade with age and use. Long term SHM outputs periodically updated information regarding the ability of the structure to continue performing its intended function. After extreme events, such as earthquakes or blast loading, SHM is used for rapid condition screening. SHM is intended to provide reliable information regarding the integrity of the structure in near real time.
The SHM process involves selecting the excitation methods, the sensor types, number and locations, and the data acquisition/storage/transmittal hardware commonly called health and usage monitoring systems. Measurements may be taken to either directly detect any degradation or damage that may occur to a system or indirectly by measuring the size and frequency of loads experienced to allow the state of the system to be predicted.
To directly monitor the state of a system it is necessary to identify features in the acquired data that allows one to distinguish between the undamaged and damaged structure. One of the most common feature extraction methods is based on correlating measured system response quantities, such as vibration amplitude or frequency, with observations of the degraded system. Damage accumulation testing, during which significant structural components of the system under study are degraded by subjecting them to realistic loading conditions, can also be used to identify appropriate features. This process may involve induced-damage testing, fatigue testing, corrosion growth, or temperature cycling to accumulate certain types of damage in an accelerated fashion.
Introduction
Qualitative and non-continuous methods have long been used to evaluate structures for their capacity to serve their intended purpose. Since the beginning of the 19th century, railroad wheel-tappers have used the sound of a hammer striking the train wheel to evaluate if damage was present. In rotating machinery, vibration monitoring has been used for decades as a performance evaluation technique. Two techniques in the field of SHM are wave propagation based techniques and vibration based techniques. Broadly the literature for vibration based SHM can be divided into two aspects, the first wherein models are proposed for the damage to determine the dynamic characteristics, also known as the direct problem, and the second, wherein the dynamic characteristics are used to determine damage characteristics, also known as the inverse problem.
Several fundamental axioms, or general principles, have emerged:
Axiom I: All materials have inherent flaws or defects;
Axiom II: The assessment of damage requires a comparison between two system states;
Axiom III: Identifying the existence and location of damage can be done in an unsupervised learning mode, but identifying the type of damage present and the damage severity can generally only be done in a supervised learning mode;
Axiom IVa: Sensors cannot measure damage. Feature extraction through signal processing and statistical classification is necessary to convert sensor data into damage information;
Axiom IVb: Without intelligent feature extraction, the more sensitive a measurement is to damage, the more sensitive it is to changing operational and environmental conditions;
Axiom V: The length- and time-scales associated with damage initiation and evolution dictate the required properties of the SHM sensing system;
Axiom VI: There is a trade-off between the sensitivity to damage of an algorithm and its noise rejection capability;
Axiom VII: The size of damage that can be detected from changes in system dynamics is inversely proportional to the frequency range of excitation.
SHM System's elements typically include:
Structure
Sensors
Data acquisition systems
Data transfer and storage mechanism
Data management
Data interpretation and diagnosis:
System Identification
Structural model update
Structural condition assessment
Prediction of remaining service life
An example of this technology is the embedding of sensors in structures like bridges and aircraft. These sensors provide real-time monitoring of various structural changes such as stress and strain. In the case of civil engineering structures, the data provided by the sensors are usually transmitted to remote data acquisition centres. With the aid of modern technology, real-time control of structures (Active Structural Control) based on the information from sensors is possible.
Health assessment of engineered structures of bridges, buildings and other related infrastructures
Commonly known as Structural Health Assessment (SHA) or SHM, this concept is widely applied to various forms of infrastructure, especially as countries all over the world enter an even greater period of construction of various infrastructure ranging from bridges to skyscrapers. Where damage to structures is concerned, it is important to note that there are stages of increasing difficulty, each requiring the knowledge of the previous stage:
Detecting the existence of the damage on the structure
Locating the damage
Identifying the types of damage
Quantifying the severity of the damage
It is necessary to employ signal processing and statistical classification to convert sensor data on the infrastructural health status into damage info for assessment.
Operational evaluation
Operational evaluation attempts to answer four questions regarding the implementation of a damage identification capability:
i) What are the life-safety and/or economic justification for performing the SHM?
ii) How is damage defined for the system being investigated and, for multiple damage possibilities, which cases are of the most concern?
iii) What are the conditions, both operational and environmental, under which the system to be monitored functions?
iv) What are the limitations on acquiring data in the operational environment?
Operational evaluation begins to set the limitations on what will be monitored and how the monitoring will be accomplished. This evaluation starts to tailor the damage identification process to features that are unique to the system being monitored and tries to take advantage of unique features of the damage that is to be detected.
Data acquisition, normalization and cleansing
The data acquisition portion of the SHM process involves selecting the excitation methods, the sensor types, number and locations, and the data acquisition/storage/transmittal hardware. Again, this process will be application specific. Economic considerations will play a major role in making these decisions. The intervals at which data should be collected is another consideration that must be addressed.
Because data can be measured under varying conditions, the ability to normalize the data becomes very important to the damage identification process. As it applies to SHM, data normalization is the process of separating changes in sensor reading caused by damage from those caused by varying operational and environmental conditions. One of the most common procedures is to normalize the measured responses by the measured inputs. When environmental or operational variability is an issue, the need can arise to normalize the data in some temporal fashion to facilitate the comparison of data measured at similar times of an environmental or operational cycle. Sources of variability in the data acquisition process and with the system being monitored need to be identified and minimized to the extent possible. In general, not all sources of variability can be eliminated. Therefore, it is necessary to make the appropriate measurements such that these sources can be statistically quantified. Variability can arise from changing environmental and test conditions, changes in the data reduction process, and unit-to-unit inconsistencies.
Data cleansing is the process of selectively choosing data to pass on to or reject from the feature selection process. The data cleansing process is usually based on knowledge gained by individuals directly involved with the data acquisition. As an example, an inspection of the test setup may reveal that a sensor was loosely mounted and, hence, based on the judgment of the individuals performing the measurement, this set of data or the data from that particular sensor may be selectively deleted from the feature selection process. Signal processing techniques such as filtering and re-sampling can also be thought of as data cleansing procedures.
Finally, the data acquisition, normalization, and cleansing portion of SHM process should not be static. Insight gained from the feature selection process and the statistical model development process will provide information regarding changes that can improve the data acquisition process.
Feature extraction and data compression
The area of the SHM process that receives the most attention in the technical literature is the identification of data features that allows one to distinguish between the undamaged and damaged structure. Inherent in this feature selection process is the condensation of the data. The best features for damage identification are, again, application specific.
One of the most common feature extraction methods is based on correlating measured system response quantities, such as vibration amplitude or frequency, with first-hand observations of the degrading system. Another method of developing features for damage identification is to apply engineered flaws, similar to ones expected in actual operating conditions, to systems and develop an initial understanding of the parameters that are sensitive to the expected damage. The flawed system can also be used to validate that the diagnostic measurements are sensitive enough to distinguish between features identified from the undamaged and damaged system. The use of analytical tools such as experimentally-validated finite element models can be a great asset in this process. In many cases the analytical tools are used to perform numerical experiments where the flaws are introduced through computer simulation. Damage accumulation testing, during which significant structural components of the system under study are degraded by subjecting them to realistic loading conditions, can also be used to identify appropriate features. This process may involve induced-damage testing, fatigue testing, corrosion growth, or temperature cycling to accumulate certain types of damage in an accelerated fashion. Insight into the appropriate features can be gained from several types of analytical and experimental studies as described above and is usually the result of information obtained from some combination of these studies.
The operational implementation and diagnostic measurement technologies needed to perform SHM produce more data than traditional uses of structural dynamics information. A condensation of the data is advantageous and necessary when comparisons of many feature sets obtained over the lifetime of the structure are envisioned. Also, because data will be acquired from a structure over an extended period of time and in an operational environment, robust data reduction techniques must be developed to retain feature sensitivity to the structural changes of interest in the presence of environmental and operational variability. To further aid in the extraction and recording of quality data needed to perform SHM, the statistical significance of the features should be characterized and used in the condensation process.
Statistical model development
The portion of the SHM process that has received the least attention in the technical literature is the development of statistical models for discrimination between features from the undamaged and damaged structures. Statistical model development is concerned with the implementation of the algorithms that operate on the extracted features to quantify the damage state of the structure. The algorithms used in statistical model development usually fall into three categories. When data are available from both the undamaged and damaged structure, the statistical pattern recognition algorithms fall into the general classification category, commonly referred to as supervised learning. Group classification and regression analysis are categories of supervised learning algorithms. Unsupervised learning refers to algorithms that are applied to data not containing examples from the damaged structure. Outlier or novelty detection is the primary class of algorithms applied in unsupervised learning applications. All of the algorithms analyze statistical distributions of the measured or derived features to enhance the damage identification process.
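As a sketch of the unsupervised (novelty detection) case described above, the fragment below flags a new feature vector whose Mahalanobis distance from an undamaged baseline is unusually large. The choice of features (identified natural frequencies), all numerical values, and the percentile threshold are illustrative assumptions, not prescriptions from this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline: 200 feature vectors (first three natural frequencies, in Hz)
# measured on the undamaged structure under varying ambient conditions.
baseline = rng.normal(loc=[2.10, 6.45, 13.20], scale=[0.02, 0.05, 0.08],
                      size=(200, 3))

mean = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def mahalanobis_sq(x):
    # Squared Mahalanobis distance of a feature vector from the baseline.
    diff = x - mean
    return float(diff @ cov_inv @ diff)

# Damage threshold taken from the baseline itself (99th percentile).
threshold = np.percentile([mahalanobis_sq(x) for x in baseline], 99)

new_reading = np.array([2.02, 6.21, 12.95])    # stiffness loss lowers frequencies
print(mahalanobis_sq(new_reading) > threshold)  # True -> flag for inspection
```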
Specific structures
Bridges
Health monitoring of large bridges can be performed by simultaneous measurement of loads on the bridge and effects of these loads. It typically includes monitoring of:
Wind and weather
Traffic
Prestressing and stay cables
Deck
Pylons
Ground
Provided with this knowledge, the engineer can:
Estimate the loads and their effects
Estimate the state of fatigue or other limit state
Forecast the probable evolution of the bridge's health
In the United States, the Oregon Department of Transportation's Bridge Engineering Department has developed and implemented a Structural Health Monitoring (SHM) program, as referenced in a technical paper by Steven Lovejoy, Senior Engineer.
References are available that provide an introduction to the application of fiber optic sensors to Structural Health Monitoring on bridges.
Examples
The following projects are currently known as some of the biggest on-going bridge monitoring
The California Department of Transportation is supporting development of the Bridge rapid assessment center for extreme events (BRACE2) to facilitate real-time structural health monitoring across the California highway network.
Bridges in Hong Kong. The Wind and Structural Health Monitoring System is a sophisticated bridge monitoring system, costing US$1.3 million, used by the Hong Kong Highways Department to ensure road user comfort and safety of the Tsing Ma, Ting Kau, Kap Shui Mun and Stonecutters bridges. The sensory system consists of approximately 900 sensors and their relevant interfacing units. With more than 350 sensors on the Tsing Ma bridge, 350 on Ting Kau and 200 on Kap Shui Mun, the structural behaviour of the bridges is measured 24 hours a day, seven days a week. The sensors include accelerometers, strain gauges, displacement transducers, level sensing stations, anemometers, temperature sensors, dynamic weigh-in-motion sensors and GPS receivers. They measure everything from tarmac temperature and strains in structural members to wind speed and the deflection and rotation of the kilometres of cables and any movement of the bridge decks and towers.
The Rio–Antirrio bridge, Greece: has more than 100 sensors monitoring the structure and the traffic in real time.
Millau Viaduc, France: has one of the largest systems with fiber optics in the world which is considered state of the art.
The Huey P Long bridge, USA: has over 800 static and dynamic strain gauges designed to measure axial and bending load effects.
The Fatih Sultan Mehmet Bridge, Turkey: also known as the Second Bosphorus Bridge. It has been monitored using an innovative wireless sensor network with normal traffic condition.
The third Saudi expansion of the Masjid al-Haram, Mecca, Saudi Arabia: has more than 600 sensors (concrete pressure cells, embedment-type strain gauges, sister bar strain gauges, etc.) installed at the foundation and concrete columns. This project is under construction.
The Sydney Harbour Bridge in Australia is currently implementing a monitoring system involving over 2,400 sensors. Asset managers and bridge inspectors have mobile and web browser decision support tools based on analysis of sensor data.
The Queensferry Crossing, currently under construction across the Firth of Forth, will have a monitoring system including more than 2,000 sensors upon its completion. Asset managers will have access to data for all sensors from a web-based data management interface, including automated data analysis.
The Penang Second Bridge in Penang, Malaysia has completed its implementation and is monitoring the bridge elements with 3,000 sensors. For the safety of bridge users and as protection of such an investment, the firm responsible for the bridge wanted a structural health monitoring system. The system is used for disaster control, structural health management and data analysis. There were many considerations before implementation, which included: force (wind, earthquake, temperature, vehicles); weather (air temperature, wind, humidity and precipitation); and response (strain, acceleration, cable tension, displacement and tilt).
The Lakhta Center, Russia: has more than 3000 sensors and more than 8000 parameters monitoring the structure in real time.
See also
Deformation monitoring
Civionics
Structural Health Monitoring, a peer-reviewed journal devoted to the subject
Value of structural health information
References
External links
NDT.net Open Access Database contains EWSHM proceedings and much more SHM articles
International Society for Structural Health Monitoring of Intelligent Infrastructure (ISHMII)
SHM at low cost for earthquake zones
Journals
SHM Proceedings (NDT.net)
Journal of Structural Health Monitoring (sagepub)
Journal of Intelligent Material Systems & Structures (sagepub)
Structural Durability & Health Monitoring (techscience)
Structural Control and Health Monitoring (John Wiley & Sons, Ltd.)
Journal of Civil Structural Health Monitoring (Springer)
Smart Materials and Structures (IOP)
Smart Materials Bulletin (science direct)
Structural engineering
Maintenance
Infrastructure asset management

Measurement uncertainty
In metrology, measurement uncertainty is the expression of the statistical dispersion of the values attributed to a quantity measured on an interval or ratio scale.
All measurements are subject to uncertainty and a measurement result is complete only when it is accompanied by a statement of the associated uncertainty, such as the standard deviation. By international agreement, this uncertainty has a probabilistic basis and reflects incomplete knowledge of the quantity value. It is a non-negative parameter.
The measurement uncertainty is often taken as the standard deviation of a state-of-knowledge probability distribution over the possible values that could be attributed to a measured quantity. Relative uncertainty is the measurement uncertainty relative to the magnitude of a particular single choice for the value for the measured quantity, when this choice is nonzero. This particular single choice is usually called the measured value, which may be optimal in some well-defined sense (e.g., a mean, median, or mode). Thus, the relative measurement uncertainty is the measurement uncertainty divided by the absolute value of the measured value, when the measured value is not zero.
Background
The purpose of measurement is to provide information about a quantity of interest – a measurand. Measurands on ratio or interval scales include the size of a cylindrical feature, the volume of a vessel, the potential difference between the terminals of a battery, or the mass concentration of lead in a flask of water.
No measurement is exact. When a quantity is measured, the outcome depends on the measuring system, the measurement procedure, the skill of the operator, the environment, and other effects. Even if the quantity were to be measured several times, in the same way and in the same circumstances, a different measured value would in general be obtained each time, assuming the measuring system has sufficient resolution to distinguish between the values.
The dispersion of the measured values would relate to how well the measurement is performed. If measured on a ratio or interval scale, their average would provide an estimate of the true value of the quantity that generally would be more reliable than an individual measured value.
The dispersion and the number of measured values would provide information relating to the average value as an estimate of the true value.
However, this information would not generally be adequate.
The measuring system may provide measured values that are not dispersed about the true value, but about some value offset from it. Take a domestic bathroom scale. Suppose it is not set to show zero when there is nobody on the scale, but to show some value offset from zero. Then, no matter how many times the person's mass were re-measured, the effect of this offset would be inherently present in the average of the values.
The "Guide to the Expression of Uncertainty in Measurement" (commonly known as the GUM) is the definitive document on this subject. The GUM has been adopted by all major National Measurement Institutes (NMIs) and by international laboratory accreditation standards such as ISO/IEC 17025 General requirements for the competence of testing and calibration laboratories, which is required for international laboratory accreditation, and is employed in most modern national and international documentary standards on measurement methods and technology. See Joint Committee for Guides in Metrology.
Measurement uncertainty has important economic consequences for calibration and measurement activities. In calibration reports, the magnitude of the uncertainty is often taken as an indication of the quality of the laboratory, and smaller uncertainty values generally are of higher value and of higher cost. The American Society of Mechanical Engineers (ASME) has produced a suite of standards addressing various aspects of measurement uncertainty. For example, ASME standards are used to address the role of measurement uncertainty when accepting or rejecting products based on a measurement result and a product specification, to provide a simplified approach (relative to the GUM) to the evaluation of dimensional measurement uncertainty, to resolve disagreements over the magnitude of the measurement uncertainty statement, and to provide guidance on the risks involved in any product acceptance/rejection decision.
Indirect measurement
The above discussion concerns the direct measurement of a quantity, which incidentally occurs rarely. For example, the bathroom scale may convert a measured extension of a spring into an estimate of the measurand, the mass of the person on the scale. The particular relationship between extension and mass is determined by the calibration of the scale. A measurement model converts a quantity value into the corresponding value of the measurand.
There are many types of measurement in practice and therefore many models. A simple measurement model (for example for a scale, where the mass is proportional to the extension of the spring) might be sufficient for everyday domestic use. Alternatively, a more sophisticated model of a weighing, involving additional effects such as air buoyancy, is capable of delivering better results for industrial or scientific purposes. In general there are often several different quantities, for example temperature, humidity and displacement, that contribute to the definition of the measurand, and that need to be measured.
Correction terms should be included in the measurement model when the conditions of measurement are not exactly as stipulated. These terms correspond to systematic errors. Given an estimate of a correction term, the relevant quantity should be corrected by this estimate. There will be an uncertainty associated with the estimate, even if the estimate is zero, as is often the case. Instances of systematic errors arise in height measurement, when the alignment of the measuring instrument is not perfectly vertical, and the ambient temperature is different from that prescribed. Neither the alignment of the instrument nor the ambient temperature is specified exactly, but information concerning these effects is available, for example the lack of alignment is at most 0.001° and the ambient temperature at the time of measurement differs from that stipulated by at most 2 °C.
As well as raw data representing measured values, there is another form of data that is frequently needed in a measurement model. Some such data relate to quantities representing physical constants, each of which is known imperfectly. Examples are material constants such as modulus of elasticity and specific heat. There are often other relevant data given in reference books, calibration certificates, etc., regarded as estimates of further quantities.
The items required by a measurement model to define a measurand are known as input quantities in a measurement model. The model is often referred to as a functional relationship. The output quantity in a measurement model is the measurand.
Formally, the output quantity, denoted by Y, about which information is required, is often related to input quantities, denoted by X1, ..., XN, about which information is available, by a measurement model in the form of
Y = f(X1, ..., XN),
where f is known as the measurement function. A general expression for a measurement model is
h(Y, X1, ..., XN) = 0.
It is taken that a procedure exists for calculating Y given X1, ..., XN, and that Y is uniquely defined by this equation.
Propagation of distributions
The true values of the input quantities are unknown.
In the GUM approach, X1, ..., XN are characterized by probability distributions and treated mathematically as random variables.
These distributions describe the respective probabilities of their true values lying in different intervals, and are assigned based on available knowledge concerning X1, ..., XN.
Sometimes, some or all of X1, ..., XN are interrelated and the relevant distributions, which are known as joint, apply to these quantities taken together.
Consider estimates x1, ..., xN, respectively, of the input quantities X1, ..., XN, obtained from certificates and reports, manufacturers' specifications, the analysis of measurement data, and so on.
The probability distributions characterizing X1, ..., XN are chosen such that the estimates x1, ..., xN, respectively, are the expectations of X1, ..., XN.
Moreover, for the ith input quantity, consider a so-called standard uncertainty, given the symbol u(xi), defined as the standard deviation of the input quantity Xi.
This standard uncertainty is said to be associated with the (corresponding) estimate xi.
The use of available knowledge to establish a probability distribution to characterize each quantity of interest applies to the Xi and also to Y.
In the latter case, the characterizing probability distribution for Y is determined by the measurement model together with the probability distributions for the Xi.
The determination of the probability distribution for Y from this information is known as the propagation of distributions.
Consider, for illustration, a measurement model Y = X1 + X2 in the case where X1 and X2 are each characterized by a (different) rectangular, or uniform, probability distribution: Y then has a symmetric trapezoidal probability distribution.
Once the input quantities X1, ..., XN have been characterized by appropriate probability distributions, and the measurement model has been developed, the probability distribution for the measurand Y is fully specified in terms of this information. In particular, the expectation of Y is used as the estimate y of Y, and the standard deviation of Y as the standard uncertainty u(y) associated with this estimate.
Often an interval containing Y with a specified probability is required. Such an interval, a coverage interval, can be deduced from the probability distribution for Y. The specified probability is known as the coverage probability. For a given coverage probability, there is more than one coverage interval. The probabilistically symmetric coverage interval is an interval for which the probabilities (summing to one minus the coverage probability) of a value to the left and the right of the interval are equal. The shortest coverage interval is an interval for which the length is least over all coverage intervals having the same coverage probability.
Prior knowledge about the true value of the output quantity Y can also be considered. For the domestic bathroom scale, the fact that the person's mass is positive, and that it is the mass of a person, rather than that of a motor car, that is being measured, both constitute prior knowledge about the possible values of the measurand in this example. Such additional information can be used to provide a probability distribution for Y that can give a smaller standard deviation for Y and hence a smaller standard uncertainty associated with the estimate of Y.
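The propagation of distributions can be carried out numerically by Monte Carlo sampling, as in Supplement 1 to the GUM. The sketch below (the limits of the two rectangular distributions are arbitrary illustrative choices, and the additive model echoes the trapezoidal example above) propagates the input distributions through Y = X1 + X2 and summarizes the result by an estimate, a standard uncertainty and a coverage interval.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

x1 = rng.uniform(-1.0, 1.0, n)   # rectangular distribution, half-width 1.0
x2 = rng.uniform(-0.4, 0.4, n)   # rectangular distribution, half-width 0.4
y = x1 + x2                      # Y then has a symmetric trapezoidal distribution

estimate = y.mean()                        # estimate of Y
u = y.std(ddof=1)                          # standard uncertainty u(y)
lo, hi = np.quantile(y, [0.025, 0.975])    # probabilistically symmetric 95 %
print(estimate, u, (lo, hi))               # coverage interval
```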
Type A and Type B evaluation of uncertainty
Knowledge about an input quantity Xi is inferred from repeated measured values ("Type A evaluation of uncertainty"), or scientific judgement or other information concerning the possible values of the quantity ("Type B evaluation of uncertainty").
In Type A evaluations of measurement uncertainty, the assumption is often made that the distribution best describing an input quantity Xi, given repeated measured values of it (obtained independently), is a Gaussian distribution.
Xi then has expectation equal to the average measured value and standard deviation equal to the standard deviation of the average.
When the uncertainty is evaluated from a small number of measured values (regarded as instances of a quantity characterized by a Gaussian distribution), the corresponding distribution can be taken as a t-distribution.
Other considerations apply when the measured values are not obtained independently.
For a Type B evaluation of uncertainty, often the only available information is that Xi lies in a specified interval [a, b].
In such a case, knowledge of the quantity can be characterized by a rectangular probability distribution with limits a and b.
If different information were available, a probability distribution consistent with that information would be used.
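To make these two evaluations concrete, here is a minimal Python sketch (the readings and limits are hypothetical and the helper names are illustrative): the Type A standard uncertainty is the standard deviation of the average of repeated readings, and the Type B evaluation from an interval [a, b] uses the standard deviation (b − a)/(2√3) of the rectangular distribution.

```python
import math
import statistics

def type_a_uncertainty(readings):
    """Type A: estimate and standard uncertainty from repeated readings."""
    mean = statistics.fmean(readings)
    s = statistics.stdev(readings)              # sample standard deviation
    return mean, s / math.sqrt(len(readings))   # standard deviation of the average

def type_b_rectangular(a, b):
    """Type B: quantity known only to lie in [a, b] (rectangular distribution)."""
    estimate = (a + b) / 2                      # expectation of the distribution
    u = (b - a) / (2 * math.sqrt(3))            # its standard deviation
    return estimate, u

# Hypothetical repeated length readings, in millimetres
x, u_x = type_a_uncertainty([10.03, 10.01, 10.04, 10.02, 10.02])
y, u_y = type_b_rectangular(9.95, 10.05)
print(f"Type A: {x:.4f} mm, u = {u_x:.4f} mm")
print(f"Type B: {y:.4f} mm, u = {u_y:.4f} mm")
```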
Sensitivity coefficients
Sensitivity coefficients c1, ..., cN describe how the estimate y of Y would be influenced by small changes in the estimates x1, ..., xN of the input quantities X1, ..., XN.
For the measurement model Y = f(X1, ..., XN), the sensitivity coefficient ci equals the partial derivative of first order of f with respect to Xi, evaluated at X1 = x1, X2 = x2, etc.
For a linear measurement model
Y = c1 X1 + ... + cN XN,
with X1, ..., XN independent, a change in xi equal to u(xi) would give a change ci u(xi) in y.
This statement would generally be approximate for measurement models Y = f(X1, ..., XN).
The relative magnitudes of the terms |ci| u(xi) are useful in assessing the respective contributions from the input quantities to the standard uncertainty u(y) associated with y.
The standard uncertainty u(y) associated with the estimate y of the output quantity Y is not given by the sum of the |ci| u(xi), but these terms combined in quadrature, namely by an expression that is generally approximate for measurement models Y = f(X1, ..., XN):
u^2(y) = c1^2 u^2(x1) + ... + cN^2 u^2(xN),
which is known as the law of propagation of uncertainty.
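The law of propagation of uncertainty is straightforward to apply numerically. The following Python sketch (illustrative helper names; a simple power measurement P = V^2/R is assumed as the model) approximates each sensitivity coefficient by a central finite difference and combines the terms in quadrature:

```python
import math

def propagate(f, x, u):
    """Law of propagation of uncertainty for independent input quantities.

    f: measurement function taking a list of input estimates
    x: input estimates x1..xN
    u: standard uncertainties u(x1)..u(xN)
    Returns y and u(y).
    """
    y = f(x)
    var = 0.0
    for i, (xi, ui) in enumerate(zip(x, u)):
        h = 1e-6 * (abs(xi) or 1.0)        # small step for the derivative
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        ci = (f(xp) - f(xm)) / (2 * h)     # sensitivity coefficient ci
        var += (ci * ui) ** 2              # combine the ci u(xi) in quadrature
    return y, math.sqrt(var)

# Hypothetical example: power P = V^2 / R from measured voltage and resistance
P, u_P = propagate(lambda v: v[0] ** 2 / v[1], [10.0, 5.0], [0.05, 0.02])
print(f"P = {P:.3f} W, u(P) = {u_P:.3f} W")
```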
Uncertainty evaluation
The main stages of uncertainty evaluation constitute formulation and calculation, the latter consisting of propagation and summarizing.
The formulation stage constitutes
defining the output quantity Y (the measurand),
identifying the input quantities on which Y depends,
developing a measurement model relating Y to the input quantities, and
on the basis of available knowledge, assigning probability distributions — Gaussian, rectangular, etc. — to the input quantities (or a joint probability distribution to those input quantities that are not independent).
The calculation stage consists of propagating the probability distributions for the input quantities through the measurement model to obtain the probability distribution for the output quantity Y, and summarizing by using this distribution to obtain
the expectation of Y, taken as an estimate y of Y,
the standard deviation of Y, taken as the standard uncertainty u(y) associated with y, and
a coverage interval containing Y with a specified coverage probability.
The propagation stage of uncertainty evaluation is known as the propagation of distributions, various approaches for which are available, including
the GUM uncertainty framework, constituting the application of the law of propagation of uncertainty, and the characterization of the output quantity Y by a Gaussian or a t-distribution,
analytic methods, in which mathematical analysis is used to derive an algebraic form for the probability distribution for Y, and
a Monte Carlo method, in which an approximation to the distribution function for Y is established numerically by making random draws from the probability distributions for the input quantities, and evaluating the model at the resulting values (a sketch follows below).
For any particular uncertainty evaluation problem, approach 1), 2) or 3) (or some other approach) is used, 1) being generally approximate, 2) exact, and 3) providing a solution with a numerical accuracy that can be controlled.
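As an illustration of approach 3), the following Python sketch (the model, input distributions, and number of trials are all assumed for the example) propagates random draws through a simple additive model Y = X1 + X2 and reads off the estimate, the standard uncertainty, and a probabilistically symmetric 95% coverage interval:

```python
import random
import statistics

M = 200_000                                # number of Monte Carlo trials

def draw_y():
    """One draw of Y = X1 + X2 (illustrative model and distributions)."""
    x1 = random.gauss(10.0, 0.05)          # X1: Gaussian, estimate 10.0, u = 0.05
    x2 = random.uniform(-0.1, 0.1)         # X2: rectangular on [-0.1, 0.1]
    return x1 + x2

ys = sorted(draw_y() for _ in range(M))
y = statistics.fmean(ys)                   # estimate of Y
u_y = statistics.stdev(ys)                 # standard uncertainty u(y)

# Probabilistically symmetric 95% coverage interval: 2.5th and 97.5th percentiles
lo, hi = ys[int(0.025 * M)], ys[int(0.975 * M)]
print(f"y = {y:.4f}, u(y) = {u_y:.4f}, 95% coverage interval [{lo:.4f}, {hi:.4f}]")
```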
Models with any number of output quantities
When the measurement model is multivariate, that is, it has any number of output quantities, the above concepts can be extended. The output quantities are now described by a joint probability distribution, the coverage interval becomes a coverage region, the law of propagation of uncertainty has a natural generalization, and a calculation procedure that implements a multivariate Monte Carlo method is available.
Uncertainty as an interval
The most common view of measurement uncertainty uses random variables as mathematical models for uncertain quantities and simple probability distributions as sufficient for representing measurement uncertainties. In some situations, however, a mathematical interval might be a better model of uncertainty than a probability distribution. This may include situations involving periodic measurements, binned data values, censoring, detection limits, or plus-minus ranges of measurements where no particular probability distribution seems justified or where one cannot assume that the errors among individual measurements are completely independent.
A more robust representation of measurement uncertainty in such cases can be fashioned from intervals. An interval [a, b] is different from a rectangular or uniform probability distribution over the same range in that the latter suggests that the true value lies inside the right half of the range [(a + b)/2, b] with probability one half, and within any subinterval of [a, b] with probability equal to the width of the subinterval divided by b − a. The interval makes no such claims, except simply that the measurement lies somewhere within the interval. Distributions of such measurement intervals can be summarized as probability boxes and Dempster–Shafer structures over the real numbers, which incorporate both aleatoric and epistemic uncertainties.
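As a minimal sketch of the interval view (the helper name is illustrative), interval arithmetic propagates bounds through an operation without asserting any probability distribution inside them:

```python
def interval_add(a, b):
    """Sum of two interval-valued measurements, each given as (lo, hi)."""
    return (a[0] + b[0], a[1] + b[1])

# Two measurements known only to within plus-minus ranges
x = (9.9, 10.1)            # 10.0 +/- 0.1
y = (4.8, 5.2)             # 5.0 +/- 0.2
print(interval_add(x, y))  # (14.7, 15.3): the sum lies somewhere in this interval
```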
See also
References
Further reading
Bich, W., Cox, M. G., and Harris, P. M. Evolution of the "Guide to the Expression of Uncertainty in Measurement". Metrologia, 43(4):S161–S166, 2006.
Cox, M. G., and Harris, P. M. SSfM Best Practice Guide No. 6, Uncertainty evaluation. Technical report DEM-ES-011, National Physical Laboratory, 2006.
Ellison S. L. R., Williams A. (Eds). Eurachem/CITAC guide: Quantifying Uncertainty in Analytical Measurement, Third edition, (2012) ISBN 978-0-948926-30-3. Available from www.eurachem.org.
Grabe, M. Generalized Gaussian Error Calculus, Springer 2010.
EA. Expression of the uncertainty of measurement in calibration. Technical Report EA-4/02, European Co-operation for Accreditation, 1999.
Lira, I. Evaluating the Uncertainty of Measurement. Fundamentals and Practical Guidance. Institute of Physics, Bristol, UK, 2002.
Majcen, N., and Taylor, P. (Eds). Practical examples on traceability, measurement uncertainty and validation in chemistry, Vol 1, 2010.
Possolo, A., and Iyer, H. K. Concepts and tools for the evaluation of measurement uncertainty. Rev. Sci. Instrum., 88:011301, 2017.
UKAS M3003 The Expression of Uncertainty and Confidence in Measurement (Edition 3, November 2012) UKAS
External links
NPLUnc
Estimate of temperature and its uncertainty in small systems, 2011.
Introduction to evaluating uncertainty of measurement
JCGM 200:2008. International Vocabulary of Metrology – Basic and general concepts and associated terms, 3rd Edition. Joint Committee for Guides in Metrology.
ISO 3534-1:2006. Statistics – Vocabulary and symbols – Part 1: General statistical terms and terms used in probability. ISO
JCGM 106:2012. Evaluation of measurement data – The role of measurement uncertainty in conformity assessment. Joint Committee for Guides in Metrology.
NIST. Uncertainty of measurement results.
Measurement | Measurement uncertainty | [
"Physics",
"Mathematics"
] | 3,461 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
3,069,617 | https://en.wikipedia.org/wiki/Technology%20Compatibility%20Kit | A Technology Compatibility Kit (TCK) is a suite of tests that at least nominally checks a particular alleged implementation of a Java Specification Request (JSR) for compliance. It is one of the three required pieces for a ratified JSR in the Java Community Process, which are:
the JSR specification
the JSR reference implementation
the Technology Compatibility Kit (TCK)
Contents and architecture
TCKs tend to be obtained from the Specification Lead of a given JSR. They usually (but not always) consist of a graphical host application which communicates over TCP/IP with the device or Java virtual machine that is under test. Tests are typically obtained by the device over HTTP, and results are posted back to the host application in a similar way. This decoupling enables TCKs to be used to test virtual machines on devices such as CLDC mobile phones which do not have the power to run the full TCK host application.
The tests contained in the TCK are supposedly derived from the statements in the JSR specification. Any given API will have a set of tests to ensure that it behaves in the intended way, including in error conditions.
In order to state conformance with a given JSR, a Java implementation has to pass the associated TCK. Any (rare) exceptions have to be negotiated with the specification lead. Because of this, TCKs are of great importance when implementing a JSR. The first great milestone is to get the TCK running in the first place, which necessarily involves the Java implementation and underlying networking stack having a certain level of maturity. Next, the TCK must be properly configured - because they must be flexible enough to cope with any implementation, there are many options. (For example, listing all the supported media formats and associated optional controls for JSR135). Particular tests also require some setup activity - this tends to be particularly complex for the tests which ensure correct behaviour in error conditions, because the Java implementation must be put in the right state to cause each error. Finally, each failing test must be fixed, which is usually handled by the usual defect tracking mechanisms.
Some Java implementors consider their product to be mainly complete once the TCKs pass. Whilst it's true that the TCKs are quite comprehensive, there are many areas that they do not cover. These include performance, as well as the optional features. There's no alternative but to do much real-world testing to address these shortcomings, although additional test suites such as JDTS may help.
TCK for the Java platform
The Technology Compatibility Kit for a particular Java platform is called Java Compatibility Kit (JCK). It is an extensive test suite used by Oracle and licensees to ensure compatible implementations of the platform.
The JCK for Java 6.0 source code has been released. The associated license did not initially allow users to compile or run the tests, but the right to see the code is not associated with tainting concerns, and public comments on the source code are allowed. However, since the release of OpenJDK, a specific license allows running the JCK in the OpenJDK context, that is for any GPL implementation deriving substantially from OpenJDK.
The OpenJDK Community TCK License Agreement v 2.0 has been published for the Java SE 7 Specification since December 2011.
TCK framework
The JavaTest harness tool is today the most common unit testing framework used to verify the implementation compliance. It is a general purpose testing framework designed to run TCK tests. However, some specifications are also using JUnit or TestNG.
License and controversy
Subsequent to Sun's release of OpenJDK, Sun released a specific license to permit running the TCK in the OpenJDK context for any GPL implementation deriving substantially from OpenJDK.
This requirement denies the Apache Harmony project an Apache License-compatible right to use the TCK. On November 9, 2010, the Apache Software Foundation threatened to withdraw from the Java Community Process if they were not granted a TCK license for Harmony without additional restrictions.
On December 9, 2010, the Apache Software Foundation resigned its seat on the Java SE/EE Executive Committee.
See also
Java Community Process
JavaTest harness
References
External links
The Java Compatibility Test Tools
JCP Community Resources - TCK Tools
Java platform | Technology Compatibility Kit | [
"Technology"
] | 876 | [
"Computing platforms",
"Java platform"
] |
3,069,677 | https://en.wikipedia.org/wiki/Monkey | Monkey is a common name that may refer to most mammals of the infraorder Simiiformes, also known as simians. Traditionally, all animals in the group now known as simians are counted as monkeys except the apes. Thus monkeys, in that sense, constitute an incomplete paraphyletic grouping; however, in the broader sense based on cladistics, apes (Hominoidea) are also included, making the terms monkeys and simians synonyms in regard to their scope.
In 1812, Étienne Geoffroy grouped the apes and the Cercopithecidae group of monkeys together and established the name Catarrhini, "Old World monkeys" ("singes de l'Ancien Monde" in French). The extant sister of the Catarrhini in the monkey ("singes") group is the Platyrrhini (New World monkeys). Some nine million years before the divergence between the Cercopithecidae and the apes, the Platyrrhini emerged within "monkeys" by migration to South America, likely by rafting across the ocean. Apes are thus deep in the tree of extant and extinct monkeys, and any of the apes is more closely related to the Cercopithecidae than the Platyrrhini are.
Many monkey species are tree-dwelling (arboreal), although there are species that live primarily on the ground, such as baboons. Most species are mainly active during the day (diurnal). Monkeys are generally considered to be intelligent, especially the Old World monkeys.
Within suborder Haplorhini, the simians are a sister group to the tarsiers – the two members diverged some 70 million years ago. New World monkeys and catarrhine monkeys emerged within the simians roughly 35 million years ago. Old World monkeys and apes emerged within the catarrhine monkeys about 25 million years ago. Extinct basal simians such as Aegyptopithecus or Parapithecus (35–32 million years ago) are also considered monkeys by primatologists.
Lemurs, lorises, and galagos are not monkeys, but strepsirrhine primates (suborder Strepsirrhini). The simians' sister group, the tarsiers, are also haplorhine primates; however, they are also not monkeys.
Apes emerged within monkeys as sister of the Cercopithecidae in the Catarrhini, so cladistically they are monkeys as well. However, there has been resistance to directly designate apes (and thus humans) as monkeys, so "Old World monkey" may be taken to mean either the Cercopithecoidea (not including apes) or the Catarrhini (including apes). That apes are monkeys was already realized by Georges-Louis Leclerc, Comte de Buffon in the 18th century. Linnaeus placed this group in 1758 together with the tarsiers, in a single genus "Simia" (sans Homo), an ensemble now recognised as the Haplorhini.
Monkeys, including apes, can be distinguished from other primates by having only two pectoral nipples, a pendulous penis, and a lack of sensory whiskers.
Historical and modern terminology
According to the Online Etymology Dictionary, the word "monkey" may originate in a German version of the Reynard the Fox fable, published c. 1580. In this version of the fable, a character named Moneke is the son of Martin the Ape. In English, no clear distinction was originally made between "ape" and "monkey"; thus the 1911 Encyclopædia Britannica entry for "ape" notes that it is either a synonym for "monkey" or is used to mean a tailless humanlike primate. Colloquially, the terms "monkey" and "ape" are widely used interchangeably. Also, a few monkey species have the word "ape" in their common name, such as the Barbary ape.
Later in the first half of the 20th century, the idea developed that there were trends in primate evolution and that the living members of the order could be arranged in a series, leading through "monkeys" and "apes" to humans. Monkeys thus constituted a "grade" on the path to humans and were distinguished from "apes".
Scientific classifications are now more often based on monophyletic groups, that is groups consisting of all the descendants of a common ancestor. The New World monkeys and the Old World monkeys are each monophyletic groups, but their combination was not, since it excluded hominoids (apes and humans). Thus, the term "monkey" no longer referred to a recognized scientific taxon. The smallest accepted taxon which contains all the monkeys is the infraorder Simiiformes, or simians. However this also contains the hominoids, so that monkeys are, in terms of currently recognized taxa, non-hominoid simians. Colloquially and pop-culturally, the term is ambiguous and sometimes monkey includes non-human hominoids. In addition, frequent arguments are made for a monophyletic usage of the word "monkey" from the perspective that usage should reflect cladistics.
Several science-fiction and fantasy stories have depicted non-human (fantastical or alien) antagonistic characters refer to humans as monkeys, usually in a derogatory manner, as a form of metacommentary.
A group of monkeys may be commonly referred to as a tribe or a troop.
Two separate groups of primates are referred to as "monkeys": New World monkeys (platyrrhines) from South and Central America and Old World monkeys (catarrhines in the superfamily Cercopithecoidea) from Africa and Asia. Apes (hominoids)—consisting of gibbons, orangutans, gorillas, chimpanzees and bonobos, and humans—are also catarrhines but were classically distinguished from monkeys. Tailless monkeys may be called "apes", incorrectly according to modern usage; thus the tailless Barbary macaque is historically called the "Barbary ape".
Description
As apes emerged within the monkey group as the sister of the Old World monkeys, characteristics that describe monkeys are generally shared by apes as well. Williams et al. outlined evolutionary features, including in stem groupings, contrasted against the other primates such as the tarsiers and the lemuriformes.
Monkeys range in size from the pygmy marmoset, which can be as small as 117 mm (4.6 in) with a 172 mm (6.8 in) tail and just over 100 g (3.5 oz) in weight, to the male mandrill, almost 1 m (3.3 ft) long and weighing up to 36 kg (79 lb). Some are arboreal (living in trees) while others live on the savanna; diets differ among the various species but may contain any of the following: fruit, leaves, seeds, nuts, flowers, eggs and small animals (including insects and spiders).
Some characteristics are shared among the groups; most New World monkeys have long tails, with those in the Atelidae family being prehensile, while Old World monkeys have non-prehensile tails or no visible tail at all. Old World monkeys have trichromatic color vision like that of humans, while New World monkeys may be trichromatic, dichromatic, or—as in the owl monkeys and greater galagos—monochromatic. Although both the New and Old World monkeys, like the apes, have forward-facing eyes, the faces of Old World and New World monkeys look very different, though again, each group shares some features such as the types of noses, cheeks and rumps.
Classification
The following list shows where the various monkey families (bolded) are placed in the classification of living (extant) primates.
Order Primates
Suborder Strepsirrhini: lemurs, lorises, and galagos
Suborder Haplorhini: tarsiers, monkeys, and apes
Infraorder Tarsiiformes
Family Tarsiidae: tarsiers
Infraorder Simiiformes: simians
Parvorder Platyrrhini: New World monkeys
Family Callitrichidae: marmosets and tamarins (42 species)
Family Cebidae: capuchins and squirrel monkeys (14 species)
Family Aotidae: night monkeys (11 species)
Family Pitheciidae: titis, sakis, and uakaris (41 species)
Family Atelidae: howler, spider, and woolly monkeys (24 species)
Parvorder Catarrhini
Superfamily Cercopithecoidea
Family Cercopithecidae: Old World monkeys (135 species)
Superfamily Hominoidea: apes
Family Hylobatidae: gibbons ("lesser apes") (20 species)
Family Hominidae: great apes (including humans, gorillas, chimpanzees, and orangutans) (8 species)
Cladogram with extinct families
Below is a cladogram with some extinct monkey families. Generally, extinct non-hominoid simians, including early catarrhines, are discussed as monkeys as well as simians or anthropoids, which cladistically means that Hominoidea are monkeys as well, restoring monkeys as a single grouping. It is indicated approximately how many million years ago (Mya) the clades diverged into newer clades. It is thought that the New World monkeys originated as an "Old World monkey" group that drifted from the Old World (probably Africa) to the New World (South America).
Relationship with humans
The many species of monkey have varied relationships with humans. Some are kept as pets, others used as model organisms in laboratories or in space missions. They may be killed in monkey drives (when they threaten agriculture) or used as service animals for the disabled.
In some areas, some species of monkey are considered agricultural pests, and can cause extensive damage to commercial and subsistence crops. This can have important implications for the conservation of endangered species, which may be subject to persecution. In some instances farmers' perceptions of the damage may exceed the actual damage. Monkeys that have become habituated to human presence in tourist locations may also be considered pests, attacking tourists.
Public exhibition
Many zoos have maintained a facility in which monkeys and other primates are kept within enclosures for public entertainment. Commonly known as a monkey house (primatarium), sometimes styled Monkey House, notable examples include London Zoo's Monkey Valley; Zoo Basel's Monkey house/exhibit; the Monkey Tropic House at Krefeld Zoo; Bronx Zoo's Monkey House; Monkey Jungle, Florida; Lahore Zoo's Monkey House; Monkey World, Dorset, England; and Edinburgh Zoo's Monkey House. The former cinema The Scala in Kings Cross spent a short time as a primatarium.
As service animals for disabled people
Some organizations train capuchin monkeys as service animals to assist quadriplegics and other people with severe spinal cord injuries or mobility impairments. After being socialized in a human home as infants, the monkeys undergo extensive training before being placed with disabled people. Around the house, the monkeys assist with daily tasks such as feeding, fetching, manipulating objects, and personal care.
Helper monkeys are usually trained in schools by private organizations, taking seven years to train, and are able to serve 25–30 years (two to three times longer than a guide dog).
In 2010, the U.S. federal government revised its definition of service animal under the Americans with Disabilities Act (ADA). Non-human primates are no longer recognized as service animals under the ADA. The American Veterinary Medical Association does not support the use of non-human primates as assistance animals because of animal welfare concerns, the potential for serious injury to people, and risks that primates may transfer dangerous diseases to humans.
In experiments
The most common monkey species found in animal research are the grivet, the rhesus macaque, and the crab-eating macaque, which are either wild-caught or purpose-bred. They are used primarily because of their relative ease of handling, their fast reproductive cycle (compared to apes) and their psychological and physical similarity to humans. Worldwide, it is thought that between 100,000 and 200,000 non-human primates are used in research each year, 64.7% of which are Old World monkeys, and 5.5% New World monkeys. This number represents a very small fraction of all animals used in research. Between 1994 and 2004 the United States used an average of 54,000 non-human primates per year, while around 10,000 non-human primates were used in the European Union in 2002.
In space
A number of countries have used monkeys as part of their space exploration programmes, including the United States and France. The first monkey in space was Albert II, who flew in the US-launched V-2 rocket on June 14, 1949.
As food
Monkey brains are eaten as a delicacy in parts of South Asia, Africa and China. Monkeys are sometimes eaten in parts of Africa, where they can be sold as "bushmeat". In traditional Islamic dietary laws, the eating of monkeys is forbidden.
Literature
Sun Wukong (the "Monkey King"), a character who figures prominently in Chinese mythology, is the protagonist in the classic Chinese novel Journey to the West.
Monkeys are prevalent in numerous books, television programs, and movies. The television series Monkey and the literary characters Monsieur Eek and Curious George are all examples.
Informally, "monkey" may refer to apes, particularly chimpanzees, gibbons, and gorillas. Author Terry Pratchett alludes to this difference in usage in his Discworld novels, in which the Librarian of the Unseen University is an orangutan who gets very violent if referred to as a monkey. Another example is the use of Simians in Chinese poetry.
The winged monkeys are prominent characters in L. Frank Baum's Wizard of Oz books and in the 1939 film based on Baum's 1900 novel The Wonderful Wizard of Oz.
Religion and worship
Monkey is the symbol of fourth Tirthankara in Jainism, Abhinandananatha.
Hanuman, a prominent deity in Hinduism, is a human-like monkey god who is believed to bestow courage, strength and longevity to the person who thinks about him or Rama.
In Buddhism, the monkey is an early incarnation of Buddha but may also represent trickery and ugliness. The Chinese Buddhist "mind monkey" metaphor refers to the unsettled, restless state of human mind. Monkey is also one of the Three Senseless Creatures, symbolizing greed, with the tiger representing anger and the deer lovesickness.
The Sanzaru, or three wise monkeys, are revered in Japanese folklore; together they embody the proverbial principle to "see no evil, hear no evil, speak no evil".
The Moche people of ancient Peru worshipped nature. They placed emphasis on animals and often depicted monkeys in their art.
The Tzeltal people of Mexico worshipped monkeys as incarnations of their dead ancestors.
Zodiac
The Monkey (猴) is the ninth in the twelve-year cycle of animals which appear in the Chinese zodiac related to the Chinese calendar.
See also
List of New World monkey species
List of cercopithecoids (Old World monkeys)
List of individual monkeys
List of fictional primates
List of primates
List of primates by population
International Primate Day
Monkey Day
Signifying monkey
Notes
References
Literature cited
Further reading
"How to Avoid Monkey Bites and Attacks in Southeast Asia" by Gregory Rodgers, Trip Savvy, 21 Dec 2018
"Monkeys and Monkey Gods in Mythology, Folklore, and Religion" by Anniina Jokinen, Luminarium: Anthology of English Literature
"The Impossible Housing and Handling Conditions of Monkeys in Research Laboratories", by Viktor Reinhardt, International Primate Protection League, August 2001
The Problem with Pet Monkeys: Reasons Monkeys Do Not Make Good Pets , an article by veterinarian Lianne McLeod on About.com
Helping Hands: Monkey helpers for the disabled, a U.S. national non-profit organization based in Boston Massachusetts that places specially trained capuchin monkeys with people who are paralyzed or who live with other severe mobility impairments
External links
Simians
Extant Eocene first appearances
Paraphyletic groups | Monkey | [
"Biology"
] | 3,360 | [
"Phylogenetics",
"Paraphyletic groups"
] |
3,069,688 | https://en.wikipedia.org/wiki/Bombykol | Bombykol is a pheromone released by the female silkworm moth to attract mates. It is also the sex pheromone in the wild silk moth (Bombyx mandarina). Discovered by Adolf Butenandt in 1959, it was the first pheromone to be characterized chemically.
Minute quantities of this pheromone can be used per acre of land to confuse male insects about the location of their female partners. It can thus serve as a lure in traps to remove insects effectively without spraying crops with large amounts of pesticides. Butenandt named the substance after the moth's Latin name Bombyx mori.
In vivo it appears that bombykol is the natural ligand for a pheromone binding protein, BmorPBP, which escorts the pheromone to the pheromone receptor.
Biosynthesis
Bombykol is known to be derived from acetyl-CoA via the C-16 fatty acyl palmitoyl-CoA. Palmitoyl-CoA is converted to bombykol in steps that involve desaturation and reductive modification of the carbonyl carbon. Compared to other Type I pheromones, bombykol biosynthesis does not need chain-shortening or any other kind of modification of the terminal hydroxyl group.
A desaturase enzyme encoded by the gene Bmpgdesat1 (Desat1) produces the monoene (11Z)-hexadecenoyl-CoA as well as the diene (10E,12Z)-10,12-hexadecadienoyl-CoA. This desaturase is the only enzyme necessary to catalyze these two consecutive desaturation steps.
The bombykol acyl precursor (10E,12Z)-10,12-hexadecadienoate is primarily found as a triacylglycerol ester in the cytoplasmic lipid droplets of pheromone gland cells of the moth. When the adult females emerge from their pupae, the neurohormone PBAN (pheromone biosynthesis-activating neuropeptide) starts signaling events that help control the lipolysis of the stored triacylglycerols, releasing (10E,12Z)-10,12-hexadecadienoate for its final reductive modification. The mechanism of the lipolytic release of (10E,12Z)-10,12-hexadecadienoate from triacylglycerols is not completely known, but candidate lipase-encoding genes have been identified.
References
Insect pheromones
Insect ecology
Primary alcohols
Conjugated dienes | Bombykol | [
"Chemistry"
] | 574 | [
"Insect pheromones",
"Chemical ecology"
] |
3,069,747 | https://en.wikipedia.org/wiki/Non-functional%20requirement | In systems engineering and requirements engineering, a non-functional requirement (NFR) is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviours. They are contrasted with functional requirements that define specific behavior or functions. The plan for implementing functional requirements is detailed in the system design. The plan for implementing non-functional requirements is detailed in the system architecture, because they are usually architecturally significant requirements.
In software architecture, non-functional requirements are known as "architectural characteristics". Note that synchronous communication between software architectural components entangles them, and they must then share the same architectural characteristics.
Definition
Broadly, functional requirements define what a system is supposed to do and non-functional requirements define how a system is supposed to be. Functional requirements are usually in the form of "system shall do <requirement>", an individual action or part of the system, perhaps explicitly in the sense of a mathematical function, a black box description input, output, process and control functional model or IPO model. In contrast, non-functional requirements are in the form of "system shall be <requirement>", an overall property of the system as a whole or of a particular aspect and not a specific function. The system's overall properties commonly mark the difference between whether the development project has succeeded or failed.
Non-functional requirements are often called the "quality attributes" of a system. Other terms for non-functional requirements are "qualities", "quality goals", "quality of service requirements", "constraints", "non-behavioral requirements", or "technical requirements". Informally these are sometimes called the "ilities", from attributes like stability and portability. Qualities—that is non-functional requirements—can be divided into two main categories:
Execution qualities, such as safety, security and usability, which are observable during operation (at run time).
Evolution qualities, such as testability, maintainability, extensibility and scalability, which are embodied in the static structure of the system.
It is important to specify non-functional requirements in a specific and measurable way.
Examples
A system may be required to present the user with a display of the number of records in a database. This is a functional requirement. How current this number needs to be, is a non-functional requirement. If the number needs to be updated in real time, the system architects must ensure that the system is capable of displaying the record count within an acceptably short interval of the number of records changing.
Sufficient network bandwidth may be a non-functional requirement of a system. Other examples include:
Accessibility
Adaptability
Auditability and control
Availability (see service level agreement)
Backup
Boot up time
Capacity, current and forecast
Certification
Compliance
Configuration management
Conformance
Cost, initial and life-cycle cost
Data integrity
Data retention
Dependency on other parties
Deployment
Development environment
Disaster recovery
Documentation
Durability
Efficiency (resource consumption for given load)
Effectiveness (resulting performance in relation to effort)
Elasticity
Emotional factors (like fun or absorbing or has "wow factor")
Environmental protection
Escrow
Ethics
Exploitability
Extensibility (adding features, and carry-forward of customizations at next major version upgrade)
Failure management
Fault tolerance (e.g. operational system monitoring, measuring, and management)
Flexibility (e.g. to deal with future changes in requirements)
Footprint reduction (e.g. reducing the executable file size)
Integrability (e.g. ability to integrate components)
Internationalization and localization
Interoperability
Legal and licensing issues or patent-infringement-avoidability
Maintainability (e.g. mean time to repair – MTTR)
Management
Memory optimization
Modifiability
Network topology
Open source
Operability
Performance / response time (performance engineering)
Platform compatibility
Privacy (compliance to privacy laws)
Portability
Quality (e.g. faults discovered, faults delivered, fault removal efficacy)
Readability
Reliability (e.g. mean time between/to failures – MTBF/MTTF)
Reporting
Resilience
Resource constraints (processor speed, memory, disk space, network bandwidth, etc.)
Response time
Reusability
Robustness
Safety or factor of safety
Scalability (horizontal, vertical)
Security (cyber and physical)
Software, tools and standards compatibility
Stability
Supportability
Testability
Throughput
Transparency
Usability (human factors) by target user community
Volume testing
See also
ISO/IEC 25010:2011
Consortium for IT Software Quality
ISO/IEC 9126
FURPS
Requirements analysis
Usability requirements
Non-Functional Requirements framework
Architecturally Significant Requirements
SNAP Points
References
External links
Software requirements
Systems engineering
Software quality
Management cybernetics | Non-functional requirement | [
"Engineering"
] | 935 | [
"Software engineering",
"Systems engineering",
"Software requirements"
] |
3,069,854 | https://en.wikipedia.org/wiki/Electrolyte%E2%80%93insulator%E2%80%93semiconductor%20sensor | Within electronics, an Electrolyte–insulator–semiconductor (EIS) sensor is a sensor that is made of these three components:
an electrolyte with the chemical that should be measured
an insulator that allows field-effect interaction, without leakage currents between the two other components
a semiconductor to register the chemical changes
The EIS sensor can be used in combination with other structures, for example to construct a light-addressable potentiometric sensor (LAPS).
References
Sensors | Electrolyte–insulator–semiconductor sensor | [
"Technology",
"Engineering"
] | 99 | [
"Sensors",
"Measuring instruments"
] |
3,069,929 | https://en.wikipedia.org/wiki/Formula%20unit | In chemistry, a formula unit is the smallest unit of a non-molecular substance, such as an ionic compound, covalent network solid, or metal. It can also refer to the chemical formula for that unit. Those structures do not consist of discrete molecules, and so for them, the term formula unit is used. In contrast, the terms molecule or molecular formula are applied to molecules. The formula unit is used as an independent entity for stoichiometric calculations. Examples of formula units, include ionic compounds such as and and covalent networks such as and C (as diamond or graphite).
In most cases the formula representing a formula unit will also be an empirical formula, such as calcium carbonate () or sodium chloride (), but it is not always the case. For example, the ionic compounds potassium persulfate (), mercury(I) nitrate , and sodium peroxide , have empirical formulas of , , and , respectively, being presented in the simplest whole number ratios.
In mineralogy, as minerals are almost exclusively either ionic or network solids, the formula unit is used. The number of formula units (Z) and the dimensions of the crystallographic axes are used in defining the unit cell.
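To make the reduction to "simplest whole number ratios" concrete, here is a small Python sketch (the helper name is illustrative, not standard chemistry software) that divides a formula's subscripts by their greatest common divisor to obtain the empirical formula:

```python
from functools import reduce
from math import gcd

def empirical_formula(composition):
    """Reduce subscripts to the simplest whole-number ratio.

    composition: dict mapping element symbol -> subscript,
    e.g. potassium persulfate K2S2O8 as {"K": 2, "S": 2, "O": 8}.
    """
    divisor = reduce(gcd, composition.values())
    return {el: n // divisor for el, n in composition.items()}

print(empirical_formula({"K": 2, "S": 2, "O": 8}))  # {'K': 1, 'S': 1, 'O': 4} -> KSO4
print(empirical_formula({"Na": 2, "O": 2}))         # {'Na': 1, 'O': 1} -> NaO
```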
References
Chemical formulas | Formula unit | [
"Chemistry"
] | 252 | [
"Chemical formulas",
"Chemical structures"
] |
3,070,317 | https://en.wikipedia.org/wiki/CHAIN%20%28industry%20standard%29 | The CECED Convergence Working Group has defined a new platform, called CHAIN (Ceced Home Appliances Interoperating
Network), which defines a protocol for interconnecting different home appliances in a single multibrand system.
It allows for control and automation of all basic appliance-related services in a home: e.g., remote control of appliance operation, energy or load management, remote diagnostics and automatic maintenance support to appliances, downloading and updating of data, programs and services (possibly from the Internet).
See also
KNX/EIB
LonWorks
OSGi
References
Home automation
Interoperability
ja:欧州家電機器委員会#CHAIN | CHAIN (industry standard) | [
"Technology",
"Engineering"
] | 142 | [
"Home automation",
"Telecommunications engineering",
"Interoperability"
] |
3,070,397 | https://en.wikipedia.org/wiki/Software%20visualization | Software visualization or software visualisation refers to the visualization of information of and related to software systems—either the architecture of its source code or metrics of their runtime behavior—and their development process by means of static, interactive or animated 2-D or 3-D visual representations of their structure, execution, behavior, and evolution.
Software system information
Software visualization uses a variety of information available about software systems. Key information categories include:
implementation artifacts such as source codes,
software metric data from measurements or from reverse engineering,
traces that record execution behavior,
software testing data (e.g., test coverage)
software repository data that tracks changes.
Objectives
The objectives of software visualization are to support the understanding of software systems (i.e., its structure) and algorithms (e.g., by animating the behavior of sorting algorithms) as well as the analysis and exploration of software systems and their anomalies (e.g., by showing classes with high coupling) and their development and evolution. One of the strengths of software visualization is to combine and relate information of software systems that are not inherently linked, for example by projecting code changes onto software execution traces.
Software visualization can be used as a tool and technique to explore and analyze software system information, e.g., to discover anomalies similar to the process of visual data mining. For example, software visualization is used in monitoring activities such as tracking code quality or team activity. Visualization is not inherently a method for software quality assurance. Software visualization contributes to Software Intelligence by allowing practitioners to discover and take advantage of knowledge about the inner components of software systems.
Types
Tools for software visualization might be used to visualize source code and quality defects during software development and maintenance activities. There are different approaches to map source code to a visual representation, such as software maps. Their objective includes, for example, the automatic discovery and visualization of quality defects in object-oriented software systems and services. Commonly, they visualize the direct relationship of a class and its methods with other classes in the software system and mark potential quality defects. A further benefit is the support for visual navigation through the software system.
More or less specialized graph drawing software is used for software visualization. A small-scale 2003 survey of researchers active in the reverse engineering and software maintenance fields found that a wide variety of visualization tools were used, including general purpose graph drawing packages like GraphViz and GraphEd, UML tools like Rational Rose and Borland Together, and more specialized tools like Visualization of Compiler Graphs (VCG) and Rigi. The range of UML tools that can act as a visualizer by reverse engineering source is by no means short; a 2007 book noted that besides the two aforementioned tools, ESS-Model, BlueJ, and Fujaba also have this capability, and that Fujaba can also identify design patterns.
See also
Imagix 4D
NDepend
Sourcetrail
Application discovery and understanding
Software maintenance
Software maps
Software diagnosis
Cognitive dimensions of notations
Software archaeology
References
Further reading
External links
SoftVis the ACM Symposium on Software Visualization
VISSOFT 2nd IEEE Working Conference on Software Visualization
EPDV Eclipse Project Dependencies Viewer
Infographics
Software maintenance
Software metrics
Software development
Software quality
Source code
Software
Visualization software | Software visualization | [
"Mathematics",
"Technology",
"Engineering"
] | 672 | [
"Metrics",
"Quantity",
"Computer occupations",
"Software metrics",
"Software engineering",
"Computer science",
"nan",
"Software maintenance",
"Software",
"Software development"
] |
3,070,443 | https://en.wikipedia.org/wiki/Stem-cell%20line | A stem cell line is a group of stem cells that is cultured in vitro and can be propagated indefinitely. Stem cell lines are derived from either animal or human tissues and come from one of three sources: embryonic stem cells, adult stem cells, or induced pluripotent stem cells. They are commonly used in research and regenerative medicine.
Properties
By definition, stem cells possess two properties: (1) they can self-renew, which means that they can divide indefinitely while remaining in an undifferentiated state; and (2) they are pluripotent or multipotent, which means that they can differentiate to form specialized cell types. Due to the self-renewal capacity of stem cells, a stem cell line can be cultured in vitro indefinitely.
A stem-cell line is distinctly different from an immortalized cell line, such as the HeLa line. While stem cells can propagate indefinitely in culture due to their inherent properties, immortalized cells would not normally divide indefinitely but have gained this ability due to mutation. Immortalized cell lines can be generated from cells isolated from tumors, or mutations can be introduced to make the cells immortal.
A stem cell line is also distinct from primary cells. Primary cells are cells that have been isolated and then used immediately. Primary cells cannot divide indefinitely and thus cannot be cultured for long periods of time in vitro.
Types and methods of derivation
Embryonic stem cell line
An embryonic stem cell line is created from cells derived from the inner cell mass of a blastocyst, an early stage, pre-implantation embryo. In humans, the blastocyst stage occurs 4–5 days post fertilization. To create an embryonic stem cell line, the inner cell mass is removed from the blastocyst, separated from the trophectoderm, and cultured on a layer of supportive cells in vitro. In the derivation of human embryonic stem cell lines, embryos left over from in vitro fertilization (IVF) procedures are used. The fact that the blastocyst is destroyed during the process has raised controversy and ethical concerns.
Embryonic stem cells are pluripotent, meaning they can differentiate to form all cell types in the body. In vitro, embryonic stem cells can be cultured under defined conditions to keep them in their pluripotent state, or they can be stimulated with biochemical and physical cues to differentiate them to different cell types.
Adult stem cell line
Adult stem cells are found in juvenile or adult tissues. Adult stem cells are multipotent: they can generate a limited number of differentiated cell types (unlike pluripotent embryonic stem cells). Types of adult stem cells include hematopoietic stem cells and mesenchymal stem cells. Hematopoietic stem cells are found in the bone marrow and generate all blood cell types, including the cells of the immune system. Mesenchymal stem cells are found in umbilical cord blood, amniotic fluid, and adipose tissue and can generate a number of cell types, including osteoblasts, chondrocytes, and adipocytes. In medicine, adult stem cells are most commonly used in bone marrow transplants to treat many bone and blood cancers as well as some autoimmune diseases. (See Hematopoietic stem cell transplantation)
Of the types of adult stem cells that have been successfully isolated and identified, only mesenchymal stem cells can successfully be grown in culture for long periods of time. Other adult stem cell types, such as hematopoietic stem cells, are difficult to grow and propagate in vitro. Identifying methods for maintaining hematopoietic stem cells in vitro is an active area of research. Thus, while mesenchymal stem cell lines exist, other types of adult stem cells that are grown in vitro can better be classified as primary cells.
Induced pluripotent stem-cell (iPSC) line
Induced pluripotent stem cell (iPSC) lines are pluripotent stem cells that have been generated from adult/somatic cells. The method of generating iPSCs was developed by Shinya Yamanaka's lab in 2006; his group demonstrated that the introduction of four specific genes could induce somatic cells to revert to a pluripotent stem cell state.
Compared to embryonic stem-cell lines, iPSC lines are also pluripotent in nature but can be derived without the use of human embryos—a process that has raised ethical concerns. Furthermore, patient-specific iPSC cell lines can be generated—that is, cell lines that are genetically matched to an individual. Patient-specific iPSC lines have been generated for the purposes of studying diseases and for developing patient-specific medical therapies.
Methods of culture
Stem-cell lines are grown and maintained at specific temperature and atmospheric conditions (37 degrees Celsius and 5% CO2) in incubators. Culture conditions such as the cell growth medium and the surface on which cells are grown vary widely depending on the specific stem cell line. Different biochemical factors can be added to the medium to control the cell phenotype—for example to keep stem cells in a pluripotent state or to differentiate them to a specific cell type.
Uses
Stem-cell lines are used in research and regenerative medicine. They can be used to study stem-cell biology and early human development. In the field of regenerative medicine, it has been proposed that stem cells be used in cell-based therapies to replace injured or diseased cells and tissues. Examples of conditions that researchers are working to develop stem-cell-based treatments for include neurodegenerative diseases, diabetes, and spinal cord injuries.
Stem-cell in-vitro
Stem cells can be used as an ideal in vitro platform to study developmental changes at the molecular level. Neural stem cells (NSCs), for example, have been used as a model to study the mechanisms behind the differentiation and maturation of cells of the central nervous system (CNS). These studies have gained more attention recently since NSC cultures can be optimised as models of neurodegenerative diseases and brain tumors.
Ethical issues
There is controversy associated with the derivation and use of human embryonic stem cell lines. This controversy stems from the fact that derivation of human embryonic stem cells requires the destruction of a blastocyst-stage, pre-implantation human embryo. There is a wide range of viewpoints regarding the moral consideration that blastocyst-stage human embryos should be given.
Access to human embryonic stem-cell lines
United States
In the United States, Executive Order 13505 established that federal money can be used for research in which approved human embryonic stem-cell (hESC) lines are used, but it cannot be used to derive new lines. The National Institutes of Health (NIH) Guidelines on Human Stem Cell Research, effective July 7, 2009, implemented the Executive Order 13505 by establishing criteria which hESC lines must meet to be approved for funding. The NIH Human Embryonic Stem Cell Registry can be accessed online and has updated information on cell lines eligible for NIH funding. There are 486 approved lines as of January 2022.
Studies have found that approved hESC lines are not uniformly used in the US: data from cell banks and surveys of researchers indicate that only a handful of the available hESC lines are routinely used in research. Access and utility are cited as the two primary factors influencing which hESC lines scientists choose to work with.
A 2011 survey of stem cell scientists in the US who use hESC lines in their research found that 54% of respondents used two or fewer lines and 75% used three or fewer lines.
Another study tracked cell-line requests fulfilled from the largest US repositories, the National Stem Cell Bank (NSCB) and the Harvard Stem Cell Institute (HSCI; Cambridge, MA, USA), for the periods March 1999 – December 2008 (for NSCB) and April 2004 – December 2008 (for HSCI). For NSCB, out of twenty-one approved cell lines, 77% of requests were for two of the lines (H1 and H9). For HSCI, out of the 17 lines requested more than once, 24.7% of requests were for the two most commonly requested lines.
See also
Stem cell
Embryonic stem cell
Induced pluripotent stem cell
Induced stem cells
Adult stem cell
Cell culture
Immortalised cell line
Stem-cell controversy
Stem-cell treatments
Regenerative Medicine
References
Stem cells
Induced stem cells
Cell culture | Stem-cell line | [
"Biology"
] | 1,758 | [
"Model organisms",
"Cell culture",
"Induced stem cells",
"Stem cell research"
] |
3,070,481 | https://en.wikipedia.org/wiki/Nat%20%28unit%29 | The natural unit of information (symbol: nat), sometimes also nit or nepit, is a unit of information or information entropy, based on natural logarithms and powers of e, rather than the powers of 2 and base 2 logarithms, which define the shannon. This unit is also known by its unit symbol, the nat. One nat is the information content of an event when the probability of that event occurring is 1/e.
One nat is equal to 1/ln 2 shannons ≈ 1.44 Sh or, equivalently, 1/ln 10 hartleys ≈ 0.434 Hart.
History
Boulton and Wallace used the term nit in conjunction with minimum message length, which was subsequently changed by the minimum description length community to nat to avoid confusion with the nit used as a unit of luminance.
Alan Turing used the natural ban.
Entropy
Shannon entropy (information entropy), being the expected value of the information of an event, is inherently a quantity of the same type and with a unit of information. The International System of Units, by assigning the same unit (joule per kelvin) both to heat capacity and to thermodynamic entropy, implicitly treats information entropy as a quantity of dimension one, with 1 nat = 1. Systems of natural units that normalize the Boltzmann constant to 1 are effectively measuring thermodynamic entropy with the nat as unit.
When the shannon entropy is written using a natural logarithm, H = −Σi pi ln pi, it is implicitly giving a number measured in nats.
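As a small worked illustration (the distribution is chosen for the example), the following Python sketch computes an entropy in nats and converts it to shannons and hartleys using the factors above:

```python
import math

def entropy_nats(p):
    """Shannon entropy H = -sum(pi * ln pi), in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

H = entropy_nats([0.5, 0.25, 0.25])
print(f"{H:.4f} nat")                   # 1.0397 nat
print(f"{H / math.log(2):.4f} Sh")      # 1 nat = 1/ln 2 shannons
print(f"{H / math.log(10):.4f} Hart")   # 1 nat = 1/ln 10 hartleys
```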
Notes
References
Further reading
Units of information | Nat (unit) | [
"Mathematics"
] | 313 | [
"Units of information",
"Quantity",
"Units of measurement"
] |
14,626,122 | https://en.wikipedia.org/wiki/Lactate%20dehydrogenase | Lactate dehydrogenase (LDH or LD) is an enzyme found in nearly all living cells. LDH catalyzes the conversion of pyruvate to lactate and back, as it converts NAD+ to NADH and back. A dehydrogenase is an enzyme that transfers a hydride from one molecule to another.
LDH exists in four distinct enzyme classes. This article is specifically about the NAD(P)-dependent L-lactate dehydrogenase. Other LDHs act on D-lactate and/or are dependent on cytochrome c: D-lactate dehydrogenase (cytochrome) and L-lactate dehydrogenase (cytochrome).
LDH is expressed extensively in body tissues, such as blood cells and heart muscle. Because it is released during tissue damage, it is a marker of common injuries and disease such as heart failure.
Reaction
Lactate dehydrogenase catalyzes the interconversion of pyruvate and lactate with concomitant interconversion of NADH and NAD+. It converts pyruvate, the final product of glycolysis, to lactate when oxygen is absent or in short supply, and it performs the reverse reaction during the Cori cycle in the liver. At high concentrations of lactate, the enzyme exhibits feedback inhibition, and the rate of conversion of pyruvate to lactate is decreased. It also catalyzes the dehydrogenation of 2-hydroxybutyrate, but this is a much poorer substrate than lactate.
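Written out, the catalyzed reaction is pyruvate + NADH + H+ ⇌ L-lactate + NAD+; the forward, lactate-forming direction regenerates the NAD+ needed to keep glycolysis running when oxygen is in short supply.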
Active site
LDH in humans uses His(193) as the proton acceptor and works in unison with the coenzyme-binding residues (Arg99 and Asn138) and substrate-binding residues (Arg106, Arg169, Thr248). The His(193) active site is not only found in the human form of LDH, but is found in many different animals, showing the convergent evolution of LDH. The two different subunits of LDH (LDHA, also known as the M subunit of LDH, and LDHB, also known as the H subunit of LDH) both retain the same active site and the same amino acids participating in the reaction. The noticeable difference between the two subunits that make up LDH's tertiary structure is the replacement of alanine (in the M chain) with a glutamine (in the H chain). This small but notable change is believed to be the reason the H subunit can bind NAD faster, and the reason the M subunit's catalytic activity isn't reduced in the presence of acetylpyridine adenine dinucleotide, whereas the H subunit's activity is reduced fivefold.
Isoenzymes
Enzymatically active lactate dehydrogenase is a tetramer consisting of four subunits. The two most common subunits are the LDH-M and LDH-H peptides, named for their discovery in muscle and heart tissue, and encoded by the LDHA and LDHB genes, respectively. These two subunits can form five possible tetramers (isoenzymes): LDH-1 (4H), LDH-5 (4M), and the three mixed tetramers LDH-2 (3H1M), LDH-3 (2H2M), and LDH-4 (1H3M). These five isoforms are enzymatically similar but show different tissue distribution.
LDH-1 (4H)—in the heart and in RBC (red blood cells), as well as the brain
LDH-2 (3H1M)—in the reticuloendothelial system
LDH-3 (2H2M)—in the lungs
LDH-4 (1H3M)—in the kidneys, placenta, and pancreas
LDH-5 (4M)—in the liver and striated muscle, also present in the brain
LDH-2 is usually the predominant form in the serum. An LDH-1 level higher than the LDH-2 level (a "flipped pattern") suggests myocardial infarction (damage to heart tissues releases heart LDH, which is rich in LDH-1, into the bloodstream). The use of this phenomenon to diagnose infarction has been largely superseded by the use of Troponin I or T measurement.
There are two more mammalian LDH subunits that can be included in LDH tetramers: LDHC and LDHBx. LDHC is a testes-specific LDH protein, that is encoded by the LDHC gene. LDHBx is a peroxisome-specific LDH protein. LDHBx is the readthrough-form of LDHB. LDHBx is generated by translation of the LDHB mRNA, but the stop codon is interpreted as an amino acid-encoding codon. In consequence, translation continues to the next stop codon. This leads to the addition of seven amino acid residues to the normal LDH-H protein. The extension contains a peroxisomal targeting signal, so that LDHBx is imported into the peroxisome.
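Since a tetramer built from two subunit types is an unordered selection of four subunits, the five H/M isoenzymes listed above can be enumerated directly; the following Python sketch (purely illustrative) makes that counting explicit:

```python
from itertools import combinations_with_replacement

# Each LDH tetramer is an unordered selection of four H or M subunits,
# giving the five isoenzymes LDH-1 (4H) through LDH-5 (4M).
for combo in combinations_with_replacement("HM", 4):
    h, m = combo.count("H"), combo.count("M")
    subunits = (f"{h}H" if h else "") + (f"{m}M" if m else "")
    print(f"LDH-{m + 1} ({subunits})")
```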
Protein families
The family also contains L-lactate dehydrogenases that catalyse the conversion of pyruvate to L-lactate, the last step in anaerobic glycolysis. Malate dehydrogenases that catalyse the interconversion of malate to oxaloacetate and participate in the citric acid cycle, and L-2-hydroxyisocaproate dehydrogenases are also members of the family. The N-terminus is a Rossmann NAD-binding fold and the C-terminus is an unusual alpha+beta fold.
Enzyme regulation
This protein may use the morpheein model of allosteric regulation.
Ethanol-induced hypoglycemia
Ethanol is dehydrogenated to acetaldehyde by alcohol dehydrogenase, and the acetaldehyde is further oxidized to acetate by acetaldehyde dehydrogenase. These two reactions each reduce NAD+ to NADH, producing 2 NADH per molecule of ethanol. If large amounts of ethanol are present, large amounts of NADH are produced, leading to a depletion of NAD+. The conversion of pyruvate to lactate is therefore increased, because it regenerates NAD+. An anion-gap metabolic acidosis (lactic acidosis) may thus ensue in ethanol poisoning.
The increased NADH/NAD+ ratio also can cause hypoglycemia in an (otherwise) fasting individual who has been drinking and is dependent on gluconeogenesis to maintain blood glucose levels. Alanine and lactate are major gluconeogenic precursors that enter gluconeogenesis as pyruvate. The high NADH/NAD+ ratio shifts the lactate dehydrogenase equilibrium to lactate, so that less pyruvate can be formed and, therefore, gluconeogenesis is impaired.
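The equilibrium argument can be written in standard mass-action form (a textbook sketch; no numerical value of the equilibrium constant is implied by this article):

```latex
% LDH reaction: pyruvate + NADH + H+  <=>  lactate + NAD+
% A raised NADH/NAD+ ratio forces the lactate/pyruvate ratio up,
% draining the pyruvate pool needed for gluconeogenesis.
\[
K_{\mathrm{eq}}
  = \frac{[\mathrm{lactate}]\,[\mathrm{NAD}^{+}]}
         {[\mathrm{pyruvate}]\,[\mathrm{NADH}]\,[\mathrm{H}^{+}]}
\quad\Longrightarrow\quad
\frac{[\mathrm{lactate}]}{[\mathrm{pyruvate}]}
  = K_{\mathrm{eq}}\,[\mathrm{H}^{+}]\,
    \frac{[\mathrm{NADH}]}{[\mathrm{NAD}^{+}]}
\]
```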
Substrate regulation
LDH is also regulated by the relative concentrations of its substrates. LDH becomes more active during periods of extreme muscular output due to an increase in substrates for the LDH reaction. When skeletal muscles are pushed to produce high levels of power, the demand for ATP relative to the aerobic ATP supply leads to an accumulation of free ADP, AMP, and Pi. The subsequent glycolytic flux, specifically the production of pyruvate, exceeds the capacity of pyruvate dehydrogenase and other shuttle enzymes to metabolize pyruvate. The flux through LDH increases in response to the rising levels of pyruvate and NADH, converting the excess pyruvate into lactate.
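A minimal numerical sketch of this saturation argument follows; both enzymes are reduced to single-substrate Michaelis-Menten kinetics, and all constants are illustrative assumptions rather than measured values:

```python
# Compare flux through a low-capacity pathway (standing in for pyruvate
# dehydrogenase) and a high-capacity, lower-affinity one (standing in for
# LDH) as pyruvate rises. Units and parameter values are arbitrary.
def mm_rate(vmax, km, s):
    """Michaelis-Menten rate for substrate concentration s."""
    return vmax * s / (km + s)

PDH = dict(vmax=1.0, km=0.05)   # saturates early (assumed values)
LDH = dict(vmax=10.0, km=1.0)   # high capacity, lower affinity (assumed)

for pyruvate in (0.02, 0.2, 2.0):          # rest -> moderate -> heavy output
    pdh = mm_rate(PDH["vmax"], PDH["km"], pyruvate)
    ldh = mm_rate(LDH["vmax"], LDH["km"], pyruvate)
    print(f"[pyruvate]={pyruvate:>4}: PDH flux={pdh:.2f}, LDH flux={ldh:.2f}")
```

At low pyruvate the PDH-like pathway dominates; as pyruvate accumulates it saturates while the LDH-like flux keeps growing, mirroring the shift described above.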
Transcriptional regulation
LDH undergoes transcriptional regulation by PGC-1α. PGC-1α regulates LDH by decreasing LDHA mRNA transcription, thereby reducing the enzymatic conversion of pyruvate to lactate.
Genetics
The M and H subunits are encoded by two different genes:
The M subunit is encoded by LDHA, located on chromosome 11p15.4.
The H subunit is encoded by LDHB, located on chromosome 12p12.2-p12.1.
A third isoform, LDHC or LDHX, is expressed only in the testis; its gene is likely a duplicate of LDHA and is also located on the eleventh chromosome (11p15.5-p15.3).
The fourth isoform is localized in the peroxisome. It is a tetramer containing one LDHBx subunit, which is also encoded by the LDHB gene. The LDHBx protein is seven amino acids longer than the LDHB (LDH-H) protein. This amino acid extension is generated by functional translational readthrough.
Mutations of the M subunit have been linked to the rare disease exertional myoglobinuria (see OMIM article), and mutations of the H subunit have been described but do not appear to lead to disease.
Mutations
In rare cases, a mutation in the genes controlling the production of lactate dehydrogenase will lead to a medical condition known as lactate dehydrogenase deficiency. Depending on which gene carries the mutation, one of two types will occur: either lactate dehydrogenase-A deficiency (also known as glycogen storage disease XI) or lactate dehydrogenase-B deficiency. Both of these conditions affect how the body breaks down sugars, primarily in certain muscle cells. Lactate dehydrogenase-A deficiency is caused by a mutation to the LDHA gene, while lactate dehydrogenase-B deficiency is caused by a mutation to the LDHB gene.
This condition is inherited in an autosomal recessive pattern, meaning that both parents must contribute a mutated gene in order for this condition to be expressed.
A complete lactate dehydrogenase enzyme consists of four protein subunits. Since the two most common subunits found in lactate dehydrogenase are encoded by the LDHA and LDHB genes, either variation of this disease causes abnormalities in many of the lactate dehydrogenase enzymes found in the body. In the case of lactate dehydrogenase-A deficiency, mutations to the LDHA gene result in the production of an abnormal lactate dehydrogenase-A subunit that cannot bind to the other subunits to form the complete enzyme. This lack of a functional subunit reduces the amount of enzyme formed, leading to an overall decrease in activity. During anaerobic glycolysis, the mutated enzyme is unable to convert pyruvate into lactate to produce the extra energy the cells need. Since this subunit has the highest concentration in the LDH enzymes found in the skeletal muscles (the primary muscles responsible for movement), high-intensity physical activity will lead to an insufficient amount of energy being produced during this anaerobic phase. This in turn will cause the muscle tissue to weaken and eventually break down, a condition known as rhabdomyolysis. The process of rhabdomyolysis also releases myoglobin into the blood, which will eventually end up in the urine and cause it to become red or brown: a condition known as myoglobinuria. Some other common symptoms are exercise intolerance, which consists of fatigue, muscle pain, and cramps during exercise, and skin rashes. In severe cases, myoglobinuria can damage the kidneys and lead to life-threatening kidney failure. In order to obtain a definitive diagnosis, a muscle biopsy may be performed to confirm low or absent LDH activity. There is currently no specific treatment for this condition.
In the case of lactate dehydrogenase-B deficiency, mutations to the LDHB gene result in the production of an abnormal lactate dehydrogenase-B subunit that cannot bind to the other subunits to form the complete enzyme. As with lactate dehydrogenase-A deficiency, this mutation reduces the overall effectiveness of the enzyme. However, there are some major differences between these two cases. The first is the location where the condition manifests itself: in lactate dehydrogenase-B deficiency, the highest concentration of B subunits is found within the cardiac muscle, the heart. Within the heart, lactate dehydrogenase plays the role of converting lactate back into pyruvate so that the pyruvate can be used again to create more energy. With the mutated enzyme, the overall rate of this conversion is decreased. However, unlike lactate dehydrogenase-A deficiency, this mutation does not appear to cause any symptoms or health problems. At present it is unclear why this is the case. Affected individuals are usually discovered only when routine blood tests indicate low LDH levels in the blood.
Role in muscular fatigue
The onset of acidosis during periods of intense exercise is commonly attributed to an accumulation of hydrogen ions dissociated from lactic acid. Lactic acid was previously thought to cause fatigue, and from this reasoning the idea that lactate production is a primary cause of muscle fatigue during exercise was widely adopted. A closer, mechanistic analysis of lactate production under “anaerobic” conditions shows that there is no biochemical evidence for lactate production through LDH contributing to acidosis. While LDH activity is correlated with muscle fatigue, the production of lactate by means of the LDH complex works as a system to delay the onset of muscle fatigue. George Brooks and colleagues at UC Berkeley, where the lactate shuttle was discovered, showed that lactate is actually a metabolic fuel, not a waste product or the cause of fatigue.
LDH works to prevent muscular failure and fatigue in multiple ways. The lactate-forming reaction generates cytosolic NAD+, which feeds into the glyceraldehyde 3-phosphate dehydrogenase reaction to help maintain cytosolic redox potential and promote substrate flux through the second phase of glycolysis to promote ATP generation. This, in effect, provides more energy to contracting muscles under heavy workloads. The production and removal of lactate from the cell also ejects a proton consumed in the LDH reaction; the removal of excess protons produced in the wake of this fermentation reaction serves as a buffer system against muscle acidosis. Once proton accumulation exceeds the rate of uptake in lactate production and removal through the LDH symport, muscular acidosis occurs.
Blood test
On blood tests, an elevated level of lactate dehydrogenase usually indicates tissue damage, which has multiple potential causes, reflecting its widespread tissue distribution:
Hemolytic anemia
Vitamin B12 deficiency anemia
Infections such as infectious mononucleosis, meningitis, encephalitis, HIV/AIDS. It is notably increased in sepsis.
Infarction, such as bowel infarction, myocardial infarction and lung infarction
Acute kidney disease
Acute liver disease
Rhabdomyolysis
Pancreatitis
Bone fractures
Cancers, notably testicular cancer and lymphoma. A high LDH after chemotherapy may indicate that the treatment has not been successful.
Severe shock
Hypoxia
Low and normal levels of LDH do not usually indicate any pathology. Low levels may be caused by large intake of vitamin C.
LDH is a protein that normally appears throughout the body in small amounts.
Testing in cancer
Many cancers can raise LDH levels, so LDH may be used as a tumor marker, but at the same time, it is not useful in identifying a specific kind of cancer. Measuring LDH levels can be helpful in monitoring treatment for cancer. Noncancerous conditions that can raise LDH levels include heart failure, hypothyroidism, anemia, pre-eclampsia, meningitis, encephalitis, acute pancreatitis, HIV and lung or liver disease.
Tissue breakdown releases LDH, and therefore LDH can be measured as a surrogate for tissue breakdown (e.g., hemolysis). LDH is measured by the lactate dehydrogenase (LDH) test (also known as the LDH test or lactic acid dehydrogenase test). Comparison of the measured LDH values with the normal range helps guide diagnosis.
Hemolysis
In medicine, LDH is often used as a marker of tissue breakdown, as LDH is abundant in red blood cells and can function as a marker for hemolysis. A blood sample that has been handled incorrectly can show falsely elevated LDH levels due to erythrocyte damage.
It can also be used as a marker of myocardial infarction. Following a myocardial infarction, levels of LDH peak at 3–4 days and remain elevated for up to 10 days. In this way, an elevated LDH in which the level of LDH1 is higher than that of LDH2 (the "LDH flip"; normally, in serum, LDH2 is higher than LDH1) can be useful for determining whether a patient has had a myocardial infarction if they present to doctors several days after an episode of chest pain.
Tissue turnover
Other uses are assessment of tissue breakdown in general; this is possible when there are no other indicators of hemolysis. It is used to follow up cancer (especially lymphoma) patients, as cancer cells have a high rate of turnover, with destroyed cells leading to an elevated LDH activity.
HIV
LDH is often measured in HIV patients as a non-specific marker for pneumonia due to Pneumocystis jirovecii (PCP). Elevated LDH in the setting of upper respiratory symptoms in an HIV patient suggests, but is not diagnostic for, PCP. However, in HIV-positive patients with respiratory symptoms, a very high LDH level (>600 IU/L) indicated histoplasmosis (9.33 times more likely) in a study of 120 PCP and 30 histoplasmosis patients.
Testing in other body fluids
Exudates and transudates
Measuring LDH in fluid aspirated from a pleural effusion (or pericardial effusion) can help in the distinction between exudates (actively secreted fluid, e.g., due to inflammation) and transudates (passively secreted fluid, due to a high hydrostatic pressure or a low oncotic pressure). The usual criterion (included in Light's criteria) is that a ratio of pleural-fluid LDH to serum LDH greater than 0.6, or a pleural-fluid LDH greater than two-thirds of the upper limit of the normal laboratory value for serum LDH, indicates an exudate, while lower values indicate a transudate. Different laboratories have different values for the upper limit of serum LDH, but examples include 200 and 300 IU/L. In empyema, the LDH levels, in general, will exceed 1000 IU/L.
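The LDH portion of these criteria is simple enough to express directly; the sketch below implements only the two LDH-based tests (the full Light's criteria also use protein ratios), and the default upper limit of normal serum LDH is an assumed, laboratory-specific value within the 200-300 IU/L range cited above:

```python
# LDH components of Light's criteria, as commonly cited: exudate if the
# pleural/serum LDH ratio exceeds 0.6, or if pleural LDH exceeds two-thirds
# of the lab's upper limit of normal for serum LDH.
def exudate_by_ldh(pleural_ldh, serum_ldh, uln_serum_ldh=250):
    ratio_criterion = pleural_ldh / serum_ldh > 0.6
    absolute_criterion = pleural_ldh > (2.0 / 3.0) * uln_serum_ldh
    return ratio_criterion or absolute_criterion

# Example: pleural LDH 400 IU/L against serum LDH 300 IU/L -> exudate.
print(exudate_by_ldh(400, 300))   # True (ratio 1.33 > 0.6; 400 > 166.7)
```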
Meningitis and encephalitis
High levels of lactate dehydrogenase in cerebrospinal fluid are often associated with bacterial meningitis. In the case of viral meningitis, high LDH, in general, indicates the presence of encephalitis and poor prognosis.
In cancer treatment
LDH is involved in tumor initiation and metabolism. Cancer cells rely on increased glycolysis resulting in increased lactate production in addition to aerobic respiration in the mitochondria, even under oxygen-sufficient conditions (a process known as the Warburg effect). This state of fermentative glycolysis is catalyzed by the A form of LDH. This mechanism allows tumorous cells to convert the majority of their glucose stores into lactate regardless of oxygen availability, shifting use of glucose metabolites from simple energy production to the promotion of accelerated cell growth and replication.
LDH A and the possibility of inhibiting its activity have been identified as a promising target in cancer treatments focused on preventing carcinogenic cells from proliferating. Chemical inhibition of LDH A has demonstrated marked changes in metabolic processes and overall survival of carcinoma cells. Oxamate is a cytosolic inhibitor of LDH A that significantly decreases ATP production in tumorous cells as well as increasing production of reactive oxygen species (ROS). At low concentrations, these ROS drive cancer cell proliferation by activating kinases involved in cell cycle progression and growth factor signaling, but at higher concentrations they can damage DNA through oxidative stress. Secondary lipid oxidation products can also inactivate LDH and impair its ability to regenerate NADH, directly disrupting the enzyme's ability to convert lactate to pyruvate.
While recent studies have shown that LDH activity is not necessarily an indicator of metastatic risk, LDH expression can act as a general marker in the prognosis of cancers. Expression of LDH5 and VEGF in tumors and the stroma has been found to be a strong prognostic factor for diffuse or mixed-type gastric cancers.
Prokaryotes
A cap-membrane-binding domain is found in prokaryotic lactate dehydrogenase. This consists of a large seven-stranded antiparallel beta-sheet flanked on both sides by alpha-helices. It allows for membrane association.
See also
Dehydrogenase
Erythrocyte lactate transporter defect (formerly, myopathy due to lactate transport defect)
Glycogen storage disease
Lactate
Metabolic myopathies
Oxidoreductase
References
Further reading
External links
Chemical pathology
Tumor markers
EC 1.1.1
Enzymes of known structure | Lactate dehydrogenase | [
"Chemistry",
"Biology"
] | 4,565 | [
"Biomarkers",
"Exercise biochemistry",
"Tumor markers",
"Biochemistry",
"Chemical pathology"
] |
14,626,533 | https://en.wikipedia.org/wiki/Retinal%20G%20protein%20coupled%20receptor | RPE-retinal G protein-coupled receptor also known as RGR-opsin is a protein that in humans is encoded by the RGR gene. RGR-opsin is a member of the rhodopsin-like receptor subfamily of GPCR. Like other opsins which bind retinaldehyde, it contains a conserved lysine residue in the seventh transmembrane domain. RGR-opsin comes in different isoforms produced by alternative splicing.
Function
RGR-opsin preferentially binds all-trans-retinal, the dominant form in the dark-adapted retina; upon light exposure, the bound all-trans-retinal is isomerized to 11-cis-retinal. RGR-opsin therefore presumably acts as a photoisomerase that converts all-trans-retinal to 11-cis-retinal, similar to retinochrome in invertebrates. Within rhodopsin and the iodopsins of the rods and cones of the retina, 11-cis-retinal is photoisomerized back to all-trans-retinal. RGR-opsin is exclusively expressed in tissues adjacent to the rods and cones: the retinal pigment epithelium (RPE) and Müller cells.
Phylogeny
The RGR-opsins are restricted to the echinoderms, the hemichordates, and the craniates. The craniates are the taxon that contains the mammals, and with them humans. The RGR-opsins are one of the seven subgroups of the chromopsins; the other groups are the peropsins, the retinochromes, the nemopsins, the astropsins, the varropsins, and the gluopsins. The chromopsins are one of three subgroups of the tetraopsins (also known as RGR/Go or Group 4 opsins); the other groups are the neuropsins and the Go-opsins. The tetraopsins are one of the five major groups of the animal opsins (also known as type 2 opsins); the other groups are the ciliary opsins (c-opsins, cilopsins), the rhabdomeric opsins (r-opsins, rhabopsins), the xenopsins, and the nessopsins. Four of these subclades occur in Bilateria (all but the nessopsins). However, the bilaterian clades constitute a paraphyletic taxon without the opsins from the cnidarians.
In the phylogeny above, each clade contains sequences from opsins and other G protein-coupled receptors. The number of sequences and two pie charts are shown next to each clade. The first pie chart shows the percentage of a certain amino acid at the position corresponding to position 296 in cattle rhodopsin. The amino acids are color-coded: red for lysine (K), purple for glutamic acid (E), orange for arginine (R), dark and mid-gray for other amino acids, and light gray for sequences that have no data at that position. The second pie chart gives the taxon composition for each clade: green stands for craniates, dark green for cephalochordates, mid green for echinoderms, brown for nematodes, pale pink for annelids, dark blue for arthropods, light blue for mollusks, and purple for cnidarians. The branches to the clades carry pie charts giving support values for the branches; the values are, from right to left, SH-aLRT/aBayes/UFBoot. The branches are considered supported when SH-aLRT ≥ 80%, aBayes ≥ 0.95, and UFBoot ≥ 95%. If a support value is above its threshold, the pie chart is black; otherwise it is gray.
Clinical significance
RGR-opsin may be associated with autosomal recessive and autosomal dominant retinitis pigmentosa (arRP and adRP, respectively).
Interactions
RGR-opsin has been shown to interact with KIAA1279.
References
Further reading
G protein-coupled receptors | Retinal G protein coupled receptor | [
"Chemistry"
] | 881 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,626,628 | https://en.wikipedia.org/wiki/Polynomial%20Diophantine%20equation | In mathematics, a polynomial Diophantine equation is an indeterminate polynomial equation for which one seeks solutions restricted to be polynomials in the indeterminate. A Diophantine equation, in general, is one where the solutions are restricted to some algebraic system, typically integers. (In another usage ) Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made initial studies of integer Diophantine equations.
An important type of polynomial Diophantine equation takes the form
$$sa + tb = c,$$
where $a$, $b$, and $c$ are known polynomials, and we wish to solve for $s$ and $t$.
A simple example (and a solution) is the equation
$$s\,(x+1) + t\,(x+2) = 1,$$
which is solved by $s = -1$ and $t = 1$.
A necessary and sufficient condition for a polynomial Diophantine equation to have a solution is for c to be a multiple of the GCD of a and b. In the example above, the GCD of a and b was 1, so solutions would exist for any value of c.
Solutions to polynomial Diophantine equations are not unique: if $(s_0, t_0)$ is a solution of $sa + tb = c$, then so is $(s_1, t_1)$ with $s_1 = s_0 + wb$ and $t_1 = t_0 - wa$ for any polynomial $w$, since the added terms contribute $wab - wab = 0$.
Some polynomial Diophantine equations can be solved using the extended Euclidean algorithm, which works as well with polynomials as it does with integers.
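As a concrete sketch of that approach, the following runs the extended Euclidean algorithm on polynomials using SymPy's polynomial division; the polynomials a and b are the ones from the example above, and the helper function name is my own:

```python
# Extended Euclidean algorithm over polynomials in x, maintaining the
# invariants r0 = s0*a + t0*b and r1 = s1*a + t1*b at every step.
from sympy import symbols, div, simplify

x = symbols('x')

def poly_ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b = g, where g is a GCD of a and b."""
    r0, r1 = a, b
    s0, s1 = 1, 0
    t0, t1 = 0, 1
    while simplify(r1) != 0:
        q, r = div(r0, r1, x)       # r0 = q*r1 + r
        r0, r1 = r1, r
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return r0, s0, t0

a, b = x + 1, x + 2
g, s, t = poly_ext_gcd(a, b)
print(g, s, t)                      # here g is a nonzero constant (GCD 1)
print(simplify(s * a + t * b - g))  # 0, confirming s*a + t*b = g
```

Scaling s and t by c/g then yields a solution of sa + tb = c whenever g divides c.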
References
Diophantine equations
Polynomials | Polynomial Diophantine equation | [
"Mathematics"
] | 263 | [
"Polynomials",
"Mathematical objects",
"Equations",
"Number theory",
"Diophantine equations",
"Algebra"
] |
14,626,644 | https://en.wikipedia.org/wiki/Bromsulfthalein | Bromsulfthalein (also known as bromsulphthalein, bromosulfophthalein, and BSP) is a phthalein dye used in liver function tests. Determining the rate of removal of the dye from the blood stream gives a measure of liver function. The mechanism by which the liver detoxifies BSP is to attach it to glutathione which is the body’s master antioxidant.
References
Triarylmethane dyes
Phthalides
Bromobenzene derivatives
Phenols
Benzenesulfonates
Organic sodium salts | Bromsulfthalein | [
"Chemistry"
] | 122 | [
"Organic sodium salts",
"Salts"
] |
14,626,877 | https://en.wikipedia.org/wiki/Paracrystallinity | In materials science, paracrystalline materials are defined as having short- and medium-range ordering in their lattice (similar to the liquid crystal phases) but lacking crystal-like long-range ordering at least in one direction.
Origin and definition
The words "paracrystallinity" and "paracrystal" were coined by the late Friedrich Rinne in the year 1933. Their German equivalents, e.g. "Parakristall", appeared in print one year earlier.
A general theory of paracrystals has been formulated in a basic textbook, and then further developed/refined by various authors.
Rolf Hosemann's definition of an ideal paracrystal is: "The electron density distribution of any material is equivalent to that of a paracrystal when there is for every building block one ideal point so that the distance statistics to other ideal points are identical for all of these points. The electron configuration of each building block around its ideal point is statistically independent of its counterpart in neighboring building blocks. A building block corresponds then to the material content of a cell of this "blurred" space lattice, which is to be considered a paracrystal."
Theory
Ordering is the regularity in which atoms appear in a predictable lattice, as measured from one point. In a highly ordered, perfectly crystalline material, or single crystal, the location of every atom in the structure can be described exactly by measuring out from a single origin. Conversely, in a disordered structure such as a liquid or amorphous solid, the location of the nearest and, perhaps, second-nearest neighbors can be described from an origin (with some degree of uncertainty), and the ability to predict locations decreases rapidly from there out. The distance over which atom locations can be predicted is referred to as the correlation length. A paracrystalline material exhibits a correlation length somewhere between those of fully amorphous and fully crystalline materials.
The primary, most accessible source of crystallinity information is X-ray diffraction and cryo-electron microscopy, although other techniques may be needed to observe the complex structure of paracrystalline materials, such as fluctuation electron microscopy in combination with density of states modeling of electronic and vibrational states. Scanning transmission electron microscopy can provide real-space and reciprocal space characterization of paracrystallinity in nanoscale material, such as quantum dot solids.
The scattering of X-rays, neutrons and electrons on paracrystals is quantitatively described by the theories of the ideal and real paracrystal.
Numerical differences in analyses of diffraction experiments on the basis of either of these two theories of paracrystallinity can often be neglected.
Just like ideal crystals, ideal paracrystals extend theoretically to infinity. Real paracrystals, on the other hand, follow the empirical α*-law, which restricts their size. That size is also inversely proportional to the components of the tensor of paracrystalline distortion. Larger solid-state aggregates are then composed of micro-paracrystals.
Applications
The paracrystal model has been useful, for example, in describing the state of partially amorphous semiconductor materials after deposition. It has also been successfully applied to synthetic polymers, liquid crystals, biopolymers, quantum dot solids, and biomembranes.
See also
Amorphous solid
Crystallite
Crystallography
DNA
Single crystal
X-ray pattern of a B-DNA paracrystal
X-ray scattering techniques
References
Phases of matter | Paracrystallinity | [
"Physics",
"Chemistry"
] | 722 | [
"Phases of matter",
"Matter"
] |
14,627,005 | https://en.wikipedia.org/wiki/Nisnevich%20topology | In algebraic geometry, the Nisnevich topology, sometimes called the completely decomposed topology, is a Grothendieck topology on the category of schemes which has been used in algebraic K-theory, A¹ homotopy theory, and the theory of motives. It was originally introduced by Yevsey Nisnevich, who was motivated by the theory of adeles.
Definition
A morphism of schemes $f : Y \to X$ is called a Nisnevich morphism if it is an étale morphism such that for every (possibly non-closed) point x ∈ X, there exists a point y ∈ Y in the fiber $f^{-1}(x)$ such that the induced map of residue fields k(x) → k(y) is an isomorphism. Equivalently, f must be flat, unramified, and locally of finite presentation, and for every point x ∈ X, there must exist a point y in the fiber $f^{-1}(x)$ such that k(x) → k(y) is an isomorphism.
A family of morphisms {uα : Xα → X} is a Nisnevich cover if each morphism in the family is étale and for every (possibly non-closed) point x ∈ X, there exists α and a point y ∈ Xα such that uα(y) = x and the induced map of residue fields k(x) → k(y) is an isomorphism. If the family is finite, this is equivalent to the morphism from the disjoint union $\coprod_\alpha X_\alpha$ to X being a Nisnevich morphism. The Nisnevich covers are the covering families of a pretopology on the category of schemes and morphisms of schemes. This generates a topology called the Nisnevich topology. The category of schemes with the Nisnevich topology is notated Nis.
The small Nisnevich site of X has as underlying category the same as the small étale site, that is to say, objects are schemes U with a fixed étale morphism U → X and the morphisms are morphisms of schemes compatible with the fixed maps to X. Admissible coverings are Nisnevich morphisms.
The big Nisnevich site of X has as underlying category schemes with a fixed map to X and morphisms the morphisms of X-schemes. The topology is the one given by Nisnevich morphisms.
The Nisnevich topology has several variants which are adapted to studying singular varieties. Covers in these topologies include resolutions of singularities or weaker forms of resolution.
The cdh topology allows proper birational morphisms as coverings.
The h topology allows De Jong's alterations as coverings.
The l′ topology allows morphisms as in the conclusion of Gabber's local uniformization theorem.
The cdh and l′ topologies are incomparable with the étale topology, and the h topology is finer than the étale topology.
Equivalent conditions for a Nisnevich cover
Assume the category consists of smooth schemes over a qcqs (quasi-compact and quasi-separated) scheme. Then the original definition due to Nisnevich (Remark 3.39), which is equivalent to the definition above, states that a family of morphisms of schemes $\{u_\alpha : X_\alpha \to X\}$ is a Nisnevich covering if
Every $u_\alpha$ is étale; and
For every field $K$, the (set-theoretic) coproduct of all covering morphisms is surjective on the level of $K$-points, i.e. $\coprod_\alpha X_\alpha(K) \to X(K)$ is surjective.
The following, yet another equivalent condition for Nisnevich covers, is due to Lurie: the Nisnevich topology is generated by all finite families of étale morphisms $\{p_i : U_i \to X\}$ such that there is a finite sequence of finitely presented closed subschemes
$$\varnothing = Z_{n+1} \subseteq Z_n \subseteq \cdots \subseteq Z_0 = X$$
such that for each $0 \le m \le n$, the restriction $\coprod_i p_i^{-1}(Z_m \setminus Z_{m+1}) \to Z_m \setminus Z_{m+1}$ admits a section.
Notice that when evaluating these morphisms on $K$-points, this implies that the map $\coprod_i U_i(K) \to X(K)$ is a surjection. Conversely, taking the trivial sequence gives the result in the opposite direction.
Motivation
One of the key motivations for introducing the Nisnevich topology in motivic cohomology is the fact that a Zariski open cover $X = U \cup V$ does not yield a resolution of Zariski sheaves
$$0 \to \mathbb{Z}_{tr}(U \cap V) \to \mathbb{Z}_{tr}(U) \oplus \mathbb{Z}_{tr}(V) \to \mathbb{Z}_{tr}(X) \to 0,$$
where $\mathbb{Z}_{tr}(X)$ denotes the representable functor in the category of presheaves with transfers. For the Nisnevich topology, the local rings are Henselian, and a finite cover of a Henselian ring is given by a product of Henselian rings, showing exactness.
Local rings in the Nisnevich topology
If x is a point of a scheme X, then the local ring of x in the Nisnevich topology is the henselization of the local ring of x in the Zariski topology. This differs from the étale topology, where the local rings are strict henselizations. One of the important differences between the two cases can be seen when looking at a local ring with residue field $k$: the residue fields of the henselization and the strict henselization differ, with the henselization retaining residue field $k$ while the strict henselization has residue field $k^{sep}$, the separable closure of the original residue field $k$.
Examples of Nisnevich Covering
Consider the étale cover of the multiplicative group $\mathbb{G}_m = \operatorname{Spec}(k[t, t^{-1}])$ (over a field $k$ of characteristic $\neq 2$) given by the squaring map
$$\operatorname{Spec}(k[x, x^{-1}]) \to \operatorname{Spec}(k[t, t^{-1}]), \qquad t \mapsto x^2.$$
If we look at the associated morphism of residue fields at the generic point of the base, we see that it is a degree 2 extension
$$k(t) \hookrightarrow k(t)(\sqrt{t}).$$
This implies that this étale cover is not Nisnevich. We can add the étale morphism given by the open immersion $\mathbb{G}_m \setminus \{1\} \hookrightarrow \mathbb{G}_m$ to get a Nisnevich cover, since an open immersion induces an isomorphism of residue fields at every point of its image, in particular at the generic point of $\mathbb{G}_m$; the remaining closed point $t = 1$ lifts to the rational point $x = 1$ of the squaring cover.
Conditional covering
If we take $\mathbb{A}^1_k$ as a scheme over a field $k$, then a covering (p. 21) given by
$$\{\, i : \mathbb{A}^1 \setminus \{0\} \hookrightarrow \mathbb{A}^1, \quad f : \mathbb{A}^1 \setminus \{0\} \to \mathbb{A}^1 \,\},$$
where $i$ is the inclusion and $f(x) = x^2 - a$ for some $a \in k^{\times}$, is Nisnevich if and only if $x^2 = a$ has a solution over $k$. Otherwise, the covering cannot be a surjection on $k$-points. In this case, the covering is only an étale covering.
Zariski coverings
Every Zariski covering (p. 21) is Nisnevich, but the converse does not hold in general. This can be proven easily using any of the definitions, since the residue field maps are isomorphisms regardless of the Zariski cover, and by definition a Zariski cover gives a surjection on points. In addition, Zariski inclusions are always étale morphisms.
Applications
Nisnevich introduced his topology to provide a cohomological interpretation of the class set of an affine group scheme, which was originally defined in adelic terms. He used it to partially prove a conjecture of Alexander Grothendieck and Jean-Pierre Serre which states that a rationally trivial torsor under a reductive group scheme over an integral regular Noetherian base scheme is locally trivial in the Zariski topology. One of the key properties of the Nisnevich topology is the existence of a descent spectral sequence. Let X be a Noetherian scheme of finite Krull dimension, and let Gn(X) be the Quillen K-groups of the category of coherent sheaves on X. If $\tilde{G}_n$ is the sheafification of these groups with respect to the Nisnevich topology, there is a convergent spectral sequence
$$E_2^{p,q} = H^p_{\mathrm{Nis}}(X, \tilde{G}_q) \Rightarrow G_{q-p}(X)$$
for $p \ge 0$, $q \ge 0$, and $q - p \ge 0$. If $\ell$ is a prime number not equal to the characteristic of X, then there is an analogous convergent spectral sequence for K-groups with coefficients in $\mathbb{Z}/\ell$.
The Nisnevich topology has also found important applications in algebraic K-theory, A¹ homotopy theory and the theory of motives.
See also
Presheaf with transfers
Mixed motives (math)
A¹ homotopy theory
Henselian ring
References
Algebraic geometry
Topos theory | Nisnevich topology | [
"Mathematics"
] | 1,582 | [
"Mathematical structures",
"Fields of abstract algebra",
"Category theory",
"Algebraic geometry",
"Topos theory"
] |
14,627,161 | https://en.wikipedia.org/wiki/History%20of%20Sinhala%20software | Sinhala language software for computers have been present since the late 1980s (Samanala written in C) but no standard character representation system was put in place which resulted in proprietary character representation systems and fonts. In the wake of this CINTEC (Computer and Information Technology Council of Sri Lanka) introduced Sinhala within the UNICODE (16‑bit character technology) standard. ICTA concluded the work started by CINTEC for approving and standardizing Sinhala Unicode in Sri Lanka.
Timeline
1980–1989
1985
CINTEC establishes a committee for the use of Sinhala & Tamil in Computer Technology.
1987
"DOS WordPerfect" Reverend Gangodawila Soma Thero, who was the chief incumbent at the Springvale Buddhist temple in Melbourne, Australia asked the Lay members of the temple to produce a Monthly Newsletter for the temple in Sinhala, called "Bodu Puwath". A lay person named Jayantha de Silva developed two HP PCL Sinhala fonts called Lihil and an intelligent Phonetic keyboard that was able to select letters based on context, together with a printer driver and screen fonts. All this was possible because the utilities to create the keyboard and printer driver were supplied with WordPerfect. It was easy to use and was installed in many PCs owned by lay members and in the temple PC for typing articles. The program fell into disuse after Windows came online in 1990 as it did not support the WordPerfect macro keyboard.
1988
"Super77" First trilingual word processor (DOS based) initially developed at "Super Bits Computer Systems", Katunayake and further improved up to the commercial level at IFS kandy (by Rohan Manamudali & Sampath Godamunne, under Prof. Cyril Ponnamperuma). Later it was named as "THIBUS Trilingual Software System" (Windows based).
1989
"WadanTharuwa" (means WordStar in Sinhala) developed by the University of Colombo. It was one of the first commercial Sinhala word processing software products. Gives inspiration to a new generation of developers to pursue further innovation in this field.
1990–1999
1992
True Type Font Set KANDY jointly developed by Niranjan Meegammana and Micheal Gruber as part of project work (German Sri Lankan Co-Operation programme, 1988–1996) to use Sinhala Language in digital navigation charts.
1995
Sarasavi, also developed by the University of Colombo is a new version of WadanTharuwa, the first Trilingual software of its kind.
Thibus for Windows developed by Science Land (Pvt) Ltd. The most successful commercial software. Also includes the first Sinhala/English/Tamil dictionary and word by word translation technology.
Niranjan Meegammana continuing his work introduced New Kandy and several other windows fonts with Sinhala Word, one of the first Sinhala and Tamil word processors.
1996
Sri Lanka CD, A Sinhala Encyclopedia like CD on Sri Lanka developed by Niranjan Meegammana using New Kandy fonts.
1997
Helawadana for Windows developed by Microimage (Pvt) Ltd and Harsha Punasinghe. It was the most notable competitor to Thibus during that time, providing almost all of the functionality found in Thibus.
Lankadeepa and Virakesari newspapers published online by Niranjan Meegammana using Kandy New and Jaffna fonts. Greatly appreciated by Sri Lankans the world over, as Sinhala content and communications on the internet started with this initiative by ISP Ceycom Global Communications Ltd.
1998
Sinhala Word, developed by Niranjan Meegammana, becomes popular among internet users, and the font aKandyNew becomes the de facto web standard font. The software supports both phonetic and Wijesekara keyboards.
SLS1134/Unicode standards released by CINTEC for the first time.
2000–2009
2000
http://www.kaputa.com, introduced by Niranjan Meegammana at e fusion pvt ltd, used the Kaputa TrueType font, which superseded Kandy New in web content and email communications and opened up a new era of Sinhala content, with mass content published by kaputa.com.
Thibus and Helawadana release new versions of their successful products. The new versions have transliteration technology built in, though still at a very primitive stage.
2002
Madura English-Sinhala Dictionary developed by Madura Kulatunga is released. The dictionary software is distributed as freeware. However, the dictionary database was originally developed by, and is owned by, Thibus.
Siyabasa Sinhala typing software developed by Dineth Chathuranga is released. This single program is compatible with, and works on, operating systems from Windows 98 to Windows 7.
2002-2004
Sinhala Text Box, developed by Dasith Wijesiriwardena, is a lightweight word processor which supports publishing web pages to the internet and supports almost all of the existing fonts and keyboards. It has built-in support for transliteration keyboard input in most fonts. One of its major drawbacks is the lack of support for Unicode. It wins best software awards at CSITTS (Peradeniya University) in 2003 and at Digital Fusion (APIIT) in 2004.
Tusitha Randunuge and Niranjan Meegammana at e fusion pvt ltd, released Kaputa dot com 2004 font with improved mapping for web content.
Lanka Linux User Group (LKLUG) introduces Sinhala Unicode in Linux.
The "Iskoola Pota" Unicode Sinhala font released by Microsoft.
It is a Unicode font developed by Microsoft, designed to accurately represent Sinhala characters on digital platforms. Iskoola Pota is widely utilized in various digital applications, including word processing software, web design, and mobile devices, to enable Sinhala speakers to communicate effectively in their native language online. Its clear and legible design makes it a popular choice for both professional and personal use, contributing to the preservation and promotion of the Sinhala language in the digital age.
Tusitha Randunuge and Niranjan Meegammana at http://www.kaputa.com released Kaputa Unicode Fonts and Keyboard drivers.
2004-2006
Formation of Sinhala Unicode Committee standardized Sinhala Keyboards bringing in developers Thibus, Helawadana and e fusion Pvt ltd, Lakehouse, Government Printer, Colombo and Moratuwa Universities, ICTA and SLS policy makers.
Sinhala Unicode Group, a community group founded by Niranjan Meegammana, starts popularizing the use of Sinhala Unicode and provides support and collaboration as a community initiative. This active group helped solve many technical issues and had a significant impact in taking Sinhala Unicode to the masses.
http://www.gov.lk, the first Sinhala Unicode website, developed by Niranjan Meegammana for the Information and Communication Technology Agency (ICTA) at e fusion pvt ltd. Inspires the government of Sri Lanka to use Sinhala Unicode in online content.
Sinhala SP for Windows developed by Native Innovation (Pvt) Ltd is a more complete software solution to its predecessor Sinhala Text Box. Its developer (Dasith Wijesiriwardena) introduces a new IME (Input Method Editor) technology by the name of “FutureSinhala” which acts as a bridge between the proprietary fonts/keyboards and the new Unicode/SLS1134 standard. It fully supports working with and converting Thibus, Helawadana and Kaputa font based documents to SLS1134. It ships with a transliteration scheme that works at the Windows OS level (called "Singlish") which has advanced support for Sinhala-English and Tamil-English using a QWERTY keyboard.
Kaputa Uniwriter is a real time Sinhala Unicode Input System and a trainer introduced by http://www.kaputa.com.
"Shilpa Sayura" The first Sinhala Unicode e-Learning system with content for national education developed by Niranjan Meegammana with a grant form Information and communication Technology Agency to e fusion pvt ltd. Inspires rural telecentres to use Sinhala Unicode in education development. This project received several international awards for innovation of local language for rural education development. Shilpa Sayura used a java script based online Sinhala Input Method supporting Kaputa and Wijeysekara Keyboards.
2007-2009
Sinhala Input Method Editor developed by SoftDevex (Pvt) Ltd, which uses a new input method for typing Sinhala characters on a conventional keyboard.
In order to provide the instructions on installation of Sinhala Unicode and provide the required software to the users, ICTA with the support of University of Colombo School of Computing (UCSC) established www.fonts.lk. The servers and software for the site was provided free of charge by UCSC. ICTA developed 3 more websites in 2007 in order to extend the support provided by www.fonts.lk in local languages. While www.emadumilihal.lk provides information and software for using Tamil Unicode, http://www.locallanguages.lk provides information and software for using both Sinhala and Tamil Unicode.
Online edition of Madura English-Sinhala Dictionary website http://www.maduraonline.com launched. This is the first online English-Sinhala dictionary and language translator in Sri Lanka.
Realtime Singlish (another transliteration IME) was first released on April 13, 2009 by Madura A.; the latest version is 2.0. The first Sinhala Unicode font with correct initial rendering, "TNW_Uni", was developed by Thambaru Wijesekara.
2010–present
2014
Android app of Madura English-Sinhala Dictionary, Madura Online launched on Google Play store.
A Mac OS X dictionary for Sinhalese is made available. Assembled by Bhagya Nirmaan Silva, the dictionary was created with the re-use of work done by Buddhika Siddhisena and Language Technology Research Laboratory of University of Colombo, School of Computing in 2008.
The first standards compliant Sinhala Keyboard for Apple iOS was created and published by Bhagya Nirmaan Silva. This keyboard featured a copyrighted custom layout that was based on SLS 1134:2004 but was heavily optimised for mobile keyboards.
2016
Siththara image viewer ("සිත්තරා") is the first image viewer with a Sinhala-language interface; it was first released on March 1, 2016, and was developed by J A Kasun Buddhika.
References
External links
Home page of Science Land Software
Web Hosting Sri Lanka
Sinhala software
Sinhala-language mass media
Software topical history overviews | History of Sinhala software | [
"Technology"
] | 2,214 | [
"History of software",
"History of computing"
] |
14,627,251 | https://en.wikipedia.org/wiki/Architecture%20of%20Kuala%20Lumpur | The architecture of Kuala Lumpur is a blend of old colonial influences, Asian traditions, Malay Islamic inspirations, modern and post modern mix. Being a relatively young city, most of Kuala Lumpur's colonial buildings were built toward the end of 19th and early 20th century. These buildings have Mughal, Tudor, Neo-Gothic or Grecian-Spanish style or architecture. Most of the styling have been modified to cater to use local resources and the acclimatized to the local climate, which is hot and humid all year around.
Independence, coupled with rapid economic growth from the 1970s to the 1990s, saw buildings with more local and Islamic motifs arise in the central districts of the city. Many of these buildings derive their design from traditional Malay items, such as the head dress and the keris. Some of these buildings have Islamic geometric motifs integrated into their designs, such as square patterns or a dome.
Late Modernist and Postmodernist architecture began to appear in the late 1990s and early 2000s. Buildings with all-glass exteriors sprang up around the city, with the most prominent example being the Petronas Twin Towers. As an emerging global city in a newly industrialized economy, the city skyline is expected to experience further changes in decades to come, with construction works like The Gardens, The Pavilion, Four Seasons Place, Lot C of KLCC and many more.
Neo Moorish and Mughal
Buildings with Neo-Moorish and Mughal styles of architecture were built at the turn of the 20th century by the colonial power, Great Britain. While most of the buildings with such architecture are in Dataran Merdeka, there are some in older parts of town, such as the Jamek Mosque on Jalan Tun Perak, the KTM railway station and the KTM Administration Office. Famous buildings in the Neo-Moorish style include the Sultan Abdul Samad Building, the Court of Appeals and the old Kuala Lumpur High Court, all of which are within the Dataran Merdeka area. Other buildings with Moorish architecture are the Bandaraya Theatre, InfoKraft (National Textile Museum), Kuala Lumpur Memorial Library, National History Museum and the old Sessions and Magistrates Courts before they were moved to Jalan Duta. The architect responsible for many of these buildings was Arthur Benison Hubback, who designed the Jamek Mosque, the railway station, the KTM Administration Office, the Bandaraya Theatre and the textile museum, as well as contributing to the design of the Sultan Abdul Samad Building.
Tudorbethan & Victorian
There are many buildings built by the British at the turn of the 20th century that exhibit Victorian and Tudor influence in their designs. The buildings are modified to be suitable to the tropical environment of Malaysia, which is hot and humid with many days of monsoon rain.
Mock Tudor or Tudorbethan styled architecture is the feature of two sporting clubs situated in Dataran Merdeka, the Royal Selangor Club and the Selangor Chinese Club. The buildings were built in 1910 and 1929 respectively. The architectural style, which features large exposed wooden beams in half-timbered walls, was the typical model for some of the earliest social club buildings in the country.
Neo-Gothic architecture exists in religious building built by the colonial powers such as the St. Mary's Cathedral, St Andrew's Presbyterian Church, Church of the Holy Rosary and St. John's Church which is converted into Bukit Nanas Community Center. However, some residences such as Carcosa Seri Negara, which was built in 1897 for Frank Swettenham also feature this style of architecture.
Victorian architecture was also a popular choice for the colonial powers to build schools, such as Victoria Institution, Methodist Boys' School and Convent Bukit Nanas. Other examples of building in this style of architecture include the Central Market, National Art Gallery, Malaysia Tourism Center, Industrial Court Building, The Mansem, PAM Center (housing the Malaysian Institute of Architects) and Coliseum Theater.
Grecian-Spanish
Prior to the Second World War, many shophouses, usually two stories with functional shops on the ground floor and separate residential spaces upstairs, were built around the old city center. These shop-houses drew inspiration from Straits Chinese and European traditions. Some of these shop-houses have made way for new developments, but there are still many standing today around the Medan Pasar (Old Market Square), Chinatown, Jalan Tuanku Abdul Rahman, Jalan Doraisamy, Bukit Bintang and Tengkat Tong Shin areas. St. John's Institution in Bukit Nanas is famous for its imposing white and red brick building with emphasis on the Grecian-Spanish style of architecture. The Telecom Museum, which was built in 1928, also sports this influence.
Modern Malay
Kuala Lumpur today has many iconic modern buildings which drew inspiration from every day traditional Malay items. The buildings were constructed in the 1980s and 1990s. An example of this style of architecture is the Lembaga Tabung Haji (Pilgrims Fund Board) building which is derived from the form of a Malay drum, Telekom Tower which resembles a slanted cut of a bamboo trunk and Maybank Tower, whose design was inspired by the sheath of the keris, the traditional Malay dagger. The buildings were designed by the same architect, Hijjas Kasturi. Istana Budaya is another example of this type of architecture, in which the building is designed based on a Minangkabau head dress. The National Library which is situated beside Istana Budaya is also inspired by the Malay Head Dress.
Islamic
With Islam being the official religion of Malaysia since independence, many buildings featuring Islamic architecture are found in Kuala Lumpur. Buildings like the Dayabumi Complex and the Islamic Center have Islamic geometric motifs on their structure, reflecting the Islamic restriction on depicting nature. Some buildings, such as the Islamic Arts Museum Malaysia and the National Planetarium, are built in the manner of a place of worship, complete with dome and minaret, when in fact they are places of science and knowledge. Islamic motifs are naturally evident in religious structures such as Masjid Wilayah and Masjid Negara, and religious places have more Arabic calligraphy drawn on the columns and elsewhere on the structure.
Late Modernism & Post-Modern
Kuala Lumpur's central business district today has shifted to the Kuala Lumpur City Centre (KLCC) area, where many new and tall buildings with Late Modernist and Postmodern architecture fill the skyline. The 452-meter Petronas Twin Towers, designed by César Pelli, resemble an Islamic geometric motif when seen from above, while at street level the all-glass shell of the building gives a postmodern take on the more traditional motif. The Kuala Lumpur Convention Centre, next door to the towers, follows the same theme: the convention centre has the shape of an eagle when viewed from above, while its all-glass shell gives it a more postmodern look.
Current Developments
As a developing city in a developing nation, Kuala Lumpur has many ongoing construction projects that will change the city's skyline in the near future. Some of these projects are The Pavilion, The Gardens, Oval Suites, Four Seasons Center and Lot C of KLCC. A lot of the new development has come at the cost of old existing structures, and the destruction of heritage has created controversy, such as the demolition of the colonial-era mansion Bok House on Jalan Ampang in 2006 to make way for a 60-story office tower.
Skyline
References
Kuala Lumpur
Kuala Lumpur | Architecture of Kuala Lumpur | [
"Engineering"
] | 1,513 | [
"Architecture by city",
"Architecture"
] |
14,627,460 | https://en.wikipedia.org/wiki/Seismic%20base%20isolation | Seismic base isolation, also known as base isolation, or base isolation system, is one of the most popular means of protecting a structure against earthquake forces. It is a collection of structural elements which should substantially decouple a superstructure from its substructure that is in turn resting on the shaking ground, thus protecting a building or non-building structure's integrity.
Base isolation is one of the most powerful tools of earthquake engineering pertaining to the passive structural vibration control technologies.
The isolation can be obtained by the use of various techniques like rubber bearings, friction bearings, ball bearings, spring systems and other means. It is meant to enable a building or non-building structure to survive a potentially devastating seismic impact through a proper initial design or subsequent modifications. In some cases, application of base isolation can raise both a structure's seismic performance and its seismic sustainability considerably. Contrary to popular belief, base isolation does not make a building earthquake proof.
Base isolation system consists of isolation units with or without isolation components, where:
Isolation units are the basic elements of a base isolation system which are intended to provide the aforementioned decoupling effect to a building or non-building structure.
Isolation components are the connections between isolation units and their parts having no decoupling effect of their own.
Isolation units could consist of shear or sliding units.
This technology can be used for both new structural design and seismic retrofit. In the process of seismic retrofit, some of the most prominent U.S. monuments, e.g. Pasadena City Hall, San Francisco City Hall, the Salt Lake City and County Building and LA City Hall, were mounted on base isolation systems. This required creating rigid diaphragms and moats around the buildings, as well as making provisions against overturning and the P-Delta effect.
Base isolation is also used on a smaller scale—sometimes down to a single room in a building. Isolated raised-floor systems are used to safeguard essential equipment against earthquakes. The technique has been incorporated to protect statues and other works of art—see, for instance, Rodin's Gates of Hell at the National Museum of Western Art in Tokyo's Ueno Park.
Base isolation units consist of linear-motion bearings, which allow the building to move; oil dampers, which absorb the forces generated by the movement of the building; and laminated rubber bearings, which return the building to its original position when the earthquake has ended.
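To illustrate the underlying dynamics, the sketch below integrates the single-degree-of-freedom model m x'' + c x' + k x = -m a_g(t) and compares a stiff, fixed-base-like structure with a softer isolated one. The frequencies, damping ratios, and synthetic ground motion are illustrative assumptions, not design values:

```python
# Peak absolute acceleration of a damped oscillator (per unit mass) under a
# synthetic harmonic ground motion, via semi-implicit Euler integration.
import math

def peak_abs_acceleration(freq_hz, damping_ratio, ground_acc, dt):
    w = 2 * math.pi * freq_hz
    c, k = 2 * damping_ratio * w, w * w   # damping and stiffness per unit mass
    x = v = peak = 0.0
    for a in ground_acc:
        rel_acc = -a - c * v - k * x        # relative acceleration
        v += rel_acc * dt
        x += v * dt
        peak = max(peak, abs(rel_acc + a))  # absolute acceleration
    return peak

dt, g = 0.005, 9.81
# 10 s of 2 Hz sinusoidal ground shaking at 0.3 g amplitude (synthetic).
ground = [0.3 * g * math.sin(2 * math.pi * 2.0 * n * dt)
          for n in range(int(10 / dt))]

print("fixed-base-like (2.0 Hz, 5% damping):",
      peak_abs_acceleration(2.0, 0.05, ground, dt))
print("isolated        (0.5 Hz, 15% damping):",
      peak_abs_acceleration(0.5, 0.15, ground, dt))
```

The stiff system resonates with the 2 Hz shaking, while lengthening the period and adding damping, which is what the bearings and dampers do, cuts the transmitted acceleration dramatically.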
History
Base isolator bearings were pioneered in New Zealand by Dr Bill Robinson during the 1970s. The lead rubber bearing, which consists of layers of rubber and steel with a lead core, was invented by Dr Robinson in 1974 and has since been widely adopted.
The earliest uses of base isolation systems date back all the way to 550 B.C. in the construction of the Tomb of Cyrus the Great in Pasargadae, Iran. More than 90% of Iran’s territory, including this historic site, is located in the Alpine-Himalaya belt, which is one of the Earth’s most active seismic zones. Historians discovered that this structure, predominantly composed of limestone, was designed to have two foundations. The first and lower foundation, composed of stones that were bonded together with a lime plaster and sand mortar, known as Saroj mortar, was designed to move in the case of an earthquake. The top foundation layer, which formed a large plate that was in no way attached to the structure’s base, was composed of polished stones. The reason this second foundation was not tied down to the base was that in the case of an earthquake, this plate-like layer would be able to slide freely over the structure’s first foundation. As historians discovered thousands of years later, this system worked exactly as its designers had predicted, and as a result, the Tomb of Cyrus the Great still stands today. The development of the idea of base isolation can be divided into two eras. In ancient times the isolation was performed through the construction of multilayered cut stones (or by laying sand or gravel under the foundation) while in recent history, beside layers of gravel or sand as an isolation interface wooden logs between the ground and the foundation are used.
Research
Through the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES), researchers are studying the performance of base isolation systems.
The project, a collaboration among researchers at University of Nevada, Reno; University of California, Berkeley; University of Wisconsin, Green Bay; and the University at Buffalo is conducting a strategic assessment of the economic, technical, and procedural barriers to the widespread adoption of seismic isolation in the United States.
NEES resources have been used for experimental and numerical simulation, data mining, networking and collaboration to understand the complex interrelationship among the factors controlling the overall performance of an isolated structural system.
This project involves earthquake shaking table and hybrid tests at the NEES experimental facilities at the University of California, Berkeley, and the University at Buffalo, aimed at understanding ultimate performance limits to examine the propagation of local isolation failures (e.g., bumping against stops, bearing failures, uplift) to the system level response. These tests will include a full-scale, three-dimensional test of an isolated 5-story steel building on the E-Defense shake table in Miki, Hyōgo, Japan.
Seismic isolation research in the middle and late 1970s was largely predicated on the observation that most strong-motion records recorded up to that time had very low spectral acceleration values in the long-period range (beyond about 2 seconds).
Records obtained from lakebed sites in the 1985 Mexico City earthquake raised concerns of the possibility of resonance, but such examples were considered exceptional and predictable.
One of the early examples of this earthquake design strategy was given by Dr. J.A. Calantariens in 1909. He proposed that a building could be built on a layer of fine sand, mica or talc that would allow it to slide in an earthquake, thereby reducing the forces transmitted to the building.
A detailed literature review of semi-active control systems by Michael D. Symans et al. (1999) provides references to both theoretical and experimental research but concentrates on describing the results of experimental work. Specifically, the review focuses on descriptions of the dynamic behavior and distinguishing features of various systems which have been experimentally tested both at the component level and within small-scale structural models.
Adaptive base isolation
An adaptive base isolation system includes a tunable isolator that can adjust its properties based on the input to minimize the transferred vibration. Magnetorheological fluid dampers and isolators with magnetorheological elastomers have been suggested as adaptive base isolators.
Notable buildings and structures on base isolation systems
Tomb of Cyrus
LA City Hall
Oakland City Hall
Pasadena City Hall
San Francisco City Hall
California Palace of the Legion of Honor in San Francisco
M. H. de Young Memorial Museum in San Francisco
Asian Art Museum in San Francisco
James R. Browning United States Court of Appeals Building in San Francisco
San Francisco International Airport's International Terminal, one of the largest base-isolated structures in the world
Salt Lake City and County Building
Başakşehir Çam and Sakura City Hospital in Istanbul
New Zealand Parliament Buildings in Wellington
Museum of New Zealand Te Papa Tongarewa in Wellington
Salt Lake Temple of the Church of Jesus Christ of Latter-day Saints in Salt Lake City (undergoing seismic renovation 2019–2024)
BAPS Shri Swaminarayan Mandir Chino Hills, the first earthquake-proof Hindu temple in the world
Apple Park
See also
Earthquake-resistant structures
Geotechnical engineering
Seismic retrofit
Shock absorber
Shock mount
Vibration isolation
References
Earthquake engineering
Seismic vibration control
Structural connectors
Structural system
New Zealand inventions | Seismic base isolation | [
"Technology",
"Engineering"
] | 1,582 | [
"Structural engineering",
"Building engineering",
"Structural system",
"Structural connectors",
"Civil engineering",
"Seismic vibration control",
"Earthquake engineering"
] |
14,627,569 | https://en.wikipedia.org/wiki/Magnolia%20%C3%97%20soulangeana | Magnolia × soulangeana (Magnolia denudata × Magnolia liliiflora), the saucer magnolia or sometimes the tulip tree, is a hybrid flowering plant in the genus Magnolia and family Magnoliaceae. It is a deciduous tree with large, early-blooming flowers in various shades of white, pink, and purple. It is one of the most commonly used magnolias in horticulture, being widely planted in the British Isles, especially in the south of England; and in the United States, especially the east and west coasts.
Description
Growing as a multistemmed large shrub or small tree, Magnolia × soulangeana has alternate, simple, shiny, dark green oval-shaped leaves on stout stems. Its flowers emerge dramatically on a bare tree in early spring, with the deciduous leaves expanding shortly thereafter, lasting through summer until autumn.
Magnolia × soulangeana flowers are large, commonly 10–20 cm (4–8 in) across, and colored various shades of white, pink, and maroon. An American variety, 'Grace McDade' from Alabama, is reported to bear the largest flowers, with a 35 cm (14 in) diameter, white tinged with pinkish-purple. Another variety, Magnolia × soulangeana 'Jurmag1', is supposed to have the darkest and tightest flowers. The exact timing and length of flowering varies between named varieties, as does the shape of the flower. Some are globular, others a cup-and-saucer shape.
Hybrid origin
Magnolia × soulangeana was initially bred by French plantsman Étienne Soulange-Bodin (1774–1846), a retired cavalry officer in Napoleon's army, at his château de Fromont near Paris. He crossed Magnolia denudata with M. liliiflora in 1820, and was impressed with the resulting progeny's first precocious flowering in 1826.
Soulange-Bodin is often cited as the author of this hybrid name, though rarely with a reference to a publication. If a source is given, it is often an English translation of a French title (see for example Callaway, D.J. (1994), World of Magnolias: 204). Soulange-Bodin certainly did not name the hybrid after himself. The name was proposed by members of the Société Linnéenne de Paris and published by Arsène Thiébaud de Berneaud, the secretary of the society, in Relation de la cinquième fête champêtre célébré le 24 mai 1826 in: Comte-Rendu des Travaux de la Société Linnéenne de Paris 1826: 269.
Cultivation
From France, the hybrid quickly entered cultivation in England and other parts of Europe, and also North America. Since then, plant breeders in many countries have continued to develop this plant, and over a hundred named horticultural varieties (cultivars) are now known.
Magnolia × soulangeana is notable for its ease of cultivation, and its relative tolerance to wind and alkaline soils (two vulnerabilities of many other magnolias).
The cultivar 'Brozzonii' has gained the Royal Horticultural Society's Award of Garden Merit.
Gallery
Notes
References
External links
Magnolia x soulangeana images at the Arnold Arboretum of Harvard University Plant Image Database
soulangeana
Hybrid plants
Garden plants of Asia
Garden plants of Europe
Ornamental trees | Magnolia × soulangeana | [
"Biology"
] | 692 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
14,628,551 | https://en.wikipedia.org/wiki/Imidazoline%20receptor | Imidazoline receptors are the primary receptors on which clonidine and other imidazolines act. There are three main classes of imidazoline receptor: I1 is involved in inhibition of the sympathetic nervous system to lower blood pressure, I2 has as yet uncertain functions but is implicated in several psychiatric conditions, and I3 regulates insulin secretion.
Classes
As of 2017, there are three known subtypes of imidazoline receptors: I1, I2, and I3.
I1 receptor
The I1 receptor appears to be a G protein-coupled receptor that is localized on the plasma membrane. It may be coupled to PLA2 signalling and thus prostaglandin synthesis. In addition, activation inhibits the sodium-hydrogen antiporter, and enzymes of catecholamine synthesis are induced, suggesting that the I1 receptor may belong to the neurocytokine receptor family, since its signaling pathways are similar to those of interleukins. It is found in the neurons of the reticular formation, the dorsomedial medulla oblongata, adrenal medulla, renal epithelium, pancreatic islets, platelets, and the prostate. It is notably not expressed in the cerebral cortex or locus coeruleus.
Animal research suggests that much of the antihypertensive action of imidazoline drugs such as clonidine is mediated by the I1 receptor. In addition, I1 receptor activation is used in ophthalmology to reduce intraocular pressure. Other putative functions include promoting Na+ excretion and promoting neural activity during hypoxia.
I2 receptor
The I2 receptor binding sites have been defined as being selective binding sites inhibited by the antagonist idazoxan that are not blocked by catecholamines. The major binding site is located on the outer mitochondrial membrane, and is proposed to be an allosteric site on monoamine oxidase, while another binding site has been found to be brain creatine kinase. Other known binding sites have yet to be characterized.
Preliminary research in rodents suggests that I2 receptor agonists may be effective in chronic, but not acute pain, including fibromyalgia. I2 receptor activation has also been shown to decrease body temperature, potentially mediating neuroprotective effects seen in rats.
The only known antagonist for the receptor is idazoxan, which is non-selective.
I3 receptor
The I3 receptor regulates insulin secretion from pancreatic beta cells. It may be associated with ATP-sensitive K+ (KATP) channels.
Ligands
I1 receptors
Agonists
AGN 192403
Moxonidine
Antagonists
I2 receptors
Agonists
CR-4056
Phenyzoline (2-(2-phenylethyl)-4,5-dihydro-1H-imidazole)
RS 45041-90
Tracizoline
Antagonists
BU 224 (disputed)
I3 receptors
No selective ligands are known as of 2017.
Nonselective ligands
Agonists
Agmatine (putative endogenous ligand at I1; also interacts with NMDA, nicotinic, and α2 adrenoceptors)
Apraclonidine (α2 adrenoceptor agonist)
2-BFI (I2 agonist, NMDA antagonist)
Cimetidine (I1 agonist, H2 receptor antagonist)
Clonidine (I1 agonist, α2 adrenoceptor agonist)
LNP-509
LNP-911
7-Me-marsanidine
Dimethyltryptamine
mCPP
Moxonidine
Oxymetazoline (I1 agonist, α1 adrenoceptor agonist, α2 partial agonist)
Rilmenidine
S-23515
S-23757
Tizanidine
Antagonists
BU99006 (alkylating agent, inactivates I2 receptors)
Efaroxan (I1, α2 adrenoceptor antagonist)
Idazoxan (I1, I2 antagonist, α2 adrenoceptor antagonist)
See also
Imidazoline
References
External links
Receptors | Imidazoline receptor | [
"Chemistry"
] | 881 | [
"Receptors",
"Signal transduction"
] |
14,628,623 | https://en.wikipedia.org/wiki/Isomap | Isomap is a nonlinear dimensionality reduction method. It is one of several widely used low-dimensional embedding methods. Isomap is used for computing a quasi-isometric, low-dimensional embedding of a set of high-dimensional data points. The algorithm provides a simple method for estimating the intrinsic geometry of a data manifold based on a rough estimate of each data point’s neighbors on the manifold. Isomap is highly efficient and generally applicable to a broad range of data sources and dimensionalities.
Introduction
Isomap is one representative of isometric mapping methods, and extends metric multidimensional scaling (MDS) by incorporating the geodesic distances imposed by a weighted graph. To be specific, the classical scaling of metric MDS performs low-dimensional embedding based on the pairwise distance between data points, which is generally measured using straight-line Euclidean distance. Isomap is distinguished by its use of the geodesic distance induced by a neighborhood graph embedded in the classical scaling. This is done to incorporate manifold structure in the resulting embedding. Isomap defines the geodesic distance to be the sum of edge weights along the shortest path between two nodes (computed using Dijkstra's algorithm, for example). The top n eigenvectors of the geodesic distance matrix represent the coordinates in the new n-dimensional Euclidean space.
Algorithm
A very high-level description of the Isomap algorithm is given below; a minimal code sketch follows the steps.
Determine the neighbors of each point.
All points in some fixed radius.
K nearest neighbors.
Construct a neighborhood graph.
Each point is connected to another if it is one of its K nearest neighbors.
Edge length equal to Euclidean distance.
Compute shortest path between two nodes.
Dijkstra's algorithm
Floyd–Warshall algorithm
Compute lower-dimensional embedding.
Multidimensional scaling
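For illustration, the steps above can be realized directly with standard scientific-computing libraries. The following is a minimal sketch (function and parameter names are illustrative; a production implementation such as sklearn.manifold.Isomap also handles edge cases like disconnected neighborhood graphs):

```python
# Minimal Isomap sketch: k-NN graph -> geodesic distances -> classical MDS.
# Assumes the k-nearest-neighbor graph is connected.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=10, n_components=2):
    # Steps 1-2: neighborhood graph with Euclidean edge lengths.
    graph = kneighbors_graph(X, n_neighbors, mode="distance")
    # Step 3: geodesic distances as graph shortest paths (Dijkstra).
    D = shortest_path(graph, method="D", directed=False)
    # Step 4: classical MDS on the geodesic distance matrix.
    N = D.shape[0]
    H = np.eye(N) - np.ones((N, N)) / N        # centering matrix
    K = -0.5 * H @ (D ** 2) @ H                # doubly centered kernel
    eigvals, eigvecs = np.linalg.eigh(K)       # ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))
```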
Extensions of ISOMAP
LandMark ISOMAP (L-ISOMAP): Landmark-Isomap is a variant of Isomap which is faster than Isomap, although the accuracy of the manifold is compromised by a marginal factor. In this algorithm, n << N landmark points are used out of the total N data points, and an n×N matrix of geodesic distances between each data point and the landmark points is computed. Landmark-MDS (LMDS) is then applied to the matrix to find a Euclidean embedding of all the data points.
C Isomap: C-Isomap involves magnifying regions of high density and shrinking regions of low density of data points in the manifold. Edge weights that are maximized in multidimensional scaling (MDS) are modified, with everything else remaining unaffected.
Parallel Transport Unfolding: Replaces the Dijkstra path-based geodesic distance estimates with parallel transport based approximations instead, improving robustness to irregularity and voids in the sampling.
Possible issues
The connectivity of each data point in the neighborhood graph is defined as its nearest k Euclidean neighbors in the high-dimensional space. This step is vulnerable to "short-circuit errors" if k is too large with respect to the manifold structure or if noise in the data moves the points slightly off the manifold. Even a single short-circuit error can alter many entries in the geodesic distance matrix, which in turn can lead to a drastically different (and incorrect) low-dimensional embedding. Conversely, if k is too small, the neighborhood graph may become too sparse to approximate geodesic paths accurately. But improvements have been made to this algorithm to make it work better for sparse and noisy data sets.
Relationship with other methods
Following the connection between the classical scaling and PCA, metric MDS can be interpreted as kernel PCA. In a similar manner, the geodesic distance matrix in Isomap can be viewed as a kernel matrix. The doubly centered geodesic distance matrix K in Isomap is of the form

K = -\frac{1}{2} H D^{(2)} H,

where D^{(2)} = [D_{ij}^2] is the elementwise square of the geodesic distance matrix D = [D_{ij}], and H is the centering matrix, given by

H = I_N - \frac{1}{N} e_N e_N^{\mathsf{T}}, \qquad e_N = (1, \ldots, 1)^{\mathsf{T}} \in \mathbb{R}^N.
However, the kernel matrix K is not always positive semidefinite. The main idea for kernel Isomap is to make this K as a Mercer kernel matrix (that is positive semidefinite) using a constant-shifting method, in order to relate it to kernel PCA such that the generalization property naturally emerges.
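A crude way to see the idea in code is to lift the spectrum of K until it is positive semidefinite (a hedged sketch only: the published kernel-Isomap constant-shifting method adds a constant to the geodesic distances themselves rather than shifting the kernel's eigenvalues directly):

```python
# Force a symmetric, possibly indefinite kernel K to be positive
# semidefinite by shifting its spectrum; a simple analogue of the
# constant-shifting idea, not the published kernel-Isomap procedure.
import numpy as np

def psd_shift(K):
    lam_min = np.linalg.eigvalsh(K).min()        # smallest eigenvalue
    if lam_min >= 0:
        return K                                 # already a Mercer kernel
    return K - lam_min * np.eye(K.shape[0])      # lift spectrum by |lam_min|
```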
See also
Kernel PCA
Spectral clustering
Nonlinear dimensionality reduction
References
External links
Isomap webpage at Stanford university
Computational statistics | Isomap | [
"Mathematics"
] | 931 | [
"Computational statistics",
"Computational mathematics"
] |
14,628,647 | https://en.wikipedia.org/wiki/G%C3%BCnter%20M.%20Ziegler | Günter Matthias Ziegler (born 19 May 1963) is a German mathematician who has been serving as president of the Free University of Berlin since 2018. Ziegler is known for his research in discrete mathematics and geometry, and particularly on the combinatorics of polytopes.
Biography
Ziegler studied at the Ludwig Maximilian University of Munich from 1981 to 1984, and went on to receive his Ph.D. from the Massachusetts Institute of Technology in Cambridge, Massachusetts, in 1987, under the supervision of Anders Björner. After postdoctoral positions at the University of Augsburg and the Mittag-Leffler Institute, he received his habilitation in 1992 from Technische Universität Berlin, which he joined as a professor in 1995. Ziegler has since joined the faculty of the Free University of Berlin.
Awards and honors
Ziegler was awarded a one-million-Deutschmark prize by the Deutsche Forschungsgemeinschaft (DFG) in 1994 and the 1.5-million-Deutschmark Gottfried Wilhelm Leibniz Prize, Germany's highest research honor, by the DFG in 2001. He was awarded the 2005 Gauss Lectureship by the German Mathematical Society. In 2006 the Mathematical Association of America awarded Ziegler and Florian Pfender its highest honor for mathematical exposition, the Chauvenet Prize, for their paper on kissing numbers.
In 2006 Ziegler became president of the German Mathematical Society for a two-year term. In 2009, the European Research Council (ERC) awarded Ziegler one of the ERC Advanced Grants in the amount of 1.85 million Euros. In 2012 he became a fellow of the American Mathematical Society. In 2013 Ziegler was granted the Hector Science Award and became a member of the Hector Fellow Academy. Since 2016 Ziegler has been chair of the Berlin Mathematical School. In 2018 he received the Leroy P. Steele Prize for Mathematical Exposition (jointly with Martin Aigner) for Proofs from THE BOOK.
Other activities
Berlin Institute of Health (BIH), Member of the Supervisory Board (since 2020)
German Institute for Economic Research (DIW), member of the board of trustees (since 2018)
Genshagener Kreis, member of the board of trustees (since 2018)
Einstein Foundation Berlin, Member of the Council
Max Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), member of the supervisory board
Berlin Social Science Center (WZB), member of the board of trustees (since 2018)
Klaus Tschira Foundation, member of the board of trustees (since 2017)
Urania, Member of the Board
Selected publications
References
Further reading
External links
1963 births
Living people
20th-century German mathematicians
21st-century German mathematicians
Gottfried Wilhelm Leibniz Prize winners
Academic staff of the Free University of Berlin
Academic staff of Technische Universität Berlin
Fellows of the American Mathematical Society
Combinatorialists
Scientists from Munich
European Research Council grantees
Massachusetts Institute of Technology School of Science alumni
Mathematics popularizers
Members of the German National Academy of Sciences Leopoldina
Presidents of the German Mathematical Society
Studienstiftung alumni | Günter M. Ziegler | [
"Mathematics"
] | 656 | [
"Combinatorialists",
"Combinatorics"
] |
14,629,026 | https://en.wikipedia.org/wiki/Chain%20conveyor | A chain conveyor is a type of conveyor system for moving material through production lines.
Operation
Chain conveyors use an endless chain both to transmit power and to propel material through a trough, either pushed directly by the chain or by attachments to the chain. The chain runs over sprockets at either end of the trough. Chain conveyors are used to move material up to , and typically under .
Chain conveyors utilize a powered continuous chain arrangement, carrying a series of single pendants. The chain arrangement is driven by a motor, and the material suspended on the pendants is conveyed. Chain conveyors are used for moving products down an assembly line and/or around a manufacturing or warehousing facility.
Chain conveyors are primarily used to transport heavy unit loads, e.g., pallets, grid boxes, and industrial containers. These conveyors can be single or double chain strand in configuration. The load is positioned on the chains, and the friction pulls the load forward. Chain conveyors are generally easy to install and have minimal maintenance for users.
Many industry sectors use chain conveyor technology in their production lines. The automotive industry commonly uses chain conveyor systems to convey car parts through paint plants. Chain conveyors also have widespread use in the white and brown goods, metal finishing and distribution industries. Chain conveyors are also used in the painting and coating industry, which allows for easier paint application. The products are attached to an overhead chain conveyor, keeping products off of the floor allows for higher productivity levels.
Types
Types of chain conveyor include apron, drag, plain chain, scraper, flight, and en-masse conveyors.
Drag conveyor
Drag conveyors, variously called drag chain conveyors, scraper chain conveyors and en-masse conveyors, are used in bulk material handling to move solid material along a trough. They are used for moving materials such as cement clinker, ash, and sawdust in the mining and chemical industries, municipal solid waste incinerators, and the production of pellet fuel.
The difference between drag conveyors, scraper conveyors, and flight conveyors largely depends on whether the chain links have obvious flights or paddles attached. In a drag conveyor, the chain moves the material directly, while a flight conveyor uses a series of wood, metal, or plastic flights attached to the chain at regular intervals, which push the material along the trough.
Multiflexing chain conveyor
Multiflexing conveyor systems use plastic chains in many configurations. The flexible conveyor chain design permits horizontal as well as vertical change of direction.
See also
Conveyor system
Conveyor belt
Lineshaft roller conveyor
Notes
References
Freight transport
Mechanical power transmission
Mechanical power control | Chain conveyor | [
"Physics"
] | 553 | [
"Mechanical power transmission",
"Mechanics",
"Mechanical power control"
] |
14,629,712 | https://en.wikipedia.org/wiki/Hydrotreated%20vegetable%20oil | Hydrotreated vegetable oil (HVO) is a biofuel made by the hydrocracking or hydrogenation of vegetable oil. Hydrocracking breaks big molecules into smaller ones using hydrogen while hydrogenation adds hydrogen to molecules. These methods can be used to create substitutes for gasoline, diesel, propane, kerosene and other chemical feedstock. Diesel fuel produced from these sources is known as green diesel or renewable diesel.
Diesel fuel created by hydrotreating is distinct from biodiesel, which is made through transesterification.
Feedstock
The majority of plant and animal oils are triglycerides, suitable for refining. Refinery feedstocks include canola, algae, jatropha, salicornia, palm oil, tallow and soybeans. One type of algae, Botryococcus braunii, produces a different type of oil, known as a triterpene, which is transformed into alkanes by a different process.
Chemical analysis
Synthesis
The production of hydrotreated vegetable oils is based on introducing hydrogen into the raw fat or oil molecule, reducing the carbon compounds. When hydrogen reacts with triglycerides, several types of reactions can occur, yielding a combination of products. The second step of the process converts the triglycerides/fatty acids to hydrocarbons by hydrodeoxygenation (removing oxygen as water) and/or decarboxylation (removing oxygen as carbon dioxide).
A formulaic example is the hydrodeoxygenation of tristearin:

C57H110O6 + 12 H2 ⟶ C3H8 + 3 C18H38 + 6 H2O
Chemical composition
The general chemical formula for HVO diesel is CnH2n+2.
Chemical properties
Hydrotreated oils are characterized by very good low-temperature properties, with cloud points below −40 °C. These fuels are therefore suitable for preparing premium fuel with a high cetane number and excellent low-temperature properties. The cold filter plugging point (CFPP) virtually corresponds to the cloud point value, which is why the cloud point is significant in the case of hydrotreated oils.
Comparison to biodiesel
Both HVO diesel (green diesel) and biodiesel are made from the same vegetable oil feedstock. However the processing technologies and chemical makeup of the two fuels differ. The chemical reaction commonly used to produce biodiesel is known as transesterification.
The production of biodiesel also makes glycerol, but the production of HVO does not.
Neste has published the differences between biodiesel and renewable diesel (HVO) which are summarized in the table below.
Commercialization
Renewable hydrocarbon fuels produced by hydrotreating are at various stages of commercialization across the energy industry. Some commercial examples of vegetable oil refining are:
Neste NExBTL
Topsoe HydroFlex technology
Axens Vegan technology
H-Bio, the Petrobras process
The ConocoPhillips process
UOP/Eni Ecofining process.
Neste is the largest manufacturer, producing ca. 3.3 million tonnes annually (2023). Neste completed its first NExBTL plant in the summer of 2007 and a second in 2009. Petrobras planned to use vegetable oils in the production of H-Bio fuel in 2007, and ConocoPhillips has processed vegetable oil. Other companies working on the commercialization and industrialization of renewable hydrocarbons and biofuels include Neste, REG Synthetic Fuels, LLC, ENI, UPM Biofuels and Diamond Green Diesel, with partners across the globe. Manufacturers of these renewable diesels report greenhouse gas emissions reductions of 40-90% compared to fossil diesel, as well as better cold-flow properties for colder climates. In addition, all of these green diesels can be used in any diesel engine or infrastructure without major mechanical modifications, at any blend ratio with petroleum-based diesels.
Renewable diesel from vegetable oil is a growing substitute for petroleum. California fleets used over of renewable diesel in 2017. The California Air Resources Board predicts that over of fuel will be consumed in the state under its Low Carbon Fuel Standard requirements in the next ten years. Fleets operating on Renewable Diesel from various refiners and feedstocks are reported to see lower emissions, reduced maintenance costs, and nearly identical experience when driving with this fuel.
Sustainability concerns
A number of issues have been raised about the sustainability of HVO, primarily concerning the sourcing of its lipid feedstocks. Waste oils such as used cooking oil are a limited resource and their use cannot be scaled up beyond a certain point. Further demand for HVO would have to be met with crop-based virgin vegetable oils, but the diversion of vegetable oils from the food market into the biofuels sector has been linked to increased global food prices, and to global agricultural expansion and intensification. This is associated with a variety of ecological and environmental implications; moreover, greenhouse gas emissions from land use change may in some circumstances negate or exceed any benefit from the displacement of fossil fuels.
A 2022 study published by the International Council on Clean Transportation found that the anticipated scale-up of renewable diesel capacity in the U.S. would quickly exhaust the available supply of waste and residual oils, and increasingly rely on domestic and imported soy oil. The report also noted that increased U.S. renewable diesel production risked indirectly driving the expansion of palm oil cultivation in Southeast Asia, where the palm oil industry is still endemically associated with deforestation and peat destruction.
Challenges in Producing Fuels from Bio-Derived Feedstocks with HVO
Refinery hydrotreaters are used for processing HVO. Introducing even minor amounts of biomaterial into a diesel hydrotreater has implications and potential risk factors. The main issues are corrosion, high hydrogen consumption, and catalyst deactivation.
According to Haldor Topsoe's experience with their licensed units, HVO production poses certain challenges for hydrotreaters including:
Corrosion - There are several corrosion mechanisms in hydrotreating vegetable oils and animal fats. Most feedstocks are acidic, though this is tempered by the acids being bound into tri- and di-glycerides. However, difficult feedstocks like distillers corn oil can contain 10-15% free fatty acids. These acids can attack non-stainless steels in the preheat train, fired heater, piping, valves and reactors. In addition, chlorides that contaminate feeds can be converted to hydrogen chloride in the reactor, which can then cause accelerated corrosion in the effluent lines and sour water. The presence of chlorides in a wet environment is also problematic for the common stainless steel grades 304 and 316 due to the potential for chloride stress corrosion cracking. In addition, carbon dioxide formed by decarboxylation reactions during hydrotreating can form carbonic acid on contact with water.
Hydrogen Consumption - removing oxygen, cracking long-chain molecules, and saturating olefinic bonds chemically consume two to four times the hydrogen of a conventional ULSD hydrotreater. ULSD hydrotreating chemical hydrogen consumption is typically 300-600 scf/bbl of feed, depending on the aromatic saturation required for cycle oils and other cracked feedstocks. Chemical consumption for HVO approaches 2,500 scf/bbl, depending on the degree of feedstock saturation and the length of the carbon chains (a rough stoichiometric check follows this list). Delivering hydrogen for consumption, in addition to quench and excess circulating hydrogen, can pose significant challenges to unit revamp design and operation, with hydraulics, distribution, and compressor power being critical.
Fouling - alkali metals and especially phosphorus must be kept low in HVO feedstocks in order to minimize pressure drop from fouling and general catalyst deactivation. Phosphatidic glassing is an aggressive catalyst poisoning mechanism that not only plugs a reactor's pore spaces, causing rapid pressure drop, but also interferes with the catalyst's acid sites by coating the outside of the catalyst and adhering to other catalyst particles.
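As a rough sanity check on the hydrogen figures above, the tristearin stoichiometry given earlier (12 mol H2 per mole of triglyceride) already implies on the order of 1,600 scf/bbl before any double-bond saturation. The sketch below is illustrative only; the density, molecular weight, barrel size and molar volume are assumed round values:

```python
# Rough stoichiometric estimate of chemical H2 demand for HVO, assuming
# a saturated tristearin feed: C57H110O6 + 12 H2 -> C3H8 + 3 C18H38 + 6 H2O.
OIL_DENSITY_KG_PER_L = 0.92     # assumed vegetable-oil density
BBL_L = 158.99                  # litres per oil barrel
MW_TRISTEARIN_G_MOL = 891.5     # g/mol
SCF_PER_LBMOL = 379.5           # ideal-gas volume at standard conditions
MOL_PER_LBMOL = 453.59

mol_oil = OIL_DENSITY_KG_PER_L * BBL_L * 1000 / MW_TRISTEARIN_G_MOL
mol_h2 = 12 * mol_oil                         # hydrodeoxygenation only
scf_per_bbl = mol_h2 / MOL_PER_LBMOL * SCF_PER_LBMOL
print(f"~{scf_per_bbl:.0f} scf/bbl")          # ~1,600 scf/bbl

# Real feeds are unsaturated; each double bond consumes roughly one more
# mole of H2, pushing totals toward the ~2,500 scf/bbl cited above.
```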
Operational history
HVO processing is a young technology relative to most other refining processes. The first commercial scale unit started up in Louisiana in 2010 with a capacity of per year.
A newbuild plant was constructed in 2010 in Geismar, LA, by the Syntroleum Corporation and its joint-venture partner Tyson Foods. The plant initiated startup in the third quarter with a target of per year. Feedstock for the plant was vegetable oil and pretreated rendered poultry fat. The site achieved 87% of its design capacity in 2011. Corrosion, including chloride-linked stress corrosion cracking, shut the plant down in 2012 for more than a year. Tyson sold its 50% ownership to Renewable Energy Group (now Chevron), which announced the purchase of Syntroleum's stock in 2013, with closing in 2014. In 2015, two fires caused major damage to the plant.
Operable capacity
See also
Biodiesel
Indirect land use change impacts of biofuels
Algae fuel
Renewable hydrocarbon fuels via decarboxylation/decarbonylation
Sustainable oils
Vegetable oil fuel
References
External links
University Of Wisconsin / College Of Engineering (June 6, 2005). Green Diesel: New Process Makes Liquid Transportation Fuel From Plants. ScienceDaily. Retrieved August 10, 2010
Renewable Diesel Primer. Retrieved August 10, 2010
Oil refining
Synthetic fuel technologies
Renewable energy technology
Biofuels
Renewable fuels | Hydrotreated vegetable oil | [
"Chemistry"
] | 1,909 | [
"Petroleum technology",
"Oil refining",
"Synthetic fuel technologies"
] |
14,629,784 | https://en.wikipedia.org/wiki/Society%20of%20American%20Military%20Engineers | The Society of American Military Engineers (SAME) unites public and private sector individuals and organizations from across the architecture, engineering, construction, environmental, facility management, contracting and acquisition fields and related disciplines in support of the United States' national security.
SAME connects architects, engineers and builders in the public sector and private industry, uniting them to improve individual and collective capabilities to prepare for and overcome natural and man-made disasters and acts of terrorism, and to improve security at home and abroad.
That goal grew from America's experiences in World War I in which more than 11,000 civilian engineers were called to duty upon the United States entering the conflict. Returning home after "the war to end war," many feared the sector would lose this collective knowledge and the cooperation between public and private sectors that proved vital to combat success. Industry and military leaders vowed to capitalize on the technical lessons and camaraderie shared during their battlefield experiences.
History
In 1919, Major General William M. Black, Chief of Engineers, appointed a nine-officer board to consider the formation of an "association of engineers" that would preserve, and expand upon, connections formed in war and promote the advancement of engineering and its related professions. Early in 1920, the first SAME posts were established, providing former colleagues and new engineers opportunities to connect face-to-face, and establishing post-to-community relationships across the United States.
The original nine-member board appointed by General Black also arranged the donation of Professional Memoirs, a magazine published by the Engineer Bureau since 1909, and its assets, to SAME with the blessing of General of the Armies John J. Pershing. Those memoirs were subsequently renamed The Military Engineer, which has been continuously published since it debuted in 1920.
United States Vice President Charles G. Dawes served as SAME's 8th president. The year before assuming his role as president of SAME, Dawes was awarded the 1925 Nobel Peace Prize for his work on German reparations in 1924.
Due to its close ties with the uniformed services of the United States, several branches of the military and the Public Health Service allow members to wear the SAME ribbon on the uniform after all military and foreign decorations and awards. Colonel and university president Blake R. Van Leer was also a member.
Governance
Headquartered in Alexandria, Virginia, SAME provides its more than 25,000 members extensive opportunities for industry-government engagement, training, education and professional development through a robust offering of conferences, workshops, networking events and publications. With a membership that includes recent service academy graduates and retired engineering officers, project managers and corporate executives, uniformed and public sector professionals and private sector experts, SAME bridges the gaps between critical stakeholders to help secure the nation.
SAME consists of 95 Posts and more than 30 student chapters and field chapters around the world along with a national office staff. Nationally, the organization is led by a volunteer board of direction that comprises six national officers, 18 regional vice presidents, the chairs of the Mission Committees & Councils and 12 elected directors who serve three-year terms and are elected in groups of four annually.
SAME membership is open to anyone in the U.S. and abroad.
See also
Goethals Medal
References
External links
1920 establishments in Washington, D.C.
501(c)(3) organizations
Aftermath of World War I in the United States
Engineering organizations
Magazine publishing companies of the United States
Military engineering of the United States
Nonpartisan organizations in the United States
Non-profit organizations based in Alexandria, Virginia
Organizations established in 1920
United States military support organizations | Society of American Military Engineers | [
"Engineering"
] | 714 | [
"nan"
] |
14,630,593 | https://en.wikipedia.org/wiki/Anocutaneous%20line | The anocutaneous line, also called the Hilton white line or intersphincteric groove, is a boundary in the anal canal.
Below the anocutaneous line, lymphatic drainage is to the superficial inguinal nodes.
The anocutaneous line is slightly below the pectinate line and a landmark for the intermuscular border between internal and external anal sphincter muscles.
The anocutaneous line represents the transition point from non-keratinized stratified squamous epithelium of the anal canal to keratinized stratified squamous epithelium of the anus and perianal skin.
In live persons, the color of the line is white, hence the alternative name. It is named for John Hilton.
See also
Anal canal
Dentate line
Hilton's Law
References
Digestive system | Anocutaneous line | [
"Biology"
] | 185 | [
"Digestive system",
"Organ systems"
] |
14,630,938 | https://en.wikipedia.org/wiki/Ahmad%20Zirakzadeh | Ahmad Zirakzadeh (; 6 March 1908 – 25 August 1993) was one of the founders of National Front of Iran, an Iranian party which was considered the backbone of Mohammad Mosaddegh's government. He made history by defending the country against Operation Ajax.
Early life
He was born in 1908 to a Yazdi father and mother in a religious family in Tehran. His father, Mirza Zirak, was an Islamic cleric who traveled widely; on one of his trips he visited a Bakhtiari khan (Sepahkhan), who took Mirza Zirak with him to Tehran, where Ahmad was born. Later his family moved to the Bakhtiari town of Shahrekord, which was called Dehe Kord at that time. After the Iranian Constitutional Revolution, some of the Bakhtiari khans decided to send their sons to France to study at Western universities and experience Western culture, which was considered an honor at that time; they chose Mirza Zirak to accompany the children as an elder, and he took his older son Gholamhossein with him. After Gholamhossein returned from Europe to Iran, he talked to Ahmad about the democracy and nationalism he had seen in France, and these ideas had a great effect on Ahmad's later political life.
Education
In 1925 Ahmad Zirakzadeh went to Dar ul-Funun high school in Tehran. After one year he won a scholarship from the Iranian ministry of defense and went to the École Polytechnique in Paris to study marine engineering, where he encountered a world totally different from Iran.
Political life
After returning to Iran in 1935 he had to work for the navy because of his scholarship, so he went to Khorramshahr and worked there as a navy officer; in 1941 he was transferred to the ministry of transportation in Bandar Anzali.
After the engineers' strike in 1943 he felt that he "had to change something", so he went to Tehran and was elected a leading member of the center of engineers.
Zirakzadeh, Farivar, Shafagh, Moazemi and some others founded the Iran Party, a nationalist party considered one of the most important pillars of the National Front of Iran and the backbone of Mohammed Mosaddeq's government; Zirakzadeh became its secretary general. The Iran Party's first newspaper was Shafagh, under the management of Dr. Jazayeri; the second, Jebhe, was published by Zirakzadeh. Later the Iran Party formed an alliance with the Tudeh Party, which damaged its reputation.
After Mosaddeq's election as prime minister of Iran in 1951, Zirakzadeh became deputy minister of the economy. In the 1952 parliamentary elections his name was placed on the National Front of Iran list with eleven other members, and he was elected to the 17th parliament for the people of Tehran, continuing to work on the nationalization of oil with Mosaddeq, Hossein Fatemi, Makki, Hasibi, Asghar Parsa and other members of the National Front of Iran.
The 1953 coup and after
He was in Mosaddeq's house during the 1953 coup, which was triggered by the CIA, and broke his leg while trying to escape. After the coup he hid from the government for about two and a half years, was then arrested, and spent about five months in prison. After his release he established a private business. He was arrested again in 1957 because the Shah, in power again after the 1953 coup, suspected him of working against the government. After release he became a member of the second National Front of Iran, but he was not as active in this new political party as in the past; as he himself said, "I did not have the heat that I used to have before". He went to the United States in 1980, one year after the Islamic revolution, and lived there for many years before returning to Iran.
Before his death he made a will to establish a science foundation with his money, which now operates as the Zirakzadeh Science Foundation in Tehran.
He died in 1993 at the age of 85 in Tehran and is buried in Behesht-e Zahra.
See also
Mohammed Mosaddeq
References
External links
Zirakzadeh Science Foundation website.
1908 births
1993 deaths
National Front (Iran) MPs
Marine engineers
Iran Party politicians
20th-century Iranian engineers
Iranian prisoners and detainees
Burials at Behesht-e Zahra
École Polytechnique alumni | Ahmad Zirakzadeh | [
"Engineering"
] | 939 | [
"Marine engineers",
"Marine engineering"
] |
14,631,872 | https://en.wikipedia.org/wiki/Preservation%20metadata | Preservation metadata is item level information that describes the context and structure of a digital object. It provides background details pertaining to a digital object's provenance, authenticity, and environment. Preservation metadata, is a specific type of metadata that works to maintain a digital object's viability while ensuring continued access by providing contextual information, usage details, and rights.
As an increasing portion of the world’s information output shifts from analog to digital form, preservation metadata is an essential component of most digital preservation strategies, including digital curation, data management, digital collections management and the preservation of digital information over the long-term. It is an integral part of the data lifecycle and helps to document a digital object’s authenticity while maintaining usability across formats.
Definition of Preservation Metadata
Metadata surrounds and describes physical, digitized, and born-digital information objects. Preservation metadata is external metadata, created after a resource has been separated from its original creator, that relates to a digital object and adds value to it. The item-level data further stores technical details on the format, structure and uses of a digital resource, along with the history of all actions performed on the resource. These technical details include changes and decisions regarding digitization, migration to other formats, authenticity information such as technical features or custody history, and rights and responsibilities information. In addition, preservation metadata may include information on the physical condition of a resource.
Preservation metadata is dynamic and accessibility-focused, and should provide the following information: details about files and instructions for use; documentation of all updates or actions that have been performed on an object; object provenance and details pertaining to current and future custody; and details of the individual(s) responsible for the preservation of the object and changes made to it.
Components of Metadata
Provenance: Who has had custody/ownership of the digital object?
Authenticity: Is the digital object what it purports to be?
Preservation activity: What has been done to preserve the digital object?
Technical environment: What is needed to render, interact with and use the digital object?
Rights management: What intellectual property rights must be observed?
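For illustration, the components above might be recorded for a single digital object along the following lines (a hypothetical Python sketch; the field names are invented for clarity and are not taken from PREMIS or any other standard):

```python
# Hypothetical preservation-metadata record for one digital object;
# structure and field names are illustrative, not a real standard.
preservation_record = {
    "identifier": "obj-00042",
    "provenance": ["Donated by J. Smith, 2009", "Ingested by archive, 2010"],
    "fixity": {"algorithm": "SHA-256", "digest": "9f2c..."},  # authenticity
    "events": [
        {"date": "2015-03-02", "action": "migrated TIFF to JPEG 2000"},
    ],
    "environment": "renderable with any JPEG 2000 viewer",
    "rights": "In copyright; on-site access only",
}
```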
Types of Metadata Creation
Automatic (internal)
Manual (often created by a specialist)
Created during digitization
User-contributed
Uses of Preservation Metadata
Digital materials require constant maintenance and migration to new formats to accommodate evolving technologies and varied user needs. In order to survive into the future, digital objects need preservation metadata that exists independently from the systems which were used to create them. Without preservation metadata, digital material will be lost. “While a print book with a broken spine can be easily re-bound, a digital object that has become corrupted or obsolete is often impossible (or prohibitively expensive) to repair”. Preservation metadata provides the vital information which will make “digital objects self-documenting across time.” Data maintenance is considered a key piece of collections maintenance by ensuring the availability of a resource over time, a concept detailed in the Reference Model for an Open Archival Information System (OAIS). OAIS is a broad conceptual model which many organizations have followed in developing new preservation metadata element sets and Archival Information Packages (AIP). Preservation metadata provides continuity and contributes to the validity and authenticity of a resource by providing evidence of changes, adjustments and migrations.
The importance of preservation metadata is further indicated by its required inclusion in many Data Management Plans (DMPs) which are often key pieces of applications for grants and government funding.
Considered by the National Information Standards Organization (NISO) to be a subtype of administrative metadata, preservation metadata is used to promote:
Interoperability
Digital object management
Preservation (often in conjunction with technical metadata)
Complications of Preservation Metadata
Concerns over the poor management of digital objects include the possibility of a "digital dark age". Many institutions, including the Digital Curation Center (DCC) and the National Digital Stewardship Alliance (NDSA), are working to create access to digital objects while ensuring their continued viability. In the NDSA's Version 1 of the Levels of Digital Preservation, preservation metadata is grouped under Level Four, or "Repair your metadata", part of the macro preservation plan intended to make objects available over the long term.
The differing uses of digital resources across space, time and institutions require that one object or set of information be accessible in a variety of formats, with new preservation metadata created in each iteration. Anne Gilliland notes that these variations create the need for wider data standards that can be used within and across industries, which would in turn promote further use and interoperability. The value of interoperability is further underlined by the expense, both temporal and financial, of metadata creation.
The creation of preservation metadata by multiple users or institutions can complicate issues of ownership, access and responsibility. Depending on an institution’s mission, it may be difficult or outside the scope of responsibility to perform preservation while providing access. Further research into cross-institutional collaboration may provide greater insight into where data should be stored, and who should be managing it. Scholar Maggie Fieldhouse notes that the creation of metadata is shifting from collections managers to suppliers and publishers. Jerome McDonough identifies the benefits of multiple partners collaborating to improve metadata records around an object with preservation metadata as a key in cross-institutional communication. Sheila Corrall notes that the creation and management of preservation metadata represents the intersection between libraries, IT management and archival practice.
Developments in Preservation Metadata
Preservation metadata is a new and developing field. The OAIS Reference Model is a broad conceptual model which many organizations have followed in developing new preservation metadata element sets. Early projects in preservation metadata in the library community include CEDARS, NEDLIB, The National Library of Australia and the OCLC/RLG Working Group on Preservation Metadata. The ongoing work of maintaining, supporting, and coordinating future revisions to the PREMIS Data Dictionary is undertaken by the PREMIS Editorial Committee, hosted by the Library of Congress.
ARCHANGEL
Recent developments in blockchain technology and the need for verifiable sources have led to ARCHANGEL, a pilot program using blockchain in the archival space.
See also
Digital preservation
Preservation Metadata: Implementation Strategies (PREMIS)
Metadata
Digital library
Content Management Systems
References
Further reading
External links
Australian National Data Services (ANDS) Data Sharing Verbs
Capability Maturity Model for Scientific Data Management
CEDARS (2000) "Metadata for Digital Preservation: The CEDARS Project Outline Specification"
Controlled LOCKSS
Data Curation Profiles
Data Documentation Initiative (DDI)
DataONE Data Lifecycle
Dublin Core Metadata Initiative Preservation Community
Digital Curation Center (DCC) Digital Curation Lifecycle Model
Jisc
I2S2 Idealized Scientific Research Activity Lifecycle Model
Lots of Copies Keeps Stuff Safe (LOCKSS)
Merritt Repository
National Digital Stewardship Alliance (NDSA)
National Library of Australia, Preserving Access to Digital Information
National Library of New Zealand Metadata Standards Framework — Preservation Metadata
NEDLIB (2000) "Metadata for Long Term Preservation"
NISO Primer "Understanding Metadata"
Portico
Reference model for an Open Archival Information System (OAIS)
Research360 Institutional Research Lifecycle
UK Data Archive Data Lifecycle
Digital preservation
Museology
Metadata
Preservation (library and archival science)
Conservation and restoration of cultural heritage
Content management systems | Preservation metadata | [
"Technology"
] | 1,516 | [
"Metadata",
"Data"
] |
14,631,918 | https://en.wikipedia.org/wiki/Cytochrome%20b-245 | The Cytochrome b (-245) protein complex is composed of cytochrome b alpha (CYBA) and beta (CYBB) chain.
References
Cytochromes | Cytochrome b-245 | [
"Chemistry"
] | 38 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,632,170 | https://en.wikipedia.org/wiki/Magnesium%20iodide | Magnesium iodide is an inorganic compound with the chemical formula . It forms various hydrates . Magnesium iodide is a salt of magnesium and hydrogen iodide. These salts are typical ionic halides, being highly soluble in water.
Uses
Magnesium iodide has few commercial uses, but can be used to prepare compounds for organic synthesis.
Preparation
Magnesium iodide can be prepared from magnesium oxide, magnesium hydroxide, and magnesium carbonate by treatment with hydroiodic acid:

MgO + 2 HI → MgI2 + H2O
Mg(OH)2 + 2 HI → MgI2 + 2 H2O
MgCO3 + 2 HI → MgI2 + CO2 + H2O
Reactions
Magnesium iodide is stable at high heat under a hydrogen atmosphere, but decomposes in air at normal temperatures, turning brown from the release of elemental iodine. When heated in air, it decomposes completely to magnesium oxide.
Another method of preparing MgI2 is mixing powdered elemental iodine and magnesium metal. In order to obtain anhydrous MgI2, the reaction should be conducted in a strictly anhydrous atmosphere; dry diethyl ether can be used as a solvent.
Usage of magnesium iodide in the Baylis-Hillman reaction tends to give (Z)-vinyl compounds.
Demethylation of certain aromatic methyl ethers can be afforded using magnesium iodide in diethyl ether.
Hydrates
Two hydrates are known, the octahydrate and the nonahydrate, both verified by X-ray crystallography. These hydrates feature [Mg(H2O)6]2+ ions.
References
Iodides
Magnesium compounds
Alkaline earth metal halides
Deliquescent materials | Magnesium iodide | [
"Chemistry"
] | 307 | [
"Deliquescent materials"
] |
14,633,002 | https://en.wikipedia.org/wiki/Flushing%20trough | A flushing trough is a long cistern which serves several toilet pans. It is designed to allow a shorter interval between flushes than individual cisterns.
Flushing troughs were commonly used in places such as schools, colleges, public toilets, factories and public buildings where repeated use of the flushing cistern was required in a short period of time. Such troughs were used by local councils in the UK into the 1980s.
Background
Water byelaws in the United Kingdom restricted the volume of water that could be used to flush WCs and urinals. Water boards typically required valveless siphonic cisterns that were designed to be "water waste preventers": these deliver a fixed volume of water on every flush and do not allow water to run into a WC pan continuously. A typical siphonic cistern is emptied completely when it is flushed, and can only be flushed again once it has refilled: the delay between flushes was found to be inconvenient in busy lavatories such as those in schools, factories or public conveniences. The flushing trough was designed to overcome this delay by allowing a fixed volume of water to be discharged from a larger cistern.
Development
The flushing trough was developed by Adamsez Limited and a patent was issued to MJ Adams in 1912 for a flushing trough that used the bell siphon flushing system. A further patent was issued in 1928 to AH Adams for a flushing trough that used the plate siphon mechanism, marketed as the 'Epic'. Advertisements by Adamsez stated that 25,000 were in use by 1939. Rival manufacturer Shanks obtained a patent for a modified version in 1935 which they marketed as the 'Alisa'.
Design
Flushing trough cisterns were usually made of cast iron or galvanised steel, but were also manufactured in fireclay and plastic, and could serve 2 or more toilets. The trough would typically span a row of cubicles, with an individual siphon and flush chain for each closet. The lever arm connecting the siphon plate to the flush chain was often fixed directly to a pivot on the siphon rather than the cistern, so the arrangement of the siphons was highly flexible: flush pipes could be fitted in the middle or side of the cubicles; flush chains could be arranged at the back or front of the trough, or through the bottom of the trough via a standpipe. Flushing troughs could also be concealed in ducts behind the wall of a range of WCs, with the flush chains linked to flush levers. Although flushing troughs generally proved reliable, a key disadvantage is that repair of one siphon requires all WCs served by the trough to be out of service.
Operation
Each siphon in a flushing trough is connected to its own timing box by a pipe. Siphonic action is started in the same way as an ordinary flushing cistern. As the water is siphoned from the trough, water is also sucked from the timing box and the water level inside the box falls rapidly, with air drawn into the timing box through a 'snorkel' vent pipe. When the timing box has been emptied of water, air flows through the timing box and into the siphon to break the siphonic action, stopping the flush. The timing box quickly refills with water through a hole in its side. The siphon is then ready to flush again.
Up to seven siphons would be supplied by a single ballcock, which would refill the trough whenever the water level fell.
References
Toilets | Flushing trough | [
"Biology"
] | 720 | [
"Excretion",
"Toilets"
] |
14,633,213 | https://en.wikipedia.org/wiki/BigBelly | Bigbelly was originally a solar powered trash-compacting bin, manufactured by U.S. company Big belly Solar Inc for use in public spaces such as parks, beaches, amusement parks, universities, retail properties, grocery industry and food service operators. The bin was designed and originally manufactured in Needham, Massachusetts, by Seahorse Power, a company set up in 2003 with the aim of reducing fossil fuel consumption. Due to the bin's commercial success, Seahorse Power changed its name to BigBelly Solar.
Although solar power is still an important feature, the company has since created self-powered stations for use where sunlight may not be available, such as under a convenience store's dispenser canopy, and AC-powered stations for applications such as corporate cafeterias.
The bin
The bin has a capacity of 567 litres. Its compaction mechanism exerts 5.3 kN of force, increasing the bin's effective capacity fivefold. The compaction mechanism is chain-driven, using no hydraulic fluids. Maintenance consists of lubricating the front door lock annually. The mechanism runs on a standard 12-volt battery, which is kept charged by the solar panel. The battery reserve lasts for approximately three weeks. Wireless-enabled units report their status to the CLEAN (Collection, Logistics, Efficiency And Notification) dashboard, which gives waste management and administration insights for monitoring and route optimization. BigBelly Solar also provides companion recycling units that allow cities, parks and universities to collect single-stream or separated recyclable materials in public spaces.
The first machine was installed in Vail, Colorado, in 2004.
The city of Spokane, Washington, installed 70 of the "smart" garbage bins in 2018.
In July 2023 the city of Münster in Germany began an eight-week test of internet-connected BigBelly garbage bins.
References
Solar Compactors Make Mincemeat of Trash, All Things Considered, NPR, July 17, 2007
External links
BigBellySolar.com - Manufacturer's website
American inventions
Solar-powered devices
Waste treatment technology | BigBelly | [
"Chemistry",
"Engineering"
] | 422 | [
"Water treatment",
"Waste treatment technology",
"Environmental engineering"
] |
14,633,281 | https://en.wikipedia.org/wiki/Twig%20work | Twig-work is the term applied to architectural details constructed of twigs and branches to form decorative motifs in buildings and furniture. Carpentry or woodworking using wood that has not been milled into lumber and is still in its natural shape describes the national park service rustic style.
Construction
Joinery on twigs and branches is similar to joinery for lumber. Mortise and tenon joints are strong, but also labor-intensive and time-consuming. Twigs and branches can also be fastened with nails. Where one branch meets another, the ends must be coped, or cut to match the curve.
See also
Bentwood
Echo Camp
Knollwood Club
Rustic furniture
References
External links
New York Times, "Twigs That Grew Up As Tables", January 2, 1992
Adirondacks
Architectural elements
Carpentry
Woodworking | Twig work | [
"Technology",
"Engineering"
] | 165 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
14,633,642 | https://en.wikipedia.org/wiki/POLG | DNA polymerase subunit gamma (POLG or POLG1) is an enzyme that in humans is encoded by the POLG gene. Mitochondrial DNA polymerase is heterotrimeric, consisting of a homodimer of accessory subunits plus a catalytic subunit. The protein encoded by this gene is the catalytic subunit of mitochondrial DNA polymerase. Defects in this gene are a cause of progressive external ophthalmoplegia with mitochondrial DNA deletions 1 (PEOA1), sensory ataxic neuropathy dysarthria and ophthalmoparesis (SANDO), Alpers-Huttenlocher syndrome (AHS), and mitochondrial neurogastrointestinal encephalopathy syndrome (MNGIE).
Structure
POLG is located on the q arm of chromosome 15 in position 26.1 and has 23 exons. The POLG gene produces a 140 kDa protein composed of 1,239 amino acids. POLG, the protein encoded by this gene, is a member of the DNA polymerase type-A family. It localizes to the mitochondrial nucleoid, uses Mg2+ as a cofactor, and contains 15 turns, 52 beta strands, and 39 alpha helices. POLG contains a polyglutamine tract near its N-terminus that may be polymorphic. Two transcript variants encoding the same protein have been found for this gene.
Function
POLG is a gene that codes for the catalytic subunit of the mitochondrial DNA polymerase, called DNA polymerase gamma. The human POLG cDNA and gene were cloned and mapped to chromosome band 15q25. In eukaryotic cells, the mitochondrial DNA is replicated by DNA polymerase gamma, a trimeric protein complex composed of a catalytic subunit, POLG, and a dimeric accessory subunit of 55 kDa encoded by the POLG2 gene. The catalytic subunit contains three enzymatic activities, a DNA polymerase activity, a 3’-5’ exonuclease activity that proofreads misincorporated nucleotides, and a 5’-dRP lyase activity required for base excision repair.
Catalytic activity
Deoxynucleoside triphosphate + DNA(n) = diphosphate + DNA(n+1).
Clinical significance
Mutations in the POLG gene are associated with several mitochondrial diseases, progressive external ophthalmoplegia with mitochondrial DNA deletions 1 (PEOA1), sensory ataxic neuropathy dysarthria and ophthalmoparesis (SANDO), Alpers-Huttenlocher syndrome (AHS), and mitochondrial neurogastrointestinal encephalopathy syndrome (MNGIE). Pathogenic variants have also been linked with fatal congenital myopathy and gastrointestinal pseudo-obstruction and fatal infantile hepatic failure. A list of all published mutations in the POLG coding region and their associated disease can be found at the Human DNA Polymerase Gamma Mutation Database.
Mice heterozygous for a Polg mutation are only able to replicate their mitochondrial DNA inaccurately, so that they sustain a 500-fold higher mutation burden than normal mice. These mice show no clear features of rapidly accelerated aging, indicating that mitochondrial mutations do not have a causal role in natural aging.
Interactions
POLG has been shown to have 50 binary protein-protein interactions including 32 co-complex interactions. POLG appears to interact with POLG2, Dlg4, Tp53, and Sod2.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on POLG-Related Disorders
DNA replication | POLG | [
"Biology"
] | 750 | [
"Genetics techniques",
"DNA replication",
"Molecular genetics"
] |
14,633,662 | https://en.wikipedia.org/wiki/Namespace-based%20Validation%20Dispatching%20Language | Namespace-based Validation Dispatching Language (NVDL) is an XML schema language for validating XML documents that integrate with multiple namespaces. It is an ISO/IEC standard, and it is Part 4 of the DSDL schema specification. Much of the work on NVDL is based on the older Namespace Routing Language.
Validation
Most XML languages are based on a single XML namespace. The expectation in these cases is that XML elements in a particular namespace belong to that language, and elements in another namespace belong to another language. Many XML languages allow the use of arbitrary elements from other namespaces.
The problem arises during the attempt to validate these hybrid documents. Each language is defined by a specific XML schema, but there is no linkage between the schemas.
The purpose of NVDL is to provide that linkage, based on namespaces. By associating a schema validator with an NVDL schema, the validator can use multiple schemas to validate a single document, switching between them based on the namespaces used in that document.
Format
NVDL documents contain a list of rules, each of which has one or more actions to take when that rule is true. Rules include a specific namespace and a mode setting. NVDL recognizes the mode as a particular piece of state that changes as the document is processed.
Actions occur when a rule is true. Actions can include validating against a schema, declaring the instance document invalid, accepting this part of the instance document as valid, and continuing to validate as the parent did. Actions can also change the current NVDL mode. Multiple actions can be taken when a rule is true; this allows a section of the instance document to be validated with multiple schemas of different types.
Example
<rules xmlns="http://purl.oclc.org/dsdl/nvdl/ns/structure/1.0">
<namespace ns="http://www.w3.org/1999/xhtml">
<validate schema="xhtml.rng"/>
</namespace>
<namespace ns="http://www.w3.org/2000/svg/">
<validate schema="svg.sch"/>
</namespace>
<anyNamespace>
<reject/>
</anyNamespace>
</rules>
This NVDL schema will validate the parts that use the XHTML 1.0 namespace with a RELAX NG schema, validate the parts that use the SVG 1.0 namespace with a Schematron schema, and reject the document as invalid if it encounters elements with any other namespace.
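The namespace-dispatch idea itself is easy to illustrate outside of NVDL. The following Python sketch (an illustration only, not an NVDL processor; the routing table mirrors the example schema above and the sample document is hypothetical) walks an XML document with the standard-library ElementTree and reports which schema each element would be routed to, rejecting any element in an unlisted namespace just as the anyNamespace rule above does.

import xml.etree.ElementTree as ET

# Hypothetical routing table mirroring the NVDL example above:
# namespace URI -> schema that would validate elements in that namespace.
RULES = {
    "http://www.w3.org/1999/xhtml": "xhtml.rng",
    "http://www.w3.org/2000/svg": "svg.sch",
}

SAMPLE = """\
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <svg xmlns="http://www.w3.org/2000/svg"><circle r="5"/></svg>
  </body>
</html>"""

def dispatch(xml_text):
    """Report which schema each element would be dispatched to."""
    for elem in ET.fromstring(xml_text).iter():
        # ElementTree encodes namespaced tags as '{uri}localname'.
        if elem.tag.startswith("{"):
            ns, _, local = elem.tag[1:].partition("}")
        else:
            ns, local = "", elem.tag
        schema = RULES.get(ns)
        if schema is None:
            print(f"reject: <{local}> in unhandled namespace {ns!r}")
        else:
            print(f"<{local}> -> validate with {schema}")

dispatch(SAMPLE)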
External links
NVDL information
NVDL tutorial
An introduction to NVDL
ISO/IEC standards
XML-based standards | Namespace-based Validation Dispatching Language | [
"Technology"
] | 610 | [
"Computer standards",
"XML-based standards"
] |
13,480,113 | https://en.wikipedia.org/wiki/Stein%E2%80%93Str%C3%B6mberg%20theorem | In mathematics, the Stein–Strömberg theorem or Stein–Strömberg inequality is a result in measure theory concerning the Hardy–Littlewood maximal operator. The result is foundational in the study of the problem of differentiation of integrals. The result is named after the mathematicians Elias M. Stein and Jan-Olov Strömberg.
Statement of the theorem
Let λn denote n-dimensional Lebesgue measure on n-dimensional Euclidean space Rn and let M denote the Hardy–Littlewood maximal operator: for a function f : Rn → R, Mf : Rn → R is defined by
\[
Mf(x) = \sup_{r > 0} \frac{1}{\lambda^{n}(B_{r}(x))} \int_{B_{r}(x)} |f(y)| \, \mathrm{d}\lambda^{n}(y),
\]
where Br(x) denotes the open ball of radius r with center x. Then, for each p > 1, there is a constant Cp > 0, depending only on p, such that, for all natural numbers n and functions f ∈ Lp(Rn; R),
\[
\| Mf \|_{L^{p}(\mathbf{R}^{n})} \leq C_{p} \, \| f \|_{L^{p}(\mathbf{R}^{n})} .
\]
In general, a maximal operator M is said to be of strong type (p, p) if
\[
\| Mf \|_{L^{p}(\mathbf{R}^{n})} \leq C_{p, n} \, \| f \|_{L^{p}(\mathbf{R}^{n})}
\]
for all f ∈ Lp(Rn; R). Thus, the Stein–Strömberg theorem is the statement that the Hardy–Littlewood maximal operator is of strong type (p, p) uniformly with respect to the dimension n.
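For intuition, the maximal operator can be approximated numerically. The sketch below (a naive one-dimensional discretization with an arbitrary test function; windows are truncated at the domain boundary, so it is only a finite-domain approximation, not a proof of anything) computes the largest average of |f| over intervals centered at each grid point and compares Lp norms.

import numpy as np

def maximal_function(f):
    """Naive discrete Hardy-Littlewood maximal function on a 1-D grid:
    at each point, take the largest average of |f| over centered windows."""
    n = len(f)
    absf = np.abs(f)
    Mf = np.zeros(n)
    for i in range(n):
        best = 0.0
        for r in range(1, n):                    # window half-width, in grid steps
            lo, hi = max(0, i - r), min(n, i + r + 1)
            best = max(best, absf[lo:hi].mean())
        Mf[i] = best
    return Mf

x = np.linspace(-5.0, 5.0, 201)
f = np.exp(-x**2)                                # a Gaussian bump
Mf = maximal_function(f)

p = 2.0
ratio = (np.sum(Mf**p) / np.sum(np.abs(f)**p)) ** (1.0 / p)
print("||Mf||_p / ||f||_p ~", ratio)             # finite, as the strong-type bound suggests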
References
Inequalities
Theorems in measure theory
Operator theory | Stein–Strömberg theorem | [
"Mathematics"
] | 253 | [
"Theorems in mathematical analysis",
"Theorems in measure theory",
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Mathematical theorems"
] |
13,480,124 | https://en.wikipedia.org/wiki/Spinodal%20decomposition | Spinodal decomposition is a mechanism by which a single thermodynamic phase spontaneously separates into two phases (without nucleation). Decomposition occurs when there is no thermodynamic barrier to phase separation. As a result, phase separation via decomposition does not require the nucleation events resulting from thermodynamic fluctuations, which normally trigger phase separation.
Spinodal decomposition is observed when mixtures of metals or polymers separate into two co-existing phases, each rich in one species and poor in the other. When the two phases emerge in approximately equal proportion (each occupying about the same volume or area), characteristic intertwined structures are formed that gradually coarsen (see animation). The dynamics of spinodal decomposition is commonly modeled using the Cahn–Hilliard equation.
Spinodal decomposition is fundamentally different from nucleation and growth. When there is a nucleation barrier to the formation of a second phase, the system takes time to overcome that barrier. As there is no barrier (by definition) to spinodal decomposition, some fluctuations (in the order parameter that characterizes the phase) start growing instantly. Furthermore, in spinodal decomposition, the two distinct phases start growing in any location uniformly throughout the volume, whereas a nucleated phase change begins at a discrete number of points.
Spinodal decomposition occurs when a homogenous phase becomes thermodynamically unstable. An unstable phase lies at a maximum in free energy. In contrast, nucleation and growth occur when a homogenous phase becomes metastable. That is, another biphasic system becomes lower in free energy, but the homogenous phase remains at a local minimum in free energy, and so is resistant to small fluctuations. J. Willard Gibbs described two criteria for a metastable phase: that it must remain stable against a small change over a large area, and that it must remain stable against a large change over a small area.
History
In the early 1940s, Bradley reported the observation of sidebands around the Bragg peaks in the X-ray diffraction pattern of a Cu-Ni-Fe alloy that had been quenched and then annealed inside the miscibility gap. Further observations on the same alloy were made by Daniel and Lipson, who demonstrated that the sidebands could be explained by a periodic modulation of composition in the <100> directions. From the spacing of the sidebands, they were able to determine the wavelength of the modulation, which was of the order of 100 angstroms (10 nm).
The growth of a composition modulation in an initially homogeneous alloy implies uphill diffusion or a negative diffusion coefficient. Becker and Dehlinger had already predicted a negative diffusivity inside the spinodal region of a binary system, but their treatments could not account for the growth of a modulation of a particular wavelength, such as was observed in the Cu-Ni-Fe alloy. In fact, any model based on Fick's law yields a physically unacceptable solution when the diffusion coefficient is negative.
The first explanation of the periodicity was given by Mats Hillert in his 1955 Doctoral Dissertation at MIT. Starting with a regular solution model, he derived a flux equation for one-dimensional diffusion on a discrete lattice. This equation differed from the usual one by the inclusion of a term, which allowed for the effect of the interfacial energy on the driving force of adjacent interatomic planes that differed in composition. Hillert solved the flux equation numerically and found that inside the spinodal it yielded a periodic variation of composition with distance. Furthermore, the wavelength of the modulation was of the same order as that observed in the Cu-Ni-Fe alloys.
Building on Hillert's work, a more flexible continuum model was subsequently developed by John W. Cahn and John Hilliard, who included the effects of coherency strains as well as the gradient energy term. The strains are significant in that they dictate the ultimate morphology of the decomposition in anisotropic materials.
Cahn–Hilliard model for spinodal decomposition
Free energies in the presence of small amplitude fluctuations, e.g. in concentration, can be evaluated using an approximation introduced by Ginzburg and Landau to describe magnetic field gradients in superconductors. This approach allows one to approximate the free energy as an expansion in terms of the concentration gradient $\nabla c$, a vector. Since free energy is a scalar and we are probing near its minima, the term proportional to $\nabla c$ is negligible. The lowest order term is the quadratic expression $\kappa (\nabla c)^2$, a scalar. Here $\kappa$ is a parameter that controls the free energy cost of variations in concentration $c$.
The Cahn–Hilliard free energy is then
\[
F = \int_{V} \left[ f_b(c) + \kappa \, (\nabla c)^2 \right] \mathrm{d}V,
\]
where $f_b(c)$ is the bulk free energy per unit volume of the homogeneous solution, and the integral is over the volume $V$ of the system.
We now want to study the stability of the system with respect to small fluctuations in the concentration $\delta c$, for example a sine wave of amplitude $a$ and wavevector $q = 2\pi/\lambda$, for $\lambda$ the wavelength of the concentration wave. To be thermodynamically stable, the free energy change $\delta F$ due to any small amplitude concentration fluctuation $\delta c = a \sin(q \cdot x)$ must be positive.
We may expand $f_b$ about the average composition $c_0$ as follows:
\[
f_b(c_0 + \delta c) = f_b(c_0) + \delta c \, \frac{\partial f_b}{\partial c} + \frac{(\delta c)^2}{2} \, \frac{\partial^2 f_b}{\partial c^2} + \cdots
\]
and for the perturbation $\delta c = a \sin(q \cdot x)$ the free energy change is
\[
\delta f = \delta c \, \frac{\partial f_b}{\partial c} + \frac{(\delta c)^2}{2} \, \frac{\partial^2 f_b}{\partial c^2} + \kappa \, (\nabla \delta c)^2 .
\]
When this is integrated over the volume $V$, the $\partial f_b / \partial c$ term gives zero, while $(\delta c)^2$ and $(\nabla \delta c)^2$ integrate to give $a^2 V/2$ and $a^2 q^2 V/2$. So, then
\[
\delta F = \frac{a^2 V}{4} \left[ \frac{\partial^2 f_b}{\partial c^2} + 2 \kappa q^2 \right].
\]
As $a^2 V / 4 > 0$, thermodynamic stability requires that the term in brackets be positive. The $2 \kappa q^2$ term is always positive but tends to zero at small wavevectors, large wavelengths. Since we are interested in macroscopic fluctuations, $q \to 0$, stability requires that the second derivative of the free energy be positive. When it is, there is no spinodal decomposition, but when it is negative, spinodal decomposition will occur. Then fluctuations with wavevectors $q < q_c$ become spontaneously unstable, where the critical wave number $q_c$ is given by:
\[
q_c = \sqrt{ - \frac{1}{2 \kappa} \frac{\partial^2 f_b}{\partial c^2} },
\]
which corresponds to fluctuations above a critical wavelength
\[
\lambda_c = \frac{2 \pi}{q_c} = 2 \pi \sqrt{ - \frac{2 \kappa}{\partial^2 f_b / \partial c^2} } .
\]
Dynamics of spinodal decomposition when molecules move via diffusion
Spinodal decomposition can be modeled using a generalized diffusion equation:
\[
\frac{\partial c}{\partial t} = M \nabla^2 \mu
\]
for $\mu$ the chemical potential and $M$ the mobility. As pointed out by Cahn, this equation can be considered as a phenomenological definition of the mobility M, which must by definition be positive.
It consists of the ratio of the flux to the local gradient in chemical potential. The chemical potential is a variation of the free energy and when this is the Cahn–Hilliard free energy this is
\[
\mu = \frac{\delta F}{\delta c} = \frac{\partial f_b}{\partial c} - 2 \kappa \nabla^2 c
\]
and so
\[
\frac{\partial c}{\partial t} = M \nabla^2 \left( \frac{\partial f_b}{\partial c} - 2 \kappa \nabla^2 c \right),
\]
and now we want to see what happens to a small concentration fluctuation $\delta c = a \, e^{\omega t} \sin(q \cdot x)$ - note that now it has a time dependence as well as a wavevector dependence. Here $\omega$ is a growth rate. If $\omega < 0$ then the perturbation shrinks to nothing, the system is stable with respect to small perturbations or fluctuations, and there is no spinodal decomposition. However, if $\omega > 0$ then the perturbation grows and the system is unstable with respect to small perturbations or fluctuations: There is spinodal decomposition.
Substituting in this concentration fluctuation, we get
\[
\omega = - M q^2 \left[ \frac{\partial^2 f_b}{\partial c^2} + 2 \kappa q^2 \right].
\]
This gives the same expressions for the stability as above, but it also gives an expression for the growth rate of concentration perturbations, which has a maximum at a wavevector
\[
q_{\max} = \frac{q_c}{\sqrt{2}} = \sqrt{ - \frac{1}{4 \kappa} \frac{\partial^2 f_b}{\partial c^2} } .
\]
So, at least at the beginning of spinodal decomposition, we expect the growing concentrations to mostly have this wavevector.
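A quick numerical check of this dispersion relation (illustrative parameter values only, not tied to any particular material) confirms that the fastest-growing mode sits at $q_c/\sqrt{2}$:

import numpy as np

M, kappa = 1.0, 0.5
f2 = -1.0                     # second derivative of f_b, negative inside the spinodal

q = np.linspace(1e-3, 2.0, 4000)
omega = -M * q**2 * (f2 + 2.0 * kappa * q**2)   # growth rate of mode q

q_c = np.sqrt(-f2 / (2.0 * kappa))              # marginal (critical) wavevector
q_max = q[np.argmax(omega)]                     # numerically fastest-growing mode
print(q_c / np.sqrt(2.0), q_max)                # both approximately 0.707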
Phase diagram
This type of phase transformation is known as spinodal decomposition, and can be illustrated on a phase diagram exhibiting a miscibility gap. Thus, phase separation occurs whenever a material transitions into the unstable region of the phase diagram. The boundary of the unstable region, sometimes referred to as the binodal or coexistence curve, is found by performing a common tangent construction of the free-energy diagram. Inside the binodal is a region called the spinodal, which is found by determining where the curvature of the free-energy curve is negative. The binodal and spinodal meet at the critical point. It is when a material is moved into the spinodal region of the phase diagram that spinodal decomposition can occur.
The free energy curve is plotted as a function of composition for a temperature below the consolute temperature, T. Equilibrium phase compositions are those corresponding to the free energy minima. Regions of negative curvature (∂2f/∂c2 < 0 ) lie within the inflection points of the curve (∂2f/∂c2 = 0 ) which are called the spinodes. Their locus as a function of temperature defines the spinodal curve. For compositions within the spinodal, a homogeneous solution is unstable against infinitesimal fluctuations in density or composition, and there is no thermodynamic barrier to the growth of a new phase. Thus, the spinodal represents the limit of physical and chemical stability.
To reach the spinodal region of the phase diagram, a transition must take the material through the binodal region or the critical point. Often phase separation will occur via nucleation during this transition, and spinodal decomposition will not be observed. To observe spinodal decomposition, a very fast transition, often called a quench, is required to move from the stable to the spinodal unstable region of the phase diagram.
In some systems, ordering of the material leads to a compositional instability and this is known as a conditional spinodal, e.g. in the feldspars.
Coherency strains
For most crystalline solid solutions, there is a variation of lattice parameters with the composition. If the lattice of such a solution is to remain coherent in the presence of a composition modulation, mechanical work has to be done to strain the rigid lattice structure. The maintenance of coherency thus affects the driving force for diffusion.
Consider a crystalline solid containing a one-dimensional composition modulation along the x-direction. We calculate the elastic strain energy for a cubic crystal by estimating the work required to deform a slice of material so that it can be added coherently to an existing slab of cross-sectional area. We will assume that the composition modulation is along the x' direction and, as indicated, a prime will be used to distinguish the reference axes from the standard axes of a cubic system (that is, along the <100>).
Let the lattice spacing in the plane of the slab be ao and that of the undeformed slice a. If the slice is to be coherent after the addition of the slab, it must be subjected to a strain δ in the z' and y' directions which is given by:
In the first step, the slice is deformed hydrostatically in order to produce the required strains to the z' and y' directions. We use the linear compressibility of a cubic system 1 / ( c11 + 2 c12 ) where the c's are the elastic constants. The stresses required to produce a hydrostatic strain of δ are therefore given by:
The elastic work per unit volume is given by:
where the ε's are the strains. The work performed per unit volume of the slice during the first step is therefore given by:
In the second step, the sides of the slice parallel to the x' direction are clamped and the stress in this direction is relaxed reversibly. Thus, εz' = εy' = 0. The result is that:
The net work performed on the slice in order to achieve coherency is given by:
or
The final step is to express c1'1' in terms of the constants referred to the standard axes. From the rotation of axes, we obtain the following:
where l, m, n are the direction cosines of the x' axis and, therefore the direction cosines of the composition modulation. Combining these, we obtain the following:
The existence of any shear strain has not been accounted for. Cahn considered this problem, and concluded that shear would be absent for modulations along <100>, <110>, <111> and that for other directions the effect of shear strains would be small. It then follows that the total elastic strain energy of a slab of cross-sectional area A is given by:
We next have to relate the strain δ to the composition variation. Let ao be the lattice parameter of the unstrained solid of the average composition co. Using a Taylor series expansion about co yields the following:
in which
where the derivatives are evaluated at co. Thus, neglecting higher-order terms, we have:
Substituting, we obtain:
This simple result indicates that the strain energy of a composition modulation depends only on the amplitude and is independent of the wavelength. For a given amplitude, the strain energy WE is proportional to Y. Consider a few special cases.
For an isotropic material:
so that:
This equation can also be written in terms of Young's modulus E and Poisson's ratio υ using the standard relationships:
Substituting, we obtain the following:
For most metals, the left-hand side of this equation is positive, so that the elastic energy will be a minimum for those directions that minimize the term: l2m2 + m2n2 + l2n2. By inspection, those are seen to be <100>. For this case:
the same as for an isotropic material. At least one metal (molybdenum) has an anisotropy of the opposite sign. In this case, the directions for minimum WE will be those that maximize the directional cosine function. These directions are <111>, and
As we will see, the growth rate of the modulations will be a maximum in the directions that minimize Y. These directions, therefore, determine the morphology and structural characteristics of the decomposition in cubic solid solutions.
Rewriting the diffusion equation and including the term derived for the elastic energy yields the following:
or
which can alternatively be written in terms of the diffusion coefficient D as:
The simplest way of solving this equation is by using the method of Fourier transforms.
Fourier transform
The motivation for the Fourier transformation comes from the study of a Fourier series. In the study of a Fourier series, complicated periodic functions are written as the sum of simple waves mathematically represented by sines and cosines. Due to the properties of sine and cosine, it is possible to recover the amount of each wave in the sum by an integral. In many cases it is desirable to use Euler's formula, which states that e2πiθ = cos 2πθ + i sin 2πθ, to write Fourier series in terms of the basic waves e2πiθ, with the distinct advantage of simplifying many unwieldy formulas.
The passage from sines and cosines to complex exponentials makes it necessary for the Fourier coefficients to be complex-valued. The usual interpretation of this complex number is that it gives both the amplitude (or size) of the wave present in the function and the phase (or the initial angle) of the wave. This passage also introduces the need for negative "frequencies". (For example, if θ were measured in seconds then the waves e2πiθ and e−2πiθ would both complete one cycle per second, but they represent different frequencies in the Fourier transform. Hence, frequency no longer measures the number of cycles per unit time, but is closely related.)
If A(β) is the amplitude of a Fourier component of wavelength λ and wavenumber β = 2π/λ, the spatial variation in composition can be expressed by the Fourier integral:
in which the coefficients are defined by the inverse relationship:
Substituting, we obtain on equating coefficients:
This is an ordinary differential equation that has the solution:
in which A(β) is the initial amplitude of the Fourier component of wavenumber β and R(β) is defined by:
or, expressed in terms of the diffusion coefficient D:
In a similar manner, the new diffusion equation:
has a simple sine wave solution given by:
where the amplification factor $R$ is obtained by substituting this solution back into the diffusion equation as follows:
For solids, the elastic strains resulting from coherency add terms to the amplification factor as follows:
where, for isotropic solids:
\[
Y = \frac{E}{1 - \nu},
\]
where E is Young's modulus of elasticity, ν is Poisson's ratio, and η is the linear strain per unit composition difference. For anisotropic solids, the elastic term depends on the direction in a manner that can be predicted by elastic constants and how the lattice parameters vary with composition. For the cubic case, Y is a minimum for either (100) or (111) directions, depending only on the sign of the elastic anisotropy.
Thus, by describing any composition fluctuation in terms of its Fourier components, Cahn showed that a solution would be unstable with respect to sinusoidal fluctuations above a critical wavelength. By relating the elastic strain energy to the amplitudes of such fluctuations, he formalized the wavelength or frequency dependence of the growth of such fluctuations, and thus introduced the principle of selective amplification of Fourier components of certain wavelengths. The treatment yields the expected mean particle size or wavelength of the most rapidly growing fluctuation.
Thus, the amplitude of composition fluctuations should grow continuously until a metastable equilibrium is reached with preferential amplification of components of particular wavelengths. The kinetic amplification factor R is negative when the solution is stable to the fluctuation, zero at the critical wavelength, and positive for longer wavelengths—exhibiting a maximum at exactly $\sqrt{2}$ times the critical wavelength.
Consider a homogeneous solution within the spinodal. It will initially have a certain amount of fluctuation from the average composition which may be written as a Fourier integral. Each Fourier component of that fluctuation will grow or diminish according to its wavelength.
Because of the maximum in R as a function of wavelength, those components of the fluctuation with $\sqrt{2}$ times the critical wavelength will grow fastest and will dominate. This "principle of selective amplification" depends on the initial presence of these wavelengths but does not critically depend on their exact amplitude relative to other wavelengths (provided the time is large compared with 1/R). It does not depend on any additional assumptions, since different wavelengths can coexist and do not interfere with one another.
Limitations of this theory would appear to arise from this assumption and the absence of an expression formulated to account for irreversible processes during phase separation which may be associated with internal friction and entropy production. In practice, frictional damping is generally present and some of the energy is transformed into thermal energy. Thus, the amplitude and intensity of a one-dimensional wave decrease with distance from the source, and for a three-dimensional wave, the decrease will be greater.
Dynamics in k-space
In the spinodal region of the phase diagram, the free energy can be lowered by allowing the components to separate, thus increasing the relative concentration of a component material in a particular region of the material. The concentration will continue to increase until the material reaches the stable part of the phase diagram. Very large regions of material will change their concentration slowly due to the amount of material that must be moved. Very small regions will shrink away due to the energy cost of maintaining an interface between two dissimilar component materials.
To initiate a homogeneous quench a control parameter, such as temperature, is abruptly and globally changed. For a binary mixture of $A$-type and $B$-type materials, the Landau free energy
\[
F = \int \mathrm{d}x \left[ \frac{A}{2} \varphi^2 + \frac{B}{4} \varphi^4 + \frac{\kappa}{2} (\nabla \varphi)^2 \right]
\]
is a good approximation of the free energy near the critical point and is often used to study homogeneous quenches. The mixture concentration $\varphi = \rho_a - \rho_b$ is the density difference of the mixture components, the control parameters which determine the stability of the mixture are $A$ and $B$, and the interfacial energy cost is determined by $\kappa$.
Diffusive motion often dominates at the length-scale of spinodal decomposition. The equation of motion for a diffusive system is
\[
\partial_t \varphi = \nabla \cdot \left( M \nabla \mu \right) + \xi,
\]
where $M$ is the diffusive mobility, $\xi$ is some random noise such that $\langle \xi \rangle = 0$, and the chemical potential $\mu$ is derived from the Landau free energy:
\[
\mu = \frac{\delta F}{\delta \varphi} = A \varphi + B \varphi^3 - \kappa \nabla^2 \varphi .
\]
We see that if $A < 0$, small fluctuations around $\varphi = 0$ have a negative effective diffusive mobility and will grow rather than shrink. To understand the growth dynamics, we disregard the fluctuating currents due to $\xi$, linearize the equation of motion around $\varphi = 0$ and perform a Fourier transform into $k$-space. This leads to
\[
\partial_t \tilde{\varphi}(\mathbf{k}, t) = - M k^2 \left( A + \kappa k^2 \right) \tilde{\varphi}(\mathbf{k}, t) = R(k) \, \tilde{\varphi}(\mathbf{k}, t),
\]
which has an exponential growth solution:
\[
\tilde{\varphi}(\mathbf{k}, t) = \tilde{\varphi}(\mathbf{k}, 0) \, e^{R(k) t} .
\]
Since the growth rate is exponential, the fastest growing angular wavenumber
\[
k_{sp} = \sqrt{ - \frac{A}{2 \kappa} }
\]
will quickly dominate the morphology. We now see that spinodal decomposition results in domains of the characteristic length scale called the spinodal length:
\[
\lambda_{sp} = \frac{2 \pi}{k_{sp}} = 2 \pi \sqrt{ - \frac{2 \kappa}{A} } .
\]
The growth rate of the fastest-growing angular wave number is
\[
R(k_{sp}) = - M k_{sp}^2 \left( A + \kappa k_{sp}^2 \right) = \frac{M A^2}{4 \kappa} = \frac{1}{t_{sp}},
\]
where $t_{sp}$ is known as the spinodal time.
The spinodal length and spinodal time can be used to nondimensionalize the equation of motion, resulting in universal scaling for spinodal decomposition.
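A minimal numerical sketch of these dynamics, assuming the Landau form of the chemical potential above with illustrative parameters, noise omitted, and a standard semi-implicit spectral time step (the stiff $-\kappa k^4$ term treated implicitly):

import numpy as np

# Illustrative quench parameters: A < 0 puts the mixture inside the spinodal.
A, B, kappa, M = -1.0, 1.0, 1.0, 1.0
N, L, dt, steps = 256, 128.0, 0.05, 4000

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)    # angular wavenumbers
k2, k4 = k**2, k**4

rng = np.random.default_rng(0)
phi = 0.01 * rng.standard_normal(N)             # small fluctuation around phi = 0

for _ in range(steps):
    # mu = A*phi + B*phi^3 - kappa*lap(phi); split into local and gradient parts.
    mu_local_hat = np.fft.fft(A * phi + B * phi**3)
    phi_hat = np.fft.fft(phi)
    # Semi-implicit update of d(phi)/dt = M * lap(mu).
    phi_hat = (phi_hat - dt * M * k2 * mu_local_hat) / (1.0 + dt * M * kappa * k4)
    phi = np.real(np.fft.ifft(phi_hat))

# Domains near the spinodal length should have coarsened out of the noise.
print("spinodal length:", 2.0 * np.pi * np.sqrt(-2.0 * kappa / A))
print("domain walls (sign changes):", int(np.sum(np.sign(phi[:-1]) != np.sign(phi[1:]))))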
Spinodal Architected Materials
Spinodal phase decomposition has been used to generate architected materials by interpreting one phase as solid and the other phase as void. These spinodal architected materials present interesting mechanical properties, such as high energy absorption, insensitivity to imperfections, superior mechanical resilience, and high stiffness-to-weight ratio. Furthermore, by controlling the phase separation, i.e., controlling the proportion of materials and/or imposing preferential directions in the decomposition, one can tune the density, strength, weight, and anisotropy of the resulting architected material. Another interesting property of spinodal materials is the capability to seamlessly transition between different classes, orientations, and densities, thereby enabling the manufacturing of effectively multi-material structures.
References
Further reading
External links
Brief statement by Mats Hillert
John Cahn's Homepage
Binary alloys
Composition profiles
Copper / Nickel / Tin alloys
Graphical representation of microstructural evolution
Condensed matter physics
Thermodynamics
Materials science
Critical phenomena
Phase transitions | Spinodal decomposition | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 4,472 | [
"Physical phenomena",
"Phase transitions",
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"Critical phenomena",
"Condensed matter physics",
"Thermodynamics",
"nan",
"Statistical mechanics",
"Matter",
"Dynamical systems"
] |
13,480,147 | https://en.wikipedia.org/wiki/Graduation%20%28scale%29 | A graduation is a marking used to indicate points on a visual scale, which can be present on a container, a measuring device, or the axes of a line plot, usually one of many along a line or curve, each in the form of short line segments perpendicular to the line or curve. Often, some of these line segments are longer and marked with a numeral, such as every fifth or tenth graduation. The scale itself can be linear (the graduations are spaced at a constant distance apart) or nonlinear.
Linear graduation of a scale occurs mainly (but not exclusively) on straight measuring devices, such as a rule or measuring tape, using units such as inches or millimetres.
Graduations can also be spaced at varying intervals, such as on a logarithmic scale. Graduations on a measuring cup, for instance, can vary in spacing because of the container's non-cylindrical shape.
Graduations along a curve
Circular graduations of a scale occur on a circular arc or limb of an instrument. In some cases, non-circular curves are graduated in instruments. A typical circular arc graduation is the division into angular measurements, such as degrees, minutes and seconds. These types of graduated markings are traditionally seen on devices ranging from compasses and clock faces to alidades found on such instruments as telescopes, theodolites, inclinometers, astrolabes, armillary spheres, and celestial spheres.
There can also be non-uniform graduations such as logarithmic or other scales such as seen on circular slide rules and graduated cylinders.
Manufacture of graduations
Graduations can be placed on an instrument by etching, scribing or engraving, painting, printing or other means. For durability and accuracy, etched or scribed marks are usually preferable to surface coatings such as paints and inks. Markings can be a combination of both physical marks such as a scribed line and a paint or other marking material. For example, it is common for black ink or paint to fill the grooves cut in a scribed rule. Inexpensive plastic devices can be molded and painted, or molded using two or more colors of plastic. Some rather high-quality devices can be manufactured with plastic and reveal high-precision graduations.
Graduations traditionally have been scribed into an instrument by hand with a sharp, hard tool. Later developments in devices such as dividing engines allowed the process to be automated with greater precision. Modern devices can be stamped, cut on a milling machine or with a CNC machine. In the case of stamping, the master has the precision built into itself and the stamped device is as accurate as the stamping process allows. Similarly, molding of plastic can be as precise as the mold process. With proper concern for such effects as thermal expansion or contraction and shrinkage, the precision can be very high.
US graduation style
The US graduation style of an instrument is a Federal-standard code used by manufacturers to quickly determine which types of scales are marked on the instrument.
Other commonly recognized styles are:
30–1 mm, 0.5 mm
31–1 mm, 0.5 mm, 1/32″, 1/64″
34–1 mm, 0.5 mm, 1/10″, 1/50″
35–1 mm, 0.5 mm on both sides
35E—1 mm, 0.5 mm on both sides, plus mm on both ends on one side
36—1/32″ and 1 mm on one side; 1/64″ and 1 mm on other side
37–1 mm, 0.5 mm
37E—1 mm, 0.5 mm on both sides, plus mm on both ends on one side, Single row inch figure
E/M—edge 1: 1/10″, edge 2: 1/100″, edge 3: 1.0 mm, edge 4: 0.5 mm
3R—1/64″, 1/50″, 1/32″, 1/10″
4R—1/64″, 1/32″, 1/16″, 1/8″
5R—1/100″, 1/64″, 1/32″, 1/10″
6R—1/32″, 1/64″, 1/10″, 1/100″
7R—1/100″,1/64″, 1/32″, 1/16″
9R—1/16″, 1/32″, 1/64″
10R—1/32″, 1/64″ (quick-reading)
10R/D—1/64″, 1/32″, Decimal Equivalency Table Graduation
12R—1/100″, 1/64″, 1/50″, 1/32″
16R—1/100″, 1/64″, 1/50″, 1/32″
Suffix key:
R = Rapid Read (32nd & 64th graduations marked with number values)
E = End Graduations (Graduations appear on end edge/edges)
ME = Metric/English (Metric units in preferred position)
E/M = English/Metric (English units in preferred position)
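Because the style designations above are simply a fixed mapping from code to the set of scales marked on the instrument, they lend themselves to a lookup table. A small sketch (covering only a few of the codes listed, transcribed as plain strings):

# Partial lookup of US graduation style codes -> scales marked on the rule.
GRADUATION_STYLES = {
    "30": ["1 mm", "0.5 mm"],
    "31": ["1 mm", "0.5 mm", '1/32"', '1/64"'],
    "4R": ['1/64"', '1/32"', '1/16"', '1/8"'],
    "5R": ['1/100"', '1/64"', '1/32"', '1/10"'],
}

def scales_for(code):
    """Return the scales marked on an instrument with the given style code."""
    return GRADUATION_STYLES.get(code.upper(), [])

print(scales_for("4r"))   # ['1/64"', '1/32"', '1/16"', '1/8"']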
See also
Level staff
Monochord
Volumetric flask
References
Measuring instruments
Signage | Graduation (scale) | [
"Technology",
"Engineering"
] | 1,075 | [
"Measuring instruments"
] |
13,480,873 | https://en.wikipedia.org/wiki/Digital%20prototyping | Digital Prototyping gives conceptual design, engineering, manufacturing, and sales and marketing departments the ability to virtually explore a complete product before it's built. Industrial designers, manufacturers, and engineers use Digital Prototyping to design, iterate, optimize, validate, and visualize their products digitally throughout the product development process. Innovative digital prototypes can be created via CAutoD through intelligent and near-optimal iterations, meeting multiple design objectives (such as maximised output, energy efficiency, highest speed and cost-effectiveness), identifying multiple figures of merit, and reducing development gearing and time-to-market. Marketers also use Digital Prototyping to create photorealistic renderings and animations of products prior to manufacturing. Companies often adopt Digital Prototyping with the goal of improving communication between product development stakeholders, getting products to market faster, and facilitating product innovation.
Digital Prototyping goes beyond simply creating product designs in 3D. It gives product development teams a way to assess the operation of moving parts, to determine whether or not the product will fail, and see how the various product components interact with subsystems—either pneumatic or electric. By simulating and validating the real-world performance of a product design digitally, manufacturers often can reduce the number of physical prototypes they need to create before a product can be manufactured, reducing the cost and time needed for physical prototyping. Many companies use Digital Prototyping in place of, or as a complement to, physical prototyping.
Digital Prototyping changes the traditional product development cycle from design>build>test>fix to design>analyze>test>build. Instead of needing to build multiple physical prototypes and then testing them to see if they'll work, companies can conduct testing digitally throughout the process by using Digital Prototyping, reducing the number of physical prototypes needed to validate the design. Studies show that by using Digital Prototyping to catch design problems up front, manufacturers experience fewer change orders downstream. Because the geometry in digital prototypes is highly accurate, companies can check interferences to avoid assembly issues that generate change orders in the testing and manufacturing phases of development. Companies can also perform simulations in early stages of the product development cycle, so they avoid failure modes during testing or manufacturing phases. With a Digital Prototyping approach, companies can digitally test a broader range of their product's performance. They can also test design iterations quickly to assess whether they're over- or under-designing components.
Research from the Aberdeen Group shows that manufacturers that use Digital Prototyping build half the number of physical prototypes as the average manufacturer, get to market 58 days faster than average, and experience 48 percent lower prototyping costs.
History of Digital Prototyping
The concept of Digital Prototyping has been around for over a decade, particularly since software companies such as Autodesk, PTC, Siemens PLM (formerly UGS), and Dassault began offering computer-aided design (CAD) software capable of creating accurate 3D models.
It may even be argued that the product lifecycle management (PLM) approach was the harbinger of Digital Prototyping. PLM is an integrated, information-driven approach to a product's lifecycle, from development to disposal. A major aspect of PLM is coordinating and managing product data among all software, suppliers, and team members involved in the product's lifecycle. Companies use a collection of software tools and methods to integrate people, data, and processes to support singular steps in the product's lifecycle or to manage the product's lifecycle from beginning to end. PLM often includes product visualization to facilitate collaboration and understanding among the internal and external teams that participate in some aspect of a product's lifecycle.
While the concept of Digital Prototyping has been a longstanding goal for manufacturing companies for some time, it's only recently that Digital Prototyping has become a reality for small-to-midsize manufacturers that cannot afford to implement complex and expensive PLM solutions.
Digital Prototyping and PLM
Large manufacturing companies rely on PLM to link otherwise unconnected, siloed activities, such as concept development, design, engineering, manufacturing, sales, and marketing. PLM is a fully integrated approach to product development that requires investments in application software, implementation, and integration with enterprise resource planning (ERP) systems, as well as end-user training and a sophisticated IT staff to manage the technology. PLM solutions are highly customized and complex to implement, often requiring a complete replacement of existing technology. Because of the high expense and IT expertise required to purchase, deploy, and run a PLM solution, many small-to-midsized manufacturers cannot implement PLM.
Digital Prototyping is a viable alternative to PLM for these small-to-midsized manufacturers. Like PLM, Digital Prototyping seeks to link otherwise unconnected, siloed activities, such as concept development, design, engineering, manufacturing, sales, and marketing. However, unlike PLM, Digital Prototyping does not support the entire product development process from conception to disposal, but rather focuses on the design-to-manufacture portion of the process. The realm of Digital Prototyping ends when the digital product and the engineering bill of materials are complete. Digital Prototyping aims to resolve many of the same issues as PLM without involving a highly customized, all-encompassing software deployment. With Digital Prototyping, a company may choose to address one need at a time, making the approach more pervasive as its business grows. Other differences between Digital Prototyping and PLM include:
Digital Prototyping involves fewer participants than PLM.
Digital Prototyping has a less complex process for collecting, managing, and sharing data.
Manufacturers can keep product development activities separate from operations management with Digital Prototyping.
Digital Prototyping solutions don't need to be integrated with ERP (but can be), customer relationship management (CRM), and project and portfolio management (PPM) software.
Digital Prototyping Workflow
A Digital Prototyping workflow involves using a single digital model throughout the design process to bridge the gaps that typically exist between workgroups such as industrial design, engineering, manufacturing, sales, and marketing. Product development can be broken into the following general phases at most manufacturing companies:
Conceptual Design
Engineering
Manufacturing
Customer Involvement
Marketing Communications
Conceptual Design
The conceptual design phase involves taking customer input or market requirements and data to create a product design. In a Digital Prototyping workflow, designers work digitally, from the very first sketch, throughout the conceptual design phase. They capture their designs digitally, and then share that data with the engineering team using a common file format. The industrial design data is then incorporated into the digital prototype to ensure technical feasibility.
In a Digital Prototyping workflow, designers and their teams review digital design data via high-quality digital imagery or renderings to make informed product design decisions. Designers may create and visualize several iterations of design, changing things like materials or color schemes, before a concept is finalized.
Engineering
During the engineering phase of the Digital Prototyping workflow, engineers create the product's 3D model (the digital prototype), integrating design data developed during the conceptual design phase. Teams also add electrical systems design data to the digital prototype while it's being developed, and evaluate how different systems interact. At this stage of the workflow, all data related to the product's development is fully integrated into the digital prototype. Working with mechanical, electrical, and industrial design data, companies engineer every last product detail in the engineering phase of the workflow. At this point, the digital prototype is a fully realistic digital model of the complete product.
Engineers test and validate the digital prototype throughout their design process to make the best possible design decisions and avoid costly mistakes. Using the digital prototype, engineers can:
Perform integrated calculations, and stress, deflection, and motion simulations to validate designs
Test how moving parts will work and interact
Evaluate different solutions to motion problems
Test how the design functions under real-world constraints
Conduct stress analysis to analyze material selection and displacement
Verify the strength of a part
By incorporating integrated calculations, stress, deflection, and motion simulations into the Digital Prototyping workflow, companies can speed development cycles by minimizing physical prototyping phases. By integrating a digital prototype of a partially or fully automated vehicle and its sensor suite into a dynamic co-simulation of traffic flow and vehicle dynamics, a novel toolchain methodology based on virtual testing becomes available for the development of automated driving functions in the automotive industry.
Also during the engineering phase of the Digital Prototyping workflow, engineers create documentation required by the production team.
Manufacturing
In a Digital Prototyping workflow, manufacturing teams are involved early in the design process. This input helps engineers and manufacturing experts work together on the digital prototype throughout the design process to ensure that the product can be produced cost effectively. Manufacturing teams can see the product exactly as it's intended, and provide input on manufacturability. Companies can perform molding simulations on digital prototypes for plastic part and injection molds to test the manufacturability of their designs, identifying potential manufacturing defects before they cut mold tooling.
Digital Prototyping also enables product teams to share detailed assembly instructions digitally with manufacturing teams. While paper assembly drawings can be confusing, 3D visualizations of digital prototypes are unambiguous. This early and clear collaboration between manufacturing and engineering teams helps minimize manufacturing problems on the shop floor.
Finally, manufacturers can use Digital Prototyping to visualize and simulate factory-floor layouts and production lines. They can check for interferences to detect potential issues such as space constraints and equipment collisions.
Customer Involvement
Customers are involved throughout the Digital Prototyping workflow. Rather than waiting for a physical prototype to be complete, companies that use Digital Prototyping bring customers into the product development process early. They show customers realistic renderings and animations of the product's digital prototype so they'll know what the product looks like and how it will function. This early customer involvement helps companies get sign-off up front, so they don't waste time designing, engineering, and manufacturing a product that doesn't fulfill the customer's expectations.
Marketing
Using 3D CAD data from the digital prototype, companies can create realistic visualizations, renderings, and animations to market products in print, on the web, in catalogues, or in television commercials. Without needing to produce expensive physical prototypes and conduct photo shoots, companies can create virtual photography and cinematography nearly indistinguishable from reality. One aspect of this is creating the illumination environment for the subject, an area of new development.
Realistic visualizations not only help marketing communications, but the sales process as well. Companies can respond to requests for proposals and bid on projects without building physical prototypes, using visualizations to show the potential customer what the end product will be like. In addition, visualizations can help companies bid more accurately by making it more likely that everyone has the same expectations about the end product. Companies can also use visualizations to facilitate the review process once they've secured the business. Reviewers can interact with digital prototypes in realistic environments, allowing for the validation of design decisions early in the product development process.
Connecting Data and Teams
To support a Digital Prototyping workflow, companies use data management tools to coordinate all teams at every stage in the workflow, streamline design revisions and automate release processes for digital prototypes, and manage engineering bills of materials. These data management tools connect all workgroups to critical Digital Prototyping data.
Digital Prototyping and Sustainability
Companies increasingly use Digital Prototyping to understand sustainability factors in new product designs, and to help meet customer requirements for sustainable products and processes. They minimize material use by assessing multiple design scenarios to determine the optimal amount and type of material required to meet product specifications. In addition, by reducing the number of physical prototypes required, manufacturers can trim down their material waste.
Digital Prototyping can also help companies reduce the carbon footprint of their products. For example, WinWinD, a company that creates innovative wind turbines, uses Digital Prototyping to optimize the energy production of wind-power turbines for varying wind conditions. Furthermore, the rich product data supplied by Digital Prototyping can help companies demonstrate conformance with the growing number of product-related environmental regulations and voluntary sustainability standards.
References
Prototyping
Prototypes | Digital prototyping | [
"Technology"
] | 2,553 | [
"Industrial computing",
"Digital manufacturing"
] |
13,481,097 | https://en.wikipedia.org/wiki/Roasting%20jack | A roasting jack is a machine which rotates meat roasting on a spit. It can also be called a spit jack, a spit engine or a turnspit, although this name can also refer to a human turning the spit, or a turnspit dog. Cooking meat on a spit dates back at least to the 1st century BC, but at first spits were turned by human power. In Britain, starting in the Tudor period, dog-powered turnspits were used; the dog ran in a treadmill linked to the spit by belts and pulleys. Other forms of roasting jacks included the steam jack, driven by steam, the smoke jack, driven by hot gas rising from the fire, and the bottle jack or clock jack, driven by weights or springs.
Weight or clock jacks
A great majority of the jacks used prior to the 19th century were powered by a descending weight, often made of stone or iron, sometimes of lead. Although most commonly referred to as spit engines or jacks, these were also termed weight or clock jacks (clock jacks was the more common term in North America).
Earlier jacks of this type had a train of two arbors (spindles), later ones had a more efficient three arbor train. In the case of British examples, almost without exception, the governor or flywheel was set above the engine (as opposed to being located within the frame) and to one side; to the right for two train and to the left for three train engines.
European jacks are characterised by a flywheel set centrally and often within the frame; commonly the highest part of the frame has a bell-like arch that the shaft for the flywheel passes through.
Steam jack
A steam-powered roasting jack was first described by the Ottoman polymath and engineer Taqi al-Din in his Al-Turuq al-samiyya fi al-alat al-ruhaniyya (The Sublime Methods of Spiritual Machines), in 1551. A steam-driven jack was patented by the American clockmaker John Bailey II in 1792, and steam jacks were later commercially available in the United States.
Smoke jack
Leonardo da Vinci sketched a smoke-jack in the form of a turbine with four vanes. Smoke-jacks were also illustrated in Vittorio Zonca's book of machines (1607), and in John Wilkins's Mathematical Magick. The 1826 A Treatise of Mechanics describes a smoke-jack:
Smoke-jack is an engine used for the same purpose as the common jack; and is so called from its being moved by means of the smoke, or rarefied air, ascending the chimney, and striking against the sails of the horizontal wheel AB (plate XXI. fig 1.), which being inclined to the horizon, is moved about the axis of the wheel, together with the pinion C, which carries the wheels D and E; and E carries the chain F, which turns the spit. The wheel AB should be placed in the narrow part of the chimney, where the motion of the smoke is swiftest, and where also the greatest part of it must strike upon the sails.—The force of this machine depends on the draught of the chimney, and the strength of the fire.
Smoke-jacks are sometimes moved by means of spiral flyers coiling about a vertical axle; and at other times by a vertical wheel with sails like the float-boards of a mill: but the above is the more customary construction.
Bottle-jack
A bottle-jack was a clockwork motor in a brass shell, shaped like a bottle; it was introduced in the late 18th century and in many cases replaced the earlier and much simpler dangle spit.
See also
Rotisserie (modern style)
List of cooking appliances
History of steam engine
References
Cooking appliances
Food preparation utensils
Domestic implements
Arab inventions
Meat
Fire
Turkish inventions | Roasting jack | [
"Chemistry"
] | 792 | [
"Combustion",
"Fire"
] |
13,481,218 | https://en.wikipedia.org/wiki/Open%20platform | In computing, an open platform describes a software system which is based on open standards, such as published and fully documented external application programming interfaces (API) that allow using the software to function in other ways than the original programmer intended, without requiring modification of the source code. Using these interfaces, a third party could integrate with the platform to add functionality. The opposite is a closed platform.
An open platform does not mean it is open source; however, most open platforms have multiple implementations of APIs. For example, Common Gateway Interface (CGI) is implemented by open source web servers as well as Microsoft Internet Information Server (IIS). An open platform can consist of software components or modules that are either proprietary or open source or both. It can also exist as a part of a closed platform, such as CGI, which is an open platform, while many servers that implement CGI also have other proprietary parts that are not part of the open platform.
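CGI illustrates the point: the interface is just environment variables in and an HTTP response on standard output, so the same program runs under any server that implements the open standard, whether open source or proprietary. A minimal sketch of a CGI program in Python (runnable standalone; in production a CGI-conformant server would set the environment variables for it):

#!/usr/bin/env python3
# Minimal CGI program: runnable by any CGI-conformant web server,
# regardless of which vendor implemented the server.
import os

# A CGI response is plain text on stdout: headers, a blank line, then the body.
print("Content-Type: text/plain")
print()
print("Hello from an open platform.")
print("Method:", os.environ.get("REQUEST_METHOD", "(not set)"))
print("Query: ", os.environ.get("QUERY_STRING", "(not set)"))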
An open platform implies that the vendor allows, and perhaps supports, the ability to do this. Using an open platform a developer could add features or functionality that the platform vendor had not completed or had not conceived of. An open platform allows the developer to change existing functionality, as the specifications are publicly available open standards.
A service-oriented architecture allows applications, running as services, to be accessed in a distributed computing environment, such as between multiple systems or across the Internet. A major focus of Web services is to make functional building blocks accessible over standard Internet protocols that are independent from platforms and programming languages. An open SOA platform would allow anyone to access and interact with these building blocks.
A 2008 Harvard Business School working paper, titled "Opening Platforms: How, When and Why?", differentiated a platform's openness in four aspects and gave example platforms.
See also
Application programming interface
Open standard
Open architecture
Service-oriented architecture
References
Application programming interfaces
Computing platforms | Open platform | [
"Technology"
] | 385 | [
"Computing stubs",
"Computing platforms"
] |
13,481,618 | https://en.wikipedia.org/wiki/Tubulin%20beta-3%20chain | Tubulin beta-3 chain, Class III β-tubulin, βIII-tubulin (β3-tubulin) or β-tubulin III, is a microtubule element of the tubulin family found almost exclusively in neurons, and in testis cells. In humans, it is encoded by the TUBB3 gene.
It is possible to use monoclonal antibodies and immunohistochemistry to identify neurons in samples of brain tissue, separating neurons from glial cells, which do not express tubulin beta-3 chain.
Class III β-tubulin is one of the seven β-tubulin isotypes identified in the human genome, predominantly in neurons and the testis. It is conditionally expressed in a number of other tissues after exposure to a toxic microenvironment featured by hypoxia and poor nutrient supply. Posttranslational changes including phosphorylation and glycosylation are required for functional activity.
Class III β-tubulin's role in neural development has warranted its use as an early biomarker of neural cell differentiation from multi potent progenitors. TUBB3 inactivation impairs neural progenitor proliferation. Rescue experiments demonstrate the non-interchangeability of TUBB3 with other classes of β-tubulins which cannot restore the phenotype resulting from TUBB3 inactivation. Congenital neurologic syndromes associated with TUBB3 missense mutations demonstrate the critical importance of class III β-tubulin for normal neural development.
Gene
The human TUBB3 gene is located on chromosome 16q24.3, and consists of 4 exons that transcribe a protein of 450 aa. A shorter isoform of 378 aa derived from alternative splicing of exon 1 is devoid of part of the N-terminus and may be responsible for mitochondrial expression. Like other β-tubulin isotypes, βIII-tubulin has a GTPase domain which plays an essential role in regulating microtubule dynamics. Differences between Class I (the most commonly represented and constitutively expressed isotype) and class III β-tubulin are limited to only 13 aa within region 1-429 aa, while all amino acids in region 430-450 aa are divergent. These variations in primary structure affect the paclitaxel (a mimic of Nur77) binding domain on βIII-tubulin and may account for the ability of this isotype to confer resistance to Nur77-initiated apoptosis.
Function
Cysteine residues in class III β-tubulin are actively involved in regulating ligand interactions and microtubule formation. Proteomic analysis has revealed that many factors bound to these cysteine residues are involved in the oxidative stress and glucose deprivation response. This is particularly interesting in light of the fact that class III β-tubulin first appears in the phylogenetic tree when life emerged from the seas and cells were exposed to atmospheric oxygen. In structural terms, constitutive Class I (TUBB) and Class IVb (TUBB2C) β-tubulins contain a cysteine at position 239, while βIII-tubulin has a cysteine at position 124. Position 239 can be readily oxidized while position 124 is relatively resistant to oxidation. Thus, a relative abundance of βIII-tubulin in situations of oxidative stress could provide a protective benefit.
Interactions
The interactome of class III β-tubulin comprises the GTPase GBP1 (guanylate binding protein 1) and a panel of an additional 19 kinases having prosurvival activity including PIM1 (Proviral Integration Site 1) and NEK6 (NIMA-related kinase 6). Incorporation of these kinases into the cytoskeleton via the GBP-1/class III β-tubulin interaction protects kinases from rapid degradation. Other pro-survival factors interacting with class III β-tubulin enabling cellular adaptation to oxidative stress include the molecular chaperone HSP70/GRP75, vimentin, FMO4 (dimethylaniline monooxygenase 4), and GSTM4 (glutathione transferase M4).
Regulation
The expression of Class III β-tubulin is regulated at both the transcriptional and translational levels. In neural tissue, constitutive expression is driven by Sox4 and Sox11. In non-neural tissues, regulation is dependent on an E-box site in the 3' flanking region at +168 nucleotides. This site binds basic helix-loop-helix (bHLH) hypoxia induced transcription factors Hif-1α and Hif-2α and is epigenetically modified in cancer cells with constitutive TUBB3 expression. Translational regulation of TUBB3 occurs in the 3' flanking region with the interaction of the miR-200c family of micro-RNA. MiR-200c is in turn modulated by the protein HuR (encoded by ELAVL1). When HuR is predominantly in the nucleus, a phenomenon typically occurring in low stage carcinomas, miR-200c suppresses class III β-tubulin translation. By contrast, cytoplasmic HuR and miR-200c enhance class III β-tubulin translation by facilitating the entry of the mRNA into the ribosome.
Role in cancer
In oncology, class III β-tubulin has been investigated as both a prognostic biomarker and an indicator of resistance to taxanes and other compounds. The majority of reports implicate class III β-tubulin as a biomarker of poor outcome, although there are also data in clear cell carcinoma, melanoma, and breast cancer showing favorable prognosis. Class III β-tubulin is an integral component of a pro-survival, cascading molecular pathway which renders cancer cells resistant to apoptosis and enhances their ability to invade local tissues and metastasize. It performs best as a prognostic biomarker when analyzed in the context of an integrated signature including upstream regulators and downstream effectors. TUBB3 mutation is associated with microlissencephaly.
Overexpression of this isotype in clinical samples correlates with tumor aggressiveness, resistance to chemotherapeutic drugs, and poor patient survival.
Pathophysiology
The β3 isotype increases tumor aggressiveness by two distinct mechanisms.
Incorporation of this isotype makes microtubule networks hypostable, allowing them to resist the cytotoxic effects of microtubule stabilizing drugs like taxanes or epothilones. Mechanistically, it was found that overexpression of β3-tubulin increases the rate of microtubule detachment from microtubule organizing centers, an activity that is suppressed by drugs such as paclitaxel.
Expression of β3-tubulin also makes cells more aggressive by altering their response to drug-induced suppression of microtubule dynamics. Dynamic microtubules are needed for the cell migration that underlies processes such as tumor metastasis and angiogenesis. The dynamics are normally suppressed by low, subtoxic concentrations of microtubule drugs that also inhibit cell migration. However, incorporating β3-tubulin into microtubules increases the concentration of drug that is needed to suppress dynamics and inhibit cell migration. Thus, tumors that express β3-tubulin are not only resistant to the cytotoxic effects of microtubule targeted drugs, but also to their ability to suppress tumor metastasis. Moreover, expression of β3-tubulin also counteracts the ability of these drugs to inhibit angiogenesis which is normally another important facet of their action.
Notes
References
Cell anatomy
Cytoskeleton
Proteins | Tubulin beta-3 chain | [
"Chemistry"
] | 1,629 | [
"Proteins",
"Biomolecules by chemical classification",
"Molecular biology"
] |
13,481,675 | https://en.wikipedia.org/wiki/Natural%20Language%20Semantics%20Markup%20Language | Natural Language Semantics Markup Language is a markup language for providing systems (like Voice Browsers) with semantic interpretations for a variety of inputs, including speech and natural language text input. Natural Language Semantics Markup Language is currently a World Wide Web Consortium Working Draft.
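The draft defines an XML format in which a recognizer returns one or more semantic interpretations for an utterance. The sketch below, in Python with only the standard library, parses an NLSML-style result; the element names (result, interpretation, instance, input) follow the W3C draft as commonly carried into MRCP implementations, but the grammar URI, slot names, confidence value, and utterance are invented for illustration.

```python
# Parse an NLSML-style semantic result with Python's standard library.
# Element names follow the W3C draft; all values below are illustrative.
import xml.etree.ElementTree as ET

nlsml = """\
<result grammar="http://example.com/flight.grxml">
  <interpretation confidence="0.87">
    <instance>
      <origin>Boston</origin>
      <destination>Denver</destination>
    </instance>
    <input mode="speech">fly from boston to denver</input>
  </interpretation>
</result>"""

root = ET.fromstring(nlsml)
for interp in root.findall("interpretation"):
    print("confidence:", interp.get("confidence"))
    print("utterance: ", interp.findtext("input"))
    for slot in interp.find("instance"):      # the filled semantic slots
        print(f"  {slot.tag} = {slot.text}")
```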
See also
VoiceXML
SRGS
Semantic Interpretation for Speech Recognition
External links
SRGS Specification (W3C Recommendation)
Natural Language Semantics Markup Language for the Speech Interface Framework (W3C Working Draft)
W3C's Voice Browser Working Group
World Wide Web Consortium standards
XML-based standards | Natural Language Semantics Markup Language | [
"Technology"
] | 114 | [
"Computer standards",
"World Wide Web stubs",
"Computing stubs",
"XML-based standards"
] |
13,482,338 | https://en.wikipedia.org/wiki/Reflecting%20instrument | Reflecting instruments are those that use mirrors to enhance their ability to make measurements. In particular, the use of mirrors permits one to observe two objects simultaneously while measuring the angular distance between the objects. While reflecting instruments are used in many professions, they are primarily associated with celestial navigation as the need to solve navigation problems, in particular the problem of the longitude, was the primary motivation in their development.
Objectives of the instruments
The purpose of reflecting instruments is to allow an observer to measure the altitude of a celestial object or the angular distance between two objects. The driving force behind the developments discussed here was the solution to the problem of finding one's longitude at sea. The solution to this problem was seen to require an accurate means of measuring angles and the accuracy was seen to rely on the observer's ability to measure this angle by simultaneously observing two objects.
The deficiency of prior instruments was well known. Requiring the observer to observe two objects with two divergent lines of sight increased the likelihood of an error. Those that considered the problem realized that the use of specula (mirrors in modern parlance) could permit two objects to be observed in a single view. What followed is a series of inventions and improvements that refined the instrument to the point that its accuracy exceeded that which was required for determining longitude. Any further improvements required a completely new technology.
Early reflecting instruments
Some of the early reflecting instruments were proposed by scientists such as Robert Hooke and Isaac Newton. These were little used or may not have been built or tested extensively. The van Breen instrument was the exception, in that it was used by the Dutch. However, it had little influence outside of the Netherlands.
Joost van Breen's reflecting cross-staff
Invented in 1660 by the Dutchman Joost van Breen, the spiegelboog (mirror-bow) was a reflecting cross staff. This instrument appears to have been used for approximately 100 years, mainly in the Zeeland Chamber of the VOC (the Dutch East India Company).
Robert Hooke's single-reflecting instrument
Hooke's instrument was a single-reflecting instrument. It used a single mirror to reflect the image of an astronomical object to the observer's eye. This instrument was first described in 1666 and a working model was presented by Hooke at a meeting of the Royal Society some time later.
The device consisted of three primary components, an index arm, a radial arm and a graduated chord. The three were arranged in a triangle as in the image on the right. A telescopic sight was mounted on the index arm. At the point of rotation of the radial arm, a single mirror was mounted. This point of rotation allowed the angle between the index arm and the radial arm to be changed. The graduated chord was connected to the opposite end of the radial arm and the chord was permitted to rotate about the end. The chord was held against the distant end of the index arm and slid against it. The graduations on the chord were uniform and, by using it to measure the distance between the ends of the index arm and the radial arm, the angle between those arms could be determined. A table of chords was used to convert a measurement of distance to a measurement of angle. The use of the mirror resulted in the measured angle being twice the angle included by the index and the radius arm.
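The chord-to-angle conversion described above is elementary trigonometry: a chord of length c in a circle of radius R subtends an angle of 2·arcsin(c/2R). A small sketch of the lookup follows; the arm length and chord reading are made-up values, and the final doubling reflects the text's note that the mirror doubles the measured angle.

```python
# Convert a graduated-chord reading to an angle, as a table of chords would.
# Arm length and chord reading are illustrative values in the same units.
import math

def chord_to_angle_deg(chord: float, radius: float) -> float:
    """Angle (in degrees) subtended by a chord in a circle of given radius."""
    return math.degrees(2.0 * math.asin(chord / (2.0 * radius)))

arm = 24.0      # length of the index and radial arms (the circle's radius)
reading = 6.2   # distance measured along the graduated chord

included = chord_to_angle_deg(reading, arm)  # angle between the two arms
observed = 2.0 * included                    # the mirror doubles the measured angle
print(f"included angle {included:.2f} deg -> observed angle {observed:.2f} deg")
```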
The mirror on the radial arm was small enough that the observer could see the reflection of an object in half the telescope's view while seeing straight ahead in the other half. This allowed the observer to see both objects at once. Aligning the two objects together in the telescope's view resulted in the angular distance between them being represented on the graduated chord.
While Hooke's instrument was novel and attracted some attention at the time, there is no evidence that it was subjected to any tests at sea. The instrument was little used and did not have any significant effect on astronomy or navigation.
Halley's reflecting instrument
In 1692, Edmond Halley presented the design of a reflecting instrument to the Royal Society.
This is an interesting instrument, combining the functionality of a radio latino with a double telescope. The telescope (AB in the adjacent image) has an eyepiece at one end and a mirror (D) partway along its length, with one objective lens at the far end (B). The mirror only obstructs half the field (either left or right) and permits the objective to be seen on the other. Reflected in the mirror is the image from the second objective lens (C). This permits the observer to see both images, one straight through and one reflected, side by side. It is essential that the focal lengths of the two objective lenses be the same and that the distances from the mirror to either lens be identical. If this condition is not met, the two images cannot be brought to a common focus.
The mirror is mounted on the staff (DF) of the radio latino portion of the instrument and rotates with it. The angle this side of the radio latino's rhombus makes to the telescope can be set by adjusting the rhombus' diagonal length. In order to facilitate this and allow for fine adjustment of the angle, a screw (EC) is mounted so as to allow the observer to change the distance between the two vertexes (E and C).
The observer sights the horizon with the direct lens' view and sights a celestial object in the mirror. Turning the screw to bring the two images directly adjacent sets the instrument. The angle is determined by taking the length of the screw between E and C and converting this to an angle in a table of chords.
Halley specified that the telescope tube be rectangular in cross section. This makes construction easy, but is not a requirement as other cross section shapes can be accommodated. The four sides of the radio latino portion (CD, DE, EF, FC) must be equal in length in order for the angle between the telescope and the objective lens side (ADC) to be precisely twice the angle between the telescope and the mirror (ADF) (or in other words – to enforce the angle of incidence being equal to the angle of reflection). Otherwise, instrument collimation will be compromised and the resulting measurements would be in error.
The celestial object's elevation angle could have been determined by reading from graduations on the staff at the slider, however, that's not how Halley designed the instrument. This may suggest that the overall design of the instrument was coincidentally like a radio latino and that Halley may not have been familiar with that instrument.
There is no knowledge of whether this instrument was ever tested at sea.
Newton's reflecting quadrant
Newton's reflecting quadrant was similar in many respects to Hadley's first reflecting quadrant that followed it.
Newton had communicated the design to Edmond Halley around 1699, but Halley did not do anything with the document and it remained in his papers, only to be discovered after his death. Halley did, however, discuss Newton's design with members of the Royal Society when Hadley presented his reflecting quadrant in 1731, noting that Hadley's design was quite similar to the earlier Newtonian instrument.
As a result of this inadvertent secrecy, Newton's invention played little role in the development of reflecting instruments.
The octant
What is remarkable about the octant is the number of persons who independently invented the device in a short period of time. John Hadley and Thomas Godfrey both get credit for inventing the octant. They independently developed the same instrument around 1731. They were not the only ones, however.
In Hadley's case, two instruments were designed. The first was an instrument very similar to Newton's reflecting quadrant. The second had essentially the same form as the modern sextant. Few of the first design were constructed, while the second became the standard instrument from which the sextant derived and, along with the sextant, displaced all prior navigation instruments used for celestial navigation.
Caleb Smith, an English insurance broker with a strong interest in astronomy, had created an octant in 1734. He called it an Astroscope or Sea-Quadrant. He used a fixed prism in addition to an index mirror to provide reflective elements. Prisms provide advantages over mirrors in an era when polished speculum metal mirrors were inferior and both the silvering of a mirror and the production of glass with flat, parallel surfaces was difficult. However, the other design elements of Smith's instrument made it inferior to Hadley's octant and it was not used significantly.
Jean-Paul Fouchy, a mathematics professor and astronomer in France, invented an octant in 1732. His was essentially the same as Hadley's. Fouchy did not know of the developments in England at the time, since communications between the two countries' instrument makers were limited and the publications of the Royal Society, particularly the Philosophical Transactions, were not being distributed in France. Fouchy's octant was overshadowed by Hadley's.
The sextant
The main article, Sextant, covers the use of the instrument in navigation. This article concentrates on the history and development of the instrument.
The origin of the sextant is straightforward and not in dispute. Admiral John Campbell, having used Hadley's octant in sea trials of the method of lunar distances, found that it was wanting. The 90° angle subtended by the arc of the instrument was insufficient to measure some of the angular distances required for the method. He suggested that the angle be increased to 120°, yielding the sextant. John Bird made the first such sextant in 1757.
With the development of the sextant, the octant became something of a second class instrument. The octant, while occasionally constructed entirely of brass, remained primarily a wooden-framed instrument. Most of the developments in advanced materials and construction techniques were reserved for the sextant.
There are examples of sextants made with wood; however, most are made from brass. In order to ensure the frame was stiff, instrument makers used thicker frames. This had the drawback of making the instrument heavier, which could influence accuracy due to hand shake as the navigator worked against its weight. In order to avoid this problem, the frames were modified. Edward Troughton patented the double-framed sextant in 1788. This used two frames held in parallel with spacers; the two frames were about a centimetre apart. This significantly increased the stiffness of the frame. An earlier version had a second frame that only covered the upper part of the instrument, securing the mirrors and telescope. Later versions used two full frames. Since the spacers looked like little pillars, these were also called pillar sextants.
Troughton also experimented with alternative materials. The scales were plated with silver, gold or platinum. Gold and platinum both minimized corrosion problems. The platinum-plated instruments were expensive, due to the scarcity of the metal, though less expensive than gold. Troughton knew William Hyde Wollaston through the Royal Society and this gave him access to the precious metal. Instruments from Troughton's company that used platinum can be easily identified by the word Platina engraved on the frame. These instruments remain highly valued as collector's items and are as accurate today as when they were constructed.
As the developments in dividing engines progressed, the sextant was more accurate and could be made smaller. In order to permit easy reading of the vernier, a small magnifying lens was added. In addition, to reduce glare on the frame, some had a diffuser surrounding the magnifier to soften the light. As accuracy increased, the circular arc vernier was replaced with a drum vernier.
Frame designs were modified over time to create a frame that would not be adversely affected by temperature changes. These frame patterns became standardized and one can see the same general shape in many instruments from many different manufacturers.
In order to control costs, modern sextants are now available in precision-made plastic. These are light, affordable and of high quality.
Types of sextants
While most people think of navigation when they hear the term sextant, the instrument has been used in other professions.
Navigator's sextant: The common type of instrument most people think of when they hear the term sextant.
Sounding sextants: These are sextants that were constructed for use horizontally rather than vertically and were developed for use in hydrographic surveys.
Surveyor's sextants: These were constructed for use exclusively on land for horizontal angular measurements. Instead of a handle on the frame, they had a socket to allow the attachment of a surveyor's Jacob's staff.
Box or pocket sextants: These are small sextants entirely contained within a metal case. First developed by Edward Troughton, they are usually all brass with most of the mechanical components inside the case. The telescope extends from an opening in the side. The index and other parts are completely covered when the case cover is slipped on. Popular with surveyors for their small size, their accuracy was enabled by improvements in the dividing engines used to graduate the arcs. The arcs are so small that magnifiers are attached to allow them to be read.
In addition to these types, there are terms used for various sextants.
A pillar sextant can be either:
A double-frame sextant as patented by Edward Troughton in 1788.
A surveyor's sextant with a socket for a surveyor's staff (the pillar).
The former is the most common use of the term.
Beyond the sextant
Quintant and others
Several makers offered instruments with sizes other than one-eighth or one-sixth of a circle. One of the most common was the quintant or fifth of a circle (72° arc reading to 144°). Other sizes were also available, but the odd sizes never became common. Many instruments are found with scales reading to, for example, 135°, but they are simply referred to as sextants. Similarly, there are 100° octants, but these are not separated as unique types of instruments.
There was interest in much larger instruments for special purposes. In particular a number of full circle instruments were made, categorized as reflecting circles and repeating circles.
Reflecting circles
The reflecting circle was invented by the German geometer and astronomer Tobias Mayer in 1752, with details published in 1767. His development preceded the sextant and was motivated by the need to create a superior surveying instrument.
The reflecting circle is a complete circular instrument graduated to 720° (to measure distances between heavenly bodies, there is no need to read an angle greater than 180°, since the minimum distance will always be less than 180°). Mayer presented a detailed description of this instrument to the Board of Longitude and John Bird used the information to construct one sixteen inches in diameter for evaluation by the Royal Navy. This instrument was one of those used by Admiral John Campbell during his evaluation of the lunar distance method. It differed in that it was graduated to 360° and was so heavy that it was fitted with a support that attached to a belt. It was not considered better than the Hadley octant and was less convenient to use. As a result, Campbell recommended the construction of the sextant.
Jean-Charles de Borda further developed the reflecting circle. He modified the position of the telescopic sight in such a way that the mirror could be used to receive an image from either side relative to the telescope. This eliminated the need to ascertain that the mirrors were precisely parallel when reading zero. This simplified the use of the instrument. Further refinements were performed with the help of Etienne Lenoir. The two of them refined the instrument to its definitive form in 1777. This instrument was so distinctive it was given the name Borda circle or repeating circle.
Borda and Lenoir developed the instrument for geodetic surveying. Since it was not used for the celestial measures, it did not use double reflection and substituted two telescope sights. As such, it was not a reflecting instrument. It was notable as being the equal of the great theodolite created by the renowned instrument maker, Jesse Ramsden.
Josef de Mendoza y Ríos redesigned Borda's reflecting circle (London, 1801). The goal was to use it together with his lunar tables published by the Royal Society (London, 1805). He made a design with two concentric circles and a vernier scale and recommended averaging three sequential readings to reduce the error. Borda's system was not based on a circle of 360° but on one of 400 grads (Borda spent years calculating his tables with a circle divided into 400 grads). Mendoza's lunar tables were used through almost the entire nineteenth century (see Lunar distance (navigation)).
Edward Troughton also modified the reflecting circle. He created a design with three index arms and verniers. This permitted three simultaneous readings to average out the error.
As a navigation instrument, the reflecting circle was more popular with the French navy than with the British.
Bris sextant
The Bris sextant is not a true sextant, but it is a true reflecting instrument based on the principle of double reflection and subject to the same rules and errors as common octants and sextants. Unlike those instruments, which can measure any angle within their range, the Bris sextant is a fixed-angle instrument capable of accurately measuring only a few specific angles. It is particularly suited to determining the altitude of the sun or moon.
Surveying sector
Francis Ronalds invented an instrument for recording angles in 1829 by modifying the octant. A disadvantage of reflecting instruments in surveying applications is that optics dictate that the mirror and index arm rotate through half the angular separation of the two objects. The angle thus needs to be read, noted and a protractor employed to draw the angle on a plan. Ronalds' idea was to configure the index arm to rotate through twice the angle of the mirror, so that the arm could then be used to draw a line at the correct angle directly onto the drawing. He used a sector as the basis of his instrument and placed the horizon glass at one tip and the index mirror near the hinge connecting the two rulers. The two revolving elements were linked mechanically and the barrel supporting the mirror was twice the diameter of the hinge to give the required angular ratio.
References
External links
National Maritime Museum Portrait of a merchant navy captain holding a Caleb Smith Octant.
Astronomical instruments
Celestial navigation
Angle measuring instruments
Navigational equipment | Reflecting instrument | [
"Astronomy"
] | 3,801 | [
"Celestial navigation",
"Astronomical instruments"
] |
13,482,790 | https://en.wikipedia.org/wiki/List%20of%20psilocybin%20mushroom%20species | Psilocybin mushrooms are mushrooms which contain the hallucinogenic substances psilocybin, psilocin, baeocystin and norbaeocystin. The mushrooms are collected and grown as an entheogen and recreational drug, despite being illegal in many countries. Many psilocybin mushrooms are in the genus Psilocybe, but species across several other genera contain the drugs.
Conocybula
Conocybe siligineoides R. Heim
Conocybula cyanopus (G.F. Atk.) T. Bau & H. B. Song
Conocybula smithii (Watling) T. Bau & H. B. Song (Galera cyanopes Kauffman, Conocybe smithii Watling)
Galerina
Galerina steglichii Besl
Gymnopilus
Gymnopilus aeruginosus (Peck) Singer (photo)
Gymnopilus braendlei (Peck) Hesler
Gymnopilus cyanopalmicola Guzm.-Dáv
Gymnopilus dilepis (Berk. & Broome) Singer
Gymnopilus dunensis H. Bashir, Jabeen & Khalid
Gymnopilus intermedius (Singer) Singer
Gymnopilus lateritius (Pat.) Murrill
Gymnopilus luteofolius (Peck) Singer (photo)
Gymnopilus luteoviridis Thiers (photo)
Gymnopilus luteus (Peck) Hesler (photo)
Gymnopilus palmicola Murrill
Gymnopilus purpuratus (Cooke & Massee) Singer (photo)
Gymnopilus subpurpuratus Guzmán-Davalos & Guzmán
Gymnopilus subspectabilis Hesler
Gymnopilus validipes (Peck) Hesler
Gymnopilus viridans Murrill
Inocybe
Inocybe aeruginascens Babos
Inocybe caerulata Matheny, Bougher & G.M. Gates
Inocybe coelestium Kuyper
Inocybe corydalina
Inocybe corydalina var. corydalina Quél.
Inocybe corydalina var. erinaceomorpha (Stangl & J. Veselsky) Kuyper
Inocybe haemacta (Berk. & Cooke) Sacc.
Inocybe tricolor Kühner
Most species in this genus are poisonous.
Panaeolus
Panaeolus affinis (E. Horak) Ew. Gerhardt
Panaeolus africanus Ola'h
Panaeolus axfordii Y. Hu, S.C. Karunarathna, P.E. Mortimer & J.C. Xu
Panaeolus bisporus (Malencon and Bertault) Singer and Weeks
Panaeolus cambodginiensis (Ola'h et Heim) Singer & Weeks. (Merlin & Allen, 1993)
Panaeolus chlorocystis (Singer & R.A. Weeks) Ew. Gerhardt
Panaeolus cinctulus (Bolton) Britzelm.
Panaeolus cyanescens (Berk. & Broome) Sacc.
Panaeolus fimicola (Fr.) Gillet
Panaeolus lentisporus Ew. Gerhardt
Panaeolus microsporus Ola'h & Cailleux
Panaeolus moellerianus Singer
Panaeolus olivaceus F.H. Møller
Panaeolus rubricaulis Petch (= Panaeolus campanuloides Guzmán & K. Yokoy.)
Panaeolus tirunelveliensis (Natarajan & Raman) Ew. Gerhardt
Panaeolus tropicalis Ola'h
Panaeolus venezolanus Guzmán (= Panaeolus annulatus Natarajan & Raman)
Pluteus
Pluteus albostipitatus (Dennis) Singer
Pluteus americanus (P. Banerjee & Sundb.) Justo, E.F. Malysheva & Minnis (2014)
Pluteus brunneidiscus Murrill (1917).
Pluteus cyanopus Quél.
Pluteus glaucus Singer
Pluteus glaucotinctus E. Horak
Pluteus nigroviridis Babos
Pluteus phaeocyanopus Minnis & Sundb.
Pluteus salicinus (Pers. : Fr.) P. Kumm.
Pluteus saupei Justo & Minnis
Pluteus velutinornatus G. Stev. 1962.
Pluteus villosus (Bull.) Quél.
Psilocybe
A
Psilocybe acutipilea (Speg.) Guzmán
Psilocybe allenii Borov., Rockefeller & P.G.Werner
Psilocybe alutacea Y.S. Chang & A.K. Mills
Psilocybe angulospora Yen W. Wang & S.S. Tzean
Psilocybe antioquiensis Guzmán, Saldarriaga, Pineda, García & Velázquez
Psilocybe araucariicola P. S. Silva & Ram.-Cruz
Psilocybe atlantis Guzmán, Hanlin & C. White
Psilocybe aquamarina (Pegler) Guzmán
Psilocybe armandii Guzmán & S.H. Pollock
Psilocybe aucklandiae Guzmán, C.C. King & Bandala
Psilocybe aztecorum
Psilocybe aztecorum var. aztecorum
Psilocybe aztecorum var. bonetii (Guzmán) Guzmán
Psilocybe azurescens Stamets & Gartz
B
Psilocybe baeocystis Singer & A.H. Sm. emend. Guzmán
Psilocybe banderillensis Guzmán
Psilocybe brasiliensis Guzmán
Psilocybe brunneocystidiata Guzmán & Horak
C
Psilocybe caeruleoannulata Singer ex Guzmán
Psilocybe caerulescens
Psilocybe caerulescens var. caerulescens Murrill
Psilocybe caerulescens var. ombrophila (R. Heim) Guzmán
Psilocybe caerulipes (Peck) Sacc.
Psilocybe callosa
Psilocybe carbonaria Singer
Psilocybe chuxiongensis T.Ma & K.D.Hyde
Psilocybe collybioides Singer & A.H. Sm.
Psilocybe columbiana Guzmán
Psilocybe congolensis Guzmán, S.C. Nixon & Cortés-Pérez
Psilocybe cordispora R. Heim
Psilocybe cubensis (Earle) Singer
Psilocybe cyanescens Wakef. (non-sensu Krieglsteiner)
Psilocybe cyanofibrillosa Guzmán & Stamets
D
Psilocybe dumontii Singer ex Guzmán
E
Psilocybe egonii Guzmán & T.J. Baroni
Psilocybe eximia E. Horak & Desjardin
F
Psilocybe fagicola
Psilocybe fagicola var. fagicola
Psilocybe fagicola var. mesocystidiata Guzmán
Psilocybe farinacea Rick ex Guzmán
Psilocybe fimetaria (P.D. Orton) Watling
Psilocybe fuliginosa (Murrill) A.H. Sm.
Psilocybe furtadoana Guzmán
G
Psilocybe galindoi Guzmán
Psilocybe gallaeciae Guzmán & M.L. Castro
Psilocybe graveolens Peck
Psilocybe guatapensis Guzmán, Saldarriaga, Pineda, García & Velázquez
Psilocybe guilartensis Guzmán, Tapia & Nieves-Rivera
H
Psilocybe heimii Guzmán
Psilocybe herrerae Guzmán
Psilocybe hispanica Guzmán
Psilocybe hoogshagenii
Psilocybe hoogshagenii var. hoogshagenii (= Psilocybe caerulipes var. gastonii Singer, Psilocybe zapotecorum R. Heim s. Singer)
Psilocybe hoogshagenii var. convexa Guzmán (= Psilocybe semperviva R. Heim & Cailleux)
Psilocybe hopii Guzmán & J. Greene
I
Psilocybe inconspicua Guzmán & Horak
Psilocybe indica Sathe & J.T. Daniel
Psilocybe ingeli B. van der Merwe, A. Rockefeller & K Jacobs
Psilocybe isabelae Guzmán
J
Psilocybe jacobsii Guzmán
Psilocybe jaliscana Guzmán
K
Psilocybe kumaenorum R. Heim
L
Psilocybe laurae Guzmán
Psilocybe lazoi Singer
Psilocybe liniformans
Psilocybe liniformans var. liniformans
Psilocybe liniformans var. americana Guzmán & Stamets
M
Psilocybe mairei Singer
Psilocybe makarorae Johnst. & Buchanan
Psilocybe maluti B. van der Merwe, A. Rockefeller & K. Jacobs
Psilocybe mammillata (Murrill) A.H. Sm.
Psilocybe medullosa (Bres.) Borovička
Psilocybe meridensis Guzmán
Psilocybe meridionalis Guzmán, Ram.-Guill. & Guzm.-Dáv.
Psilocybe mescaleroensis Guzmán, Walstad, E. Gándara & Ram.-Guill.
Psilocybe mexicana R. Heim
Psilocybe moseri Guzmán
Psilocybe muliercula Singer & A.H. Sm. (= Psilocybe wassonii R. Heim)
N
Psilocybe naematoliformis Guzmán
Psilocybe natalensis Gartz, Reid, Smith & Eicker
Psilocybe natarajanii Guzmán (= Psilocybe aztecorum var. bonetii (Guzmán) Guzmán s. Natarajan & Raman)
Psilocybe neorhombispora Guzmán & Horak
Psilocybe neoxalapensis Guzmán, Ram.-Guill. & Halling
Psilocybe ningshanensis X.L. He, W.Y. Huo, L.G. Zhang, Y. Liu & J.Z. Li
Psilocybe niveotropicalis Ostuni, Rockefeller, J. Jacobs & Birkebak
O
Psilocybe ovoideocystidiata Guzmán et Gaines
P
Psilocybe papuana Guzmán & Horak
Psilocybe paulensis (Guzmán & Bononi) Guzmán (= Psilocybe banderiliensis var. paulensis Guzmán & Bononi)
Psilocybe pelliculosa (A.H. Sm.) Singer & A.H. Sm.
Psilocybe pintonii Guzmán
Psilocybe pleurocystidiosa Guzmán
Psilocybe plutonia (Berk. & M.A. Curtis) Sacc.
Psilocybe portoricensis Guzmán, Tapia & Nieves-Rivera
Psilocybe pseudoaztecorum Natarajan & Raman
Psilocybe puberula Bas & Noordel.
Q
Psilocybe quebecensis Ola'h & R. Heim
R
Psilocybe rickii Guzmán & Cortez
Psilocybe rostrata (Petch) Pegler
Psilocybe rzedowskii Guzmán
S
Psilocybe samuiensis Guzmán, Bandala & Allen
Psilocybe schultesii Guzmán & S.H. Pollock
Psilocybe semilanceata (Fr. : Secr.) P. Kumm.
Psilocybe septentrionalis (Guzmán) Guzmán (= Psilocybe subaeriginascens Höhn. var. septentrionalis Guzmán)
Psilocybe serbica Moser & Horak (non ss. Krieglsteiner)
Psilocybe sierrae Singer (= Psilocybe subfimetaria Guzmán & A.H. Sm.)
Psilocybe silvatica (Peck) Singer & A.H. Sm.
Psilocybe singeri Guzmán
Psilocybe strictipes Singer & A.H. Sm.
Psilocybe stuntzii Guzman & Ott
Psilocybe subacutipilea Guzmán, Saldarriaga, Pineda, García & Velázquez
Psilocybe subaeruginascens Hohnel
Psilocybe subaeruginosa Cleland
Psilocybe subbrunneocystidiata P.S. Silva & Guzmán
Psilocybe subcaerulipes Hongo
Psilocybe subcubensis Guzmán
Psilocybe subpsilocybioides Guzmán, Lodge & S.A. Cantrell
Psilocybe subtropicalis Guzmán
T
Psilocybe tampanensis Guzmán & S.H. Pollock (photo)
Psilocybe tasmaniana Guzmán & Watling (1978)
Psilocybe thaiaerugineomaculans Guzmán, Karunarathna & Ram.-Guill.
Psilocybe thaicordispora Guzmán, Ram.-Guill. & Karun.
Psilocybe thaiduplicatocystidiata Guzmán, Karun. & Ram.-Guill.
U
Psilocybe uruguayensis Singer ex Guzmán
Psilocybe uxpanapensis Guzmán
V
Psilocybe venenata (S. Imai) Imaz. & Hongo (= Psilocybe fasciata Hongo; Stropharia caerulescens S. Imai)
W
Psilocybe wassoniorum Guzmán & S.H. Pollock
Psilocybe wayanadensis K.A. Thomas, Manim. & Guzmán
Psilocybe weilii Guzmán, Tapia & Stamets (photo)
Psilocybe weldenii Guzmán
Psilocybe weraroa Borovicka, Oborník & Noordel.
X
Psilocybe xalapensis Guzmán & A. López
Y
Psilocybe yungensis Singer & A.H. Sm.
Z
Psilocybe zapotecoantillarum Guzmán, T.J. Baroni & Lodge
Psilocybe zapotecocaribaea Guzmán, Ram.-Guill. & T.J. Baroni
Psilocybe zapotecorum
References
Entheogens
Psilocybin species
Psychoactive fungi
Psychedelic tryptamine carriers
Hallucinations | List of psilocybin mushroom species | [
"Biology"
] | 3,204 | [
"Set index articles on fungus common names",
"Set index articles on organisms"
] |
13,483,344 | https://en.wikipedia.org/wiki/Incantation%20bowl | Incantation bowls are a form of protective magic found in what is now Iraq and Iran. Produced in the Middle East during late antiquity from the sixth to eighth centuries, particularly in Upper Mesopotamia and Syria, the bowls were usually inscribed in a spiral, beginning from the rim and moving toward the center. Most are inscribed in Jewish Babylonian Aramaic.
The bowls were buried face down and were meant to capture demons. They were commonly placed under the threshold, courtyards, in the corner of the homes of the recently deceased and in cemeteries.
The majority of Mesopotamia's population were either Christian, Manichaean, Mandaean, Jewish, or adherents of the ancient Babylonian religion, all of whom spoke Aramaic dialects. Zoroastrians who spoke Persian also lived there. Mandaeans and Jews each used their own Aramaic variety, although the two were very closely related. A subcategory of incantation bowls are those used in Jewish and Christian magical practice (see Jewish magical papyri for context). The majority of recovered incantation bowls were written in Jewish Aramaic, followed in frequency by Mandaic and then Syriac. A handful of bowls have been discovered that were written in Arabic or Persian. An estimated 10% of incantation bowls were not written in any real language but in pseudo-script; these are thought to be forgeries by illiterate "scribes", sold to illiterate clients. The bowls are thought to have been regularly commissioned across religious lines.
Archaeological finds
To date only around 2000 incantation bowls have been registered as archaeological finds, but since they are widely dug up in the Middle East, there may be tens of thousands in the hands of private collectors and traders. Aramaic incantation bowls from Sasanian Mesopotamia are an important source for studying the everyday beliefs of Jews, Christians, Mandaeans, Manichaeans, Zoroastrians, and pagans on the eve of the early Muslim conquests.
In Judaism
A subcategory of incantation bowls are those used in Jewish and Christian magical practice. Aramaic incantation bowls are an important source of knowledge about Jewish magical practices, particularly the nearly eighty surviving Jewish incantation bowls from Babylonia under Sasanian rule (226–636), primarily from the Jewish diaspora settlement in Nippur. These bowls were used in magic to protect against evil influences such as the evil eye, Lilith, and Bagdana. They could be used by any member of the community, and almost every house excavated in the Jewish settlement in Nippur had such bowls buried in it.
The inscriptions often include scriptural quotes and quotes from rabbinic texts. The text on incantation bowls is the only written material documenting Jewish language and religion recovered from the period around the writing of the Babylonian Talmud. Scholars say that the use of rabbinic texts demonstrates that they were considered to have supernatural power comparable to that of biblical quotes. The bowls often refer to themselves as "amulets" and the Talmud discusses the use of amulets and magic to drive away demons.
In Christianity
In Christianity, during the same period and in the same region where traditional incantation bowls were prevalent, Christian incantation bowls emerged. These artifacts, often inscribed in Syriac, a dialect of the Aramaic language, demonstrate a syncretism of Christian and local magical beliefs. The inscriptions on these bowls typically include prayers, psalms, or invocations for protection against evil forces. Scholars interpret them as a unique manifestation of the blending of Christian and folk religious practices in the ancient Middle East; the content of the inscriptions and the cultural significance of these bowls within their historical context remain subjects of ongoing research.
In Mandaeism
There are also many incantation bowls written in Mandaic. Mandaic incantation bowls have been found in various archaeological sites in southern Mesopotamia, including bowls from Nippur that date to the early Islamic era.
Many are kept in museums and private collections around the world, including the British Museum and the Moussaieff Collection.
See also
Demons in Mandaeism
Phaistos Disc
References
Further reading
Bhayro, Siam, James Nathan Ford, Dan Levene, and Ortal-Paz Saar, Aramaic Magic Bowls in the Vorderasiatisches Museum in Berlin. Descriptive List and Edition of Selected Texts [Magical and Religious Literature of Late Antiquity 7], 2018.
Ford, James Nathan and Matthew Morgenstern, Aramaic Incantation Bowls in Museum Collections. Volume One: The Frau Professor Hilprecht Collection of Babylonian Antiquities, Jena [Magical and Religious Literature of Late Antiquity 8], 2019.
Gioia, Ted, "Healing songs", Format: Book, Electronic Resource 2006
Gordon, Cyrus H. “Aramaic Incantation Bowls.” Orientalia, vol. 10, 1941, pp. 116–141. JSTOR, JSTOR, www.jstor.org/stable/43582631.
Harari, Yuval, "Jewish Magic before the Rise of Kabbala", 2017.
Juusola, Hannu, "Linguistic peculiarities in the Aramaic magic bowl texts", Format: Book, Electronic Resource, 1999.
Levene, Dan, "A corpus of magic bowls : Incantation texts in Jewish Aramaic from late antiquity", format: Book, Electronic Resource, 2003.
McCullough, William Stewart, "Jewish and Mandaean incantation bowls in the Royal Ontario Museum", 1967.
Montgomery, James A., "Aramaic Incantation Texts from Nippur", 1913.
Müller-Kessler, Christa, "Die Zauberschalentexte in der Hilprecht-Sammlung, Jena und weitere Nippur-Texte anderer Sammlungen", 2005.
Naveh, Joseph and Shaked, Shaul, "Amulets and magic bowls : Aramaic incantations of late antiquity", 1985.
Naveh, Joseph and Shaked, Shaul, “Magic Spells and Formulae : Aramaic Incantations of Late Antiquity", 1993.
Kedar Dorit, Who wrote the Incantation Bowls? PhD Diss. (Freie Universität Berlin) 2018.
External links
Translation of an incantation bowl
Rare Magic Inscription on Human Skull Biblical Archaeology Review
How Aggressive is Aramaic Aggressive Magic. A paper by PhD candidate Chaya-Vered Dürrschnabel
Demons in Judaism
Iranian pottery
Magic items
Medieval Upper Mesopotamia
Objects believed to protect from evil
Syrian pottery
Language and mysticism
Mandaean texts
Texts in Aramaic
Archaeological corpora | Incantation bowl | [
"Physics"
] | 1,328 | [
"Magic items",
"Physical objects",
"Matter"
] |
13,484,159 | https://en.wikipedia.org/wiki/Acetorphine | Acetorphine is a potent opioid analgesic, up to 8700 times stronger than morphine by weight. It is a derivative of the more well-known opioid etorphine, which is used as a very potent veterinary painkiller and anesthetic medication, primarily for the sedation of large animals such as elephants, giraffes and rhinos.
Acetorphine was developed in 1966 by the Reckitt research group that developed etorphine. Acetorphine was developed for the same purpose as etorphine itself, namely as a strong tranquilizer for use in immobilizing large animals in veterinary medicine. Despite showing some advantages over etorphine, for instance producing less toxic side effects in giraffes, acetorphine was never widely adopted for veterinary use, and etorphine (along with other tranquilizers such as carfentanil and azaperone) remains the drug of choice in this application.
Legal status
Australia
Acetorphine is a schedule 9 substance in Australia under the Poisons Standard (February 2017). A schedule 9 drug is outlined in the Poisons Act 1964 as "Substances which may be abused or misused, the manufacture, possession, sale or use of which should be prohibited by law except when required for medical or scientific research, or for analytical, teaching or training purposes with approval of the CEO."
Under the Misuse of Drugs Act 1981, 6.0 g is the quantity used to determine the court of trial, and 2.0 g is the quantity deemed to show intent to sell and supply.
Germany
Acetorphine is illegal in Germany (Anlage I).
Romania
Acetorphine is prohibited in Romania.
United Kingdom
Acetorphine has been a Class A drug under the UK Misuse of Drugs Act since 1971, making its unlawful possession and distribution illegal. Class A drugs are deemed to be the most dangerous.
United States
Acetorphine is a Schedule I controlled substance in the United States. Its DEA Administrative Controlled Substances Control Number is 9319 and the one salt in use, acetorphine hydrochloride, has a freebase conversion ratio of 0.93.
Italy
In Italy acetorphine is illegal, as are the parent compounds etorphine and dihydroetorphine.
See also
6,14-Endoethenotetrahydrooripavine
References
External links
Animated rotating space-filling and ball-and-stick 3D models of acetorphine (acetyletorphine)
Static 2D rendering of acetorphine (acetyletorphine) as a flat structural diagram
Semisynthetic opioids
Mu-opioid receptor agonists
4,5-Epoxymorphinans
Tertiary alcohols
Ethers
Acetate esters | Acetorphine | [
"Chemistry"
] | 562 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
13,484,552 | https://en.wikipedia.org/wiki/Cyprenorphine | Cyprenorphine (M285), N-cyclopropylmethyl-6,14-endoetheno-7α-(1-hydroxy-1-methylethyl)-6,7,8,14-tetrahydronororipavine, is an opioid drug. It is related to more well-known opioids such as buprenorphine, which is used as an analgesic and for the treatment of opioid addiction, and diprenorphine, which is used as an antidote to reverse the effects of other opioids. It is roughly 35 times as strong as nalorphine.
Cyprenorphine is a potent and highly specific opioid receptor antagonist; it blocks the binding of morphine and etorphine to these receptors.
Cyprenorphine has mixed agonist–antagonist effects at opioid receptors, like those of buprenorphine. However, the effects of cyprenorphine are somewhat different, as it produces pronounced dysphoric and hallucinogenic effects which limit its potential use as an analgesic.
Cyprenorphine has also been shown to suppress the intake of sweet solutions, but it does not suppress the increase in food consumption produced by the alpha-2-adrenoceptor antagonist idazoxan. Idazoxan may lead to the release of endogenous opioid peptides and thereby increase food intake; this effect is attenuated by (−)-naloxone but not by the mu/delta-antagonist cyprenorphine.
Medical uses
Cyprenorphine increases locomotor activity. It is normally used to reverse the clinically immobilizing effects of etorphine, which are reversed rapidly and almost entirely. Etorphine is a chemical relative of morphine, with similar analgesic characteristics but fewer side effects. For instance, in order to handle polar bears and other large animals, they are immobilized using etorphine, and the effects of etorphine are reversed as soon as handling is complete using cyprenorphine. Etorphine and cyprenorphine are supplied together as white powders in a single package and cannot be purchased separately. Both are administered by injection after dissolving in saline. Because etorphine is used to immobilize large, still-moving animals, it is often administered intramuscularly using a dart, whereas cyprenorphine can be administered intravenously in the femoral vein of the immobile animal. Unlike other antagonists used to reverse the effects of etorphine, the dose of cyprenorphine administered depends on the initial dose of etorphine instead of the weight of the animal; the recommended dose of cyprenorphine is three times the initial etorphine dose. Although the effects of cyprenorphine typically take 40 to 60 seconds to appear, onset has been observed to take up to 3 hours in white rhinoceroses.
Adverse effects
Cyprenorphine induces depression over an hour in rats. It has also been found to induce psychotomimetic actions in humans and dysphoria when used as a post-operative analgesic in patients. Because of these side effects, it is seldom used in humans, with diprenorphine preferred instead.
Mechanism of action
Although it is still unclear how cyprenorphine antagonizes the effects of etorphine, it has been suggested that its greater potency may enable it to displace etorphine from shared binding sites in the brain. 16-Methylcyprenorphine, an analogue of cyprenorphine, is an antagonist of the delta, mu, and kappa opioid receptors, with antagonist equilibrium dissociation constants (Ke) at these receptors of 0.68, 0.076, and 0.79 nM, respectively.
References
Tertiary alcohols
Cyclopropyl compounds
4,5-Epoxymorphinans
Ethers
Kappa-opioid receptor agonists
Hydroxyarenes
Semisynthetic opioids | Cyprenorphine | [
"Chemistry"
] | 842 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
13,485,117 | https://en.wikipedia.org/wiki/AG%20489 | AG 489 (or agatoxin 489) is a component of the venom produced by Agelenopsis aperta, a North American funnel web spider. It inhibits the ligand gated ion channel TRPV1 through a pore blocking mechanism.
Discovery
To identify new inhibitors, capsaicin receptor (TRPV1) channels were screened against a venom library for inhibitory activity. Robust inhibitory activity was found in Agelenopsis aperta venom, and venom fractionation using reversed-phase HPLC allowed the purification of two acylpolyamine toxins, AG489 and AG505.
Both of these inhibit TRPV1 channels from the extracellular side of the membrane. To test a pore-blocking mechanism, pore mutations that change toxin affinity were identified: four mutants decreased toxin affinity and several mutants increased it. These results are consistent with the scanned TM5–TM6 linker region forming the outer vestibule of the channel, further confirming that AG489 is a pore blocker.
See also
Agatoxin
References
External links
Spider toxins
Indoles
Hydroxylamines
Polyamines | AG 489 | [
"Chemistry"
] | 238 | [
"Hydroxylamines",
"Reducing agents"
] |
13,485,805 | https://en.wikipedia.org/wiki/Radio-frequency%20engineering | Radio-frequency (RF) engineering is a subset of electrical engineering involving the application of transmission line, waveguide, antenna, radar, and electromagnetic field principles to the design and application of devices that produce or use signals within the radio band, the frequency range of about 20 kHz up to 300 GHz.
It is incorporated into almost everything that transmits or receives a radio wave, which includes, but is not limited to, mobile phones, radios, Wi-Fi, and two-way radios.
RF engineering is a highly specialized field that typically includes the following areas of expertise:
Design of antenna systems to provide radiative coverage of a specified geographical area by an electromagnetic field or to provide specified sensitivity to an electromagnetic field impinging on the antenna.
Design of coupling and transmission line structures to transport RF energy without radiation.
Application of circuit elements and transmission line structures in the design of oscillators, amplifiers, mixers, detectors, combiners, filters, impedance transforming networks and other devices.
Verification and measurement of performance of radio frequency devices and systems.
To produce quality results, the RF engineer needs to have an in-depth knowledge of mathematics, physics and general electronics theory as well as specialized training in areas such as wave propagation, impedance transformations, filters and microstrip printed circuit board design.
Radio electronics
Radio electronics is concerned with electronic circuits which receive or transmit radio signals.
Typically, such circuits must operate at radio frequency and power levels, which imposes special constraints on their design. These constraints increase in their importance with higher frequencies. At microwave frequencies, the reactance of signal traces becomes a crucial part of the physical layout of the circuit.
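As a rough illustration of why layout matters, consider the inductive reactance X_L = 2πfL of a short signal trace. Using a rule-of-thumb trace inductance of about 1 nH per millimetre (an assumed, geometry-dependent figure used here only for illustration), a 10 mm trace is a near-ideal conductor at audio frequencies but presents an impedance comparable to typical transmission-line impedances in the gigahertz range:

```python
# Inductive reactance X_L = 2*pi*f*L of a 10 mm PCB trace versus frequency.
# The ~1 nH/mm trace inductance is a rough, geometry-dependent rule of thumb.
import math

L_PER_MM = 1e-9              # henries per millimetre (illustrative)
trace_l = 10.0 * L_PER_MM    # ~10 nH for a 10 mm trace

for f_hz in (1e3, 1e6, 100e6, 2.4e9):
    x_l = 2.0 * math.pi * f_hz * trace_l
    print(f"{f_hz/1e6:10.3f} MHz -> X_L = {x_l:9.3f} ohm")
```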
List of radio electronics topics:
RF oscillators: Phase-locked loop, voltage-controlled oscillator
Transmitters, transmission lines, transmission line tuners, RF connectors
Antennas, antenna theory
Receivers, tuners
Amplifiers
Modulators, demodulators, detectors
RF filters
RF shielding, ground plane
Direct-sequence spread spectrum (DSSS), noise power
Digital radio
RF power amplifiers
Metal–oxide–semiconductor field-effect transistor (MOSFET)s: Power MOSFET, Laterally-diffused metal-oxide semiconductor (LDMOS)
Bipolar junction transistors
Baseband processors (Complementary metal–oxide–semiconductor (CMOS))
RF CMOS (mixed-signal integrated circuits)
Duties
Radio-frequency engineers are specialists in their respective field and can take on many different roles, such as design, installation, and maintenance. Radio-frequency engineers require many years of extensive experience in the area of study. This type of engineer has experience with transmission systems, device design, and placement of antennas for optimum performance.
The RF engineer job description at a broadcast facility can include maintenance of the station's high-power broadcast transmitters and associated systems. This includes transmitter site emergency power, remote control, main transmission line and antenna adjustments, microwave radio relay STL/TSL links, and more.
In addition, a radio-frequency design engineer must be able to understand electronic hardware design, circuit board material, antenna radiation, and the effect of interfering frequencies that prevent optimum performance within the piece of equipment being developed.
Mathematics
There are many applications of electromagnetic theory to radio-frequency engineering, using conceptual tools such as vector calculus and complex analysis. Topics studied in this area include waveguides and transmission lines, the behavior of radio antennas, and the propagation of radio waves through the Earth's atmosphere. Historically, the subject played a significant role in the development of nonlinear dynamics.
See also
Broadcast engineering
Information theory
Microwave engineering
Overlap zone
Radar engineering
Radio resource management
Radio-frequency current
SPLAT! (software)
List of textbooks in electromagnetism
References
External links
Practical Guide to Radio-Frequency Analysis and Design
Radio spectrum
Radio waves
Radio waves
Electromagnetic spectrum
Broadcast engineering
Electrical engineering
Electronic engineering
Broadcasting occupations
Engineering occupations
MOSFETs
Telecommunications techniques | Radio-frequency engineering | [
"Physics",
"Technology",
"Engineering"
] | 786 | [
"Information and communications technology",
"Broadcast engineering",
"Physical phenomena",
"Telecommunications engineering",
"Radio spectrum",
"Spectrum (physical sciences)",
"Computer engineering",
"Electromagnetic spectrum",
"Radio technology",
"Waves",
"Motion (physics)",
"Electronic engin... |
13,485,881 | https://en.wikipedia.org/wiki/Prelude%20SIEM | Prelude SIEM is a security information and event management (SIEM) system.
Prelude SIEM is a tool for managing IT security that collects and centralizes information about an organization's IT security to offer a single point of view from which to manage it. It can create alerts about intrusions and security threats in the network in real time using logs and flow analyzers. Prelude SIEM provides multiple tools for forensic reporting on Big Data and Smart Data to identify weak signals and Advanced Persistent Threats (APT). Prelude SIEM also embeds all the tools needed for the exploitation phase, making work easier for operators and helping them with risk management.
While a malicious user (or software) may be able to evade the detection of a single intrusion detection system, it becomes exponentially more difficult to get around defenses when there are multiple protection mechanisms. Prelude SIEM comes with a large set of sensors, each of them monitoring different event types. Prelude SIEM permits alert collection to the WAN scale, whether its scope covers a city, a country, a continent or the world.
Prelude SIEM is capable of interoperating with the other systems available on the market. It natively implements the Intrusion Detection Message Exchange Format (IDMEF, RFC 4765). In this way, it is natively IDMEF-compatible with open-source IDS: AuditD, Nepenthes, NuFW, OSSEC, Pam, Samhain, Sancp, Snort, Suricata, Kismet, etc., but anyone can write their own IDS or use any of the third-party sensors available, given Prelude SIEM's open APIs and libraries.
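RFC 4765 specifies IDMEF as an XML data model whose messages carry elements such as Alert, Analyzer, CreateTime, and Classification. As a simplified sketch, the snippet below builds such a message with Python's standard library only; it does not use Prelude's own client libraries, the analyzer ID and classification text are invented, and required details such as CreateTime's ntpstamp attribute are omitted.

```python
# Build a minimal, simplified IDMEF alert (RFC 4765) with the standard library.
# Element names come from the RFC; attribute values here are illustrative.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

msg = ET.Element("IDMEF-Message")
alert = ET.SubElement(msg, "Alert", messageid="abc123")
ET.SubElement(alert, "Analyzer", analyzerid="hids-01")
created = ET.SubElement(alert, "CreateTime")
created.text = datetime.now(timezone.utc).isoformat()
ET.SubElement(alert, "Classification", text="Possible brute-force login")

print(ET.tostring(msg, encoding="unicode"))
```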
Since 2016, with the "Prelude IDMEF Partner Program", Prelude SIEM is now also IDMEF compatible with many commercial IDS.
Prelude SIEM provides all SIEM functions through three modules: ALERT (SEM), ANALYZE, and ARCHIVE (SIM), making it a complete SIEM alternative on the market. In addition, Prelude SIEM promotes the use of IETF security standards through the SECEF project and the "Prelude IDMEF Partner Program".
History
1998: Creation of an IDS project by Yoann Vandoorselaere: Prelude IDS
2002: Prelude becomes a Hybrid IDS
2005: Creation of the company Prelude-Technologies
2009: The company INL acquires Prelude-Technologies
2009: INL becomes Edenwall Technologies
2011-08-18: Edenwall Technologies declares suspension of payments; the Prelude-IDS software, the company, and the brand are put up for sale
2011-10-13: CS (Communication & Systems), an Edenwall partner, buys Prelude-IDS
2012: Opening of the websites: www.prelude-ids.org and www.prelude-ids.com (Now www.prelude-siem.com)
2012: Release of the new version Prelude OSS 1.1 and Prelude Enterprise 1.1
2014: Release of Prelude Enterprise V2
2014: Prelude IDS becomes Prelude SIEM and Prelude Enterprise becomes Prelude SOC
2015: Prelude SIEM receives the "France Cybersecurity" award
2016: Prelude SIEM launches the "Prelude IDMEF Partner Program"
2016: Prelude SIEM OSS (community version) receives an OW2 award for its community
2017: Release of Prelude SIEM 4.0, the result of two years of research and development efforts
2017: New packaging of Prelude SIEM available as a virtual machine
Functions
Prelude SIEM collects, normalizes, sorts, aggregates, correlates, and displays all security events regardless of the type of surveillance equipment. Beyond its capacity to process all types of event logs (system logs, syslog, flat files, etc.), it is also natively compatible with many IDS.
Prelude SIEM's main characteristics are the following:
Built on an open-source core (Python, C), light web client 2.0
"Agent-less" operation
Compliant with Intrusion Detection Message Exchange Format (IDMEF, RFC 4765), Incident Object Description Exchange Format (IODEF, RFC 5070), HTTP, XML, SSL standards
Smart Data: Smart correlation of security events
Big Data: Collection, storage and indexing of logs
Modular, flexible and resilient
Hierarchical and decentralized architecture
Prelude SIEM Community version
Prelude SIEM OSS has been designed in a scalable way to adapt simply to any environment. It is a free, public, and open-source version (GPLv2) for small IT infrastructures, tests, and educational purposes.
The open-source version is composed of the following main modules:
Manager: which receives and stores alerts into the database
LibPrelude: connects IDMEF agents to Prelude SIEM
LibPreludeDB: high-speed database insertion module
Correlator: event correlation module
LML (Log Management Lackey): detects and normalizes important logs
Prewikka: web graphical user interface (GUI)
These modules are the base of the ALERT module in the commercial version. The commercial version also adds many functionalities to these modules and scale up the performances and architecture possibilities.
Prelude SIEM and Prelude SOC
Prelude SIEM (commercial version) is a scalable, professionally usable, high-performance version of Prelude for real-world environments. Prelude SOC is a fully scaled version, mainly for Security Operations Center (SOC) usage.
The commercial versions are organized as follows:
Prelude SIEM: SIEM for enterprise with modules: ALERT, ANALYSE, and ARCHIVE
ALERT: Storage, Detection, Normalization, Correlation, Aggregation, Real-time Notification
ANALYSE: Analyze, Reporting and Compliance
ARCHIVE: Storage, Indexation of logs and flows for forensic
Prelude SOC: in addition to the Prelude SIEM modules, further operational security modules can be added to build a Security Operations Center (SOC)
MAP: Real-time cartography of the IT infrastructure with security indicators. It is possible to drill down and create physical, logical, or risk-management representations
VULN: Vulnerability scanner based on OpenVAS. It is possible to use it inside the correlator to make cross-correlation
ASSET: Asset management based on GLPI (assets, tickets, workflow, etc.)
REPORT: Business Intelligence reporting
References
External links
Official Website
Prelude SIEM OSS
Five questions about Prelude SIEM
Computer network security
Linux security software
Unix security-related software
Intrusion detection systems | Prelude SIEM | [
"Engineering"
] | 1,320 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
13,486,743 | https://en.wikipedia.org/wiki/Bhaskara%27s%20lemma | Bhaskara's Lemma is an identity used as a lemma during the chakravala method. It states that:
$$Nx^2 + k = y^2 \implies N\left(\frac{mx+y}{k}\right)^2 + \frac{m^2-N}{k} = \left(\frac{my+Nx}{k}\right)^2$$
for integers $m$, $x$, $y$, $N$, and non-zero integer $k$.
Proof
The proof follows from simple algebraic manipulations as follows: multiply both sides of the equation by $m^2-N$, add $N^2x^2+2Nmxy+Ny^2$, factor, and divide by $k^2$.
So long as neither $k$ nor $m^2-N$ is zero, the implication goes in both directions. (The lemma holds for real or complex numbers as well as integers.)
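A worked version of the manipulation just described, following the steps named above:

```latex
\begin{align*}
(Nx^2 + k)(m^2 - N) &= y^2(m^2 - N)
  && \text{multiply both sides by } m^2 - N \\
N(mx + y)^2 + k(m^2 - N) &= (my + Nx)^2
  && \text{add } N^2x^2 + 2Nmxy + Ny^2 \text{ and factor} \\
N\left(\tfrac{mx+y}{k}\right)^2 + \tfrac{m^2-N}{k} &= \left(\tfrac{my+Nx}{k}\right)^2
  && \text{divide by } k^2
\end{align*}
```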
References
C. O. Selenius, "Rationale of the chakravala process of Jayadeva and Bhaskara II", Historia Mathematica, 2 (1975), 167-184.
C. O. Selenius, Kettenbruch theoretische Erklarung der zyklischen Methode zur Losung der Bhaskara-Pell-Gleichung, Acta Acad. Abo. Math. Phys. 23 (10) (1963).
George Gheverghese Joseph, The Crest of the Peacock: Non-European Roots of Mathematics (1975).
External links
Introduction to chakravala
Diophantine equations
Number theoretic algorithms
Lemmas in algebra
Indian mathematics
Articles containing proofs | Bhaskara's lemma | [
"Mathematics"
] | 257 | [
"Theorems in algebra",
"Mathematical objects",
"Equations",
"Lemmas in algebra",
"Diophantine equations",
"Articles containing proofs",
"Lemmas",
"Number theory"
] |
13,487,160 | https://en.wikipedia.org/wiki/Heterocodeine | Heterocodeine (6-methoxymorphine) is an opiate derivative, the 6-methyl ether of morphine, and a structural isomer of codeine; it is called "hetero-" because it is the reverse isomer of codeine. Heterocodeine was first synthesised in 1932 and first patented in 1935. It can be made from morphine by selective methylation. Codeine is the natural mono-methyl ether, but must be metabolized for activity (that is, it is a prodrug). In contrast the semi-synthetic mono-methyl ether, heterocodeine is a direct agonist. The 6,7,8,14 tetradehydro 3,6 methyl di-ether of morphine is thebaine.
Heterocodeine is 6 times more potent than morphine due to having a substitution at the 6-hydroxy position, in a similar manner to 6-acetylmorphine. The drug methyldihydromorphine (dihydroheterocodeine) is a derivative of heterocodeine. Like the morphine metabolite morphine-6-glucuronide, 6-position branches (esters or ethers) of morphine bind to the otherwise unagonized human mu receptor subtype mu-3 (or μ3); as well as the 6-acetylmorphine metabolite of heroin this includes heterocodeine.
The relative strength of heterocodeine to codeine has been published as 50, 72, 81, 88, 93, 96, and 108 ×.
It is not mentioned specifically in the Controlled Substances Act of 1970, but it is a Schedule II controlled substance as an analogue of morphinan or morphine under the morphine structure rules of the Analogues Act; in other countries it is usually controlled as a strong opioid.
Homocodeine is a synonym for pholcodine. Bicodeine is a dimer of codeine which is essentially the codeine analogue of pseudomorphine and is also known as pseudocodeine. It is an occasional component of opium and is also a decomposition product of codeine under certain circumstances.
References
4,5-Epoxymorphinans
Ethers
Mu-opioid receptor agonists
Hydroxyarenes
Semisynthetic opioids | Heterocodeine | [
"Chemistry"
] | 489 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
13,487,725 | https://en.wikipedia.org/wiki/Photostationary%20state | The photostationary state of a reversible photochemical reaction is the equilibrium chemical composition under a specific kind of electromagnetic irradiation (usually a single wavelength of visible or UV radiation).
It is a property of particular importance in photochromic compounds, often used as a measure of their practical efficiency and usually quoted as a ratio or percentage.
The position of the photostationary state is primarily a function of the irradiation parameters, the absorbance spectra of the chemical species, and the quantum yields of the reactions. The photostationary state can be very different from the composition of a mixture at thermodynamic equilibrium. As a consequence, photochemistry can be used to produce compositions that are "contra-thermodynamic".
For instance, although cis-stilbene is "uphill" from trans-stilbene in a thermodynamic sense, irradiation of trans-stilbene results in a mixture that is predominantly the cis isomer. As an extreme example, irradiation of benzene at 237 to 254 nm results in formation of benzvalene, an isomer of benzene that is 71 kcal/mol higher in energy than benzene itself.
Overview
Absorption of radiation by reactants of a reaction at equilibrium increases the rate of forward reaction without directly affecting the rate of the reverse reaction.
The rate of a photochemical reaction is proportional to the absorption cross section of the reactant with respect to the excitation source (σ), the quantum yield of reaction (Φ), and the intensity of the irradiation. In a reversible photochemical reaction between compounds A and B, there will therefore be a "forwards" reaction of $A \to B$ at a rate proportional to $\sigma_A \Phi_{A \to B}[A]$ and a "backwards" reaction of $B \to A$ at a rate proportional to $\sigma_B \Phi_{B \to A}[B]$. The ratio of the rates of the forward and backwards reactions determines where the equilibrium lies, and thus the photostationary state is found at:
$$\frac{[B]}{[A]} = \frac{\sigma_A \Phi_{A \to B}}{\sigma_B \Phi_{B \to A}}$$
If (as is always the case to some extent) the compounds A and B have different absorption spectra, then there may exist wavelengths of light where $\sigma_A$ is high and $\sigma_B$ is low. Irradiation at these wavelengths will provide photostationary states that contain mostly B. Likewise, wavelengths that give photostationary states of predominantly A may exist. This is particularly likely in compounds such as some photochromics, where A and B have entirely different absorption bands. Compounds that may be readily switched in this way find utility in devices such as molecular switches and optical data storage.
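As a hedged numerical illustration of the rate-balance expression above, the photostationary fraction of B can be computed from the cross sections and quantum yields. The values below are invented for the example, not measured data.

```python
# Minimal sketch: photostationary composition from cross sections (sigma) and
# quantum yields (phi); all numeric inputs are illustrative assumptions.

def photostationary_fraction_B(sigma_A, phi_AB, sigma_B, phi_BA):
    """Fraction of B at the photostationary state, from the rate balance
    sigma_A * phi_AB * [A] = sigma_B * phi_BA * [B]."""
    ratio_B_over_A = (sigma_A * phi_AB) / (sigma_B * phi_BA)
    return ratio_B_over_A / (1.0 + ratio_B_over_A)

# A wavelength where A absorbs strongly and B weakly gives mostly B:
print(photostationary_fraction_B(sigma_A=1.0, phi_AB=0.5, sigma_B=0.05, phi_BA=0.4))  # ~0.96
# A wavelength where B absorbs strongly gives mostly A:
print(photostationary_fraction_B(sigma_A=0.05, phi_AB=0.5, sigma_B=1.0, phi_BA=0.4))  # ~0.06
```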
Practical considerations
Quantum yields of reaction (and to a lesser extent, absorption cross sections) are usually temperature and environment-dependent to some extent, and the photostationary state may therefore depend slightly on temperature and solvent as well as on the excitation.
If thermodynamic interconversion of A and B can take place on a similar timescale to the photochemical reaction, it can complicate experimental measurements. This phenomenon can be important, for example in photochromic eyeglasses.
References
Photochemistry | Photostationary state | [
"Chemistry"
] | 623 | [
"nan"
] |
13,488,005 | https://en.wikipedia.org/wiki/Magellanic%20Cloud%20Emission-line%20Survey | The Magellanic Cloud Emission Line Survey (MCELS) is a joint project of Cerro Tololo Inter-American Observatory (Chile) and the University of Michigan using the CTIO Curtis/Schmidt Telescope. The main goal of the project is to trace the ionized gas in the Magellanic Clouds using narrow-band filters ([S II], Hα and [O III]) and investigate the physical properties of the interstellar medium of these galaxies. Those emission lines are produced by different astrophysical objects and processes.
References
External links
MCELS web site
Cerro Tololo Interamerican Observatory (CTIO)
Astronomy Department, University of Illinois at Urbana
Astronomical surveys
Magellanic Clouds
University of Michigan | Magellanic Cloud Emission-line Survey | [
"Astronomy"
] | 146 | [
"Astronomical surveys",
"Astronomical objects",
"Astronomy stubs",
"Works about astronomy"
] |
13,489,396 | https://en.wikipedia.org/wiki/Chakragati%20mouse | Chakragati mouse (ckr) is an insertional transgenic mouse mutant (Mus musculus) displaying hyperactive behaviour and circling. It is also deficient in prepulse inhibition, latent inhibition and has brain abnormalities such as lateral ventricular enlargement that are typical to endophenotypic models of schizophrenia, which make it useful in screening for antipsychotic drug candidates. The mouse is currently licensed by Chakra Biotech.
References
External links
Chakragati gene information on the ckr gene from Mouse Genome Informatics database
Molecular neuroscience
Molecular genetics | Chakragati mouse | [
"Chemistry",
"Biology"
] | 123 | [
"Molecular neuroscience",
"Molecular genetics",
"Molecular biology"
] |
13,491,397 | https://en.wikipedia.org/wiki/Manganese%28III%29%20acetate | Manganese(III) acetate describes a family of materials with the approximate formula Mn(O2CCH3)3. These materials are brown solids that are soluble in acetic acid and water. They are used in organic synthesis as oxidizing agents.
Structure
Although manganese(III) triacetate has not been reported, salts of basic manganese(III) acetate are well characterized. Basic manganese acetate adopts a structure reminiscent of those of basic chromium acetate and basic iron acetate. The formula is [Mn3O(O2CCH3)6Ln]X where L is a ligand and X is an anion. The salt [Mn3O(O2CCH3)6]O2CCH3·HO2CCH3 has been confirmed by X-ray crystallography.
Preparation
It is usually used as the dihydrate, although the anhydrous form is also used in some situations. The dihydrate is prepared by combining potassium permanganate and manganese(II) acetate in acetic acid. Addition of acetic anhydride to the reaction produces the anhydrous form. It can also be synthesized by an electrochemical method starting from Mn(OAc)2.
Use in organic synthesis
Manganese triacetate has been used as a one-electron oxidant. It can oxidize alkenes via addition of acetic acid to form lactones.
This process is thought to proceed via the formation of a •CH2CO2H radical intermediate, which then reacts with the alkene, followed by additional oxidation steps and finally ring closure. When the alkene is not symmetric, the major product depends on the nature of the alkene, and is consistent with initial formation of the more stable radical (among the two carbons of the alkene) followed by ring closure onto the more stable conformation of the intermediate.
When reacted with enones, the carbon on the other side of the carbonyl reacts rather than the alkene portion, leading to α'-acetoxy enones. In this process, the carbon next to the carbonyl is oxidized by the manganese, followed by transfer of acetate from the manganese to it.
It can similarly oxidize β-ketoesters at the α carbon, and this intermediate can react with various other structures, including halides and alkenes (see: manganese-mediated coupling reactions). One extension of this idea is the cyclization of the ketoester portion of the molecule with an alkene elsewhere in the same structure.
See also
Manganese(III) chloride
Manganese(II) acetate
Chromium(III) acetate
Iron(III) acetate
Zinc acetate
References
Acetates
Manganese(III) compounds
Oxidizing agents
Coordination polymers | Manganese(III) acetate | [
"Chemistry"
] | 595 | [
"Redox",
"Oxidizing agents"
] |
1,575,643 | https://en.wikipedia.org/wiki/Mass%20flow%20meter | A mass flow meter, also known as an inertial flow meter, is a device that measures mass flow rate of a fluid traveling through a tube. The mass flow rate is the mass of the fluid traveling past a fixed point per unit time.
The mass flow meter does not measure the volume per unit time (e.g. cubic meters per second) passing through the device; it measures the mass per unit time (e.g. kilograms per second) flowing through the device. Volumetric flow rate is the mass flow rate divided by the fluid density. If the density is constant, then the relationship is simple. If the fluid has varying density, then the relationship is not simple. For example, the density of the fluid may change with temperature, pressure, or composition. The fluid may also be a combination of phases such as a fluid with entrained bubbles. The actual density can also be determined from the dependence of sound velocity on the concentration of the liquid being measured.
Operating principle of a Coriolis flow meter
The Coriolis flow meter is based on the Coriolis force, which bends rotating objects depending on their velocity.
There are two basic configurations of Coriolis flow meter: the curved tube flow meter and the straight tube flow meter. This article discusses the curved tube design.
The animations on the right do not represent an actually existing Coriolis flow meter design. The purpose of the animations is to illustrate the operating principle, and to show the connection with rotation.
Fluid is being pumped through the mass flow meter. When there is mass flow, the tube twists slightly. The arm through which fluid flows away from the axis of rotation must exert a force on the fluid, to increase its angular momentum, so it bends backwards. The arm through which fluid is pushed back to the axis of rotation must exert a force on the fluid to decrease the fluid's angular momentum again, hence that arm will bend forward. In other words, the inlet arm (containing an outwards directed flow), is lagging behind the overall rotation, the part which in rest is parallel to the axis is now skewed, and the outlet arm (containing an inwards directed flow) leads the overall rotation.
The animation on the right represents how curved tube mass flow meters are designed. The fluid is led through two parallel tubes. An actuator (not shown) induces equal counter vibrations on the sections parallel to the axis, to make the measuring device less sensitive to outside vibrations. The actual frequency of the vibration depends on the size of the mass flow meter, and ranges from 80 to 1000 Hz. The amplitude of the vibration is too small to be seen, but it can be felt by touch.
When no fluid is flowing, the motion of the two tubes is symmetrical, as shown in the left animation. The animation on the right illustrates what happens during mass flow: some twisting of the tubes. The arm carrying the flow away from the axis of rotation must exert a force on the fluid to accelerate the flowing mass to the vibrating speed of the tubes at the outside (increase of absolute angular momentum), so it is lagging behind the overall vibration. The arm through which fluid is pushed back towards the axis of movement must exert a force on the fluid to decrease the fluid's absolute angular speed (angular momentum) again, hence that arm leads the overall vibration.
The inlet arm and the outlet arm vibrate with the same frequency as the overall vibration, but when there is mass flow the two vibrations are out of sync: the inlet arm is behind, the outlet arm is ahead. The two vibrations are shifted in phase with respect to each other, and the degree of phase-shift is a measure for the amount of mass that is flowing through the tubes and line.
Density and volume measurements
The mass flow of a U-shaped Coriolis flow meter is given as:
$$Q_m = \frac{K_u - I_u \omega^2}{2Kd^2}\,\tau$$
where $K_u$ is the temperature-dependent stiffness of the tube, $K$ is a shape-dependent factor, $d$ is the width, $\tau$ is the time lag, $\omega$ is the vibration frequency, and $I_u$ is the inertia of the tube. As the inertia of the tube depends on its contents, knowledge of the fluid density is needed for the calculation of an accurate mass flow rate.
If the density changes too often for manual calibration to be sufficient, the Coriolis flow meter can be adapted to measure the density as well. The natural vibration frequency of the flow tubes depends on the combined mass of the tube and the fluid contained in it. By setting the tube in motion and measuring the natural frequency, the mass of the fluid contained in the tube can be deduced. Dividing this mass by the known volume of the tube gives the density of the fluid.
An instantaneous density measurement allows the calculation of flow in volume per time by dividing mass flow with density.
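A minimal sketch of this measurement chain, assuming the U-tube formula quoted above. Every numeric parameter below is an invented, illustrative value; real meters use per-device calibration constants.

```python
import math

def mass_flow(K_u, I_u, omega, K, d, tau):
    """Mass flow rate Qm = (K_u - I_u * omega^2) / (2 * K * d^2) * tau."""
    return (K_u - I_u * omega**2) / (2.0 * K * d**2) * tau

def fluid_density(k_tube, f_natural, m_tube, volume):
    """Density from the measured natural frequency of the fluid-filled tube:
    total oscillating mass m = k / (2*pi*f)^2, minus the empty-tube mass,
    divided by the internal volume."""
    m_total = k_tube / (2.0 * math.pi * f_natural) ** 2
    return (m_total - m_tube) / volume

qm = mass_flow(K_u=8.0e4, I_u=1.0e-4, omega=2 * math.pi * 300, K=1.2, d=0.05, tau=1e-7)
rho = fluid_density(k_tube=8.0e4, f_natural=295.0, m_tube=0.020, volume=3.3e-6)
print(qm)        # ~1.33 kg/s mass flow
print(rho)       # ~995 kg/m^3, a water-like density
print(qm / rho)  # ~1.3e-3 m^3/s volumetric flow, as described above
```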
Calibration
Both mass flow and density measurements depend on the vibration of the tube. Calibration is affected by changes in the rigidity of the flow tubes.
Changes in temperature and pressure will cause the tube rigidity to change, but these can be compensated for through pressure and temperature zero and span compensation factors.
Additional effects on tube rigidity will cause shifts in the calibration factor over time due to degradation of the flow tubes. These effects include pitting, cracking, coating, erosion or corrosion. It is not possible to compensate for these changes dynamically, but efforts to monitor the effects may be made through regular meter calibration or verification checks. If a change is deemed to have occurred, but is considered to be acceptable, the offset may be added to the existing calibration factor to ensure continued accurate measurement.
See also
Coriolis effect
Flow measurement
Gaspard-Gustave Coriolis
Oscillating U-tube
References
External links
Lecture slides on flow measurement, University of Minnesota
Classical mechanics
Flow meters
Mass | Mass flow meter | [
"Physics",
"Chemistry",
"Mathematics",
"Technology",
"Engineering"
] | 1,191 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Measuring instruments",
"Size",
"Mechanics",
"Flow meters",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
1,575,699 | https://en.wikipedia.org/wiki/HD%2052265 | HD 52265 is a star with an orbiting exoplanet companion in the equatorial constellation of Monoceros. It is dimly visible to the naked eye with an apparent visual magnitude of 6.29. The star is located at a distance of 98 light-years based on parallax measurements, and is drifting further away with a heliocentric radial velocity of 54 km/s. It has been given the proper name Citalá, after "river of stars" in the native Nahuat language. The name was selected in the NameExoWorlds campaign by El Salvador, during the 100th anniversary of the IAU.
This is a G-type main-sequence star with a stellar classification of G0 V. It is 21% more massive than the Sun and is 27% larger in radius. The star is 2.6 billion years old and is spinning with a rotation period of 12.3 days. It is radiating more than double the luminosity of the Sun from its photosphere at an effective temperature of 6,163 K. The level of chromospheric activity is similar to the Sun.
Planetary system
In 2000 the California and Carnegie Planet Search team announced the discovery of an extrasolar planet orbiting the star. It was independently discovered by the Geneva Extrasolar Planet Search team. A second planet in the system has been suspected since 2013. The first planet has since been designated Cayahuanca by the IAU, which means "the rock" in the Nahuat language.
See also
List of extrasolar planets
References
External links
Wobbly, Sunlike Star Being Pulled by Giant Alien Planet, Charles Q. Choi
G-type main-sequence stars
Planetary systems with one confirmed planet
Monoceros
2622
Durchmusterung objects
052265
033719 | HD 52265 | [
"Astronomy"
] | 365 | [
"Monoceros",
"Constellations"
] |
1,575,704 | https://en.wikipedia.org/wiki/Neil%20Robertson%20%28mathematician%29 | George Neil Robertson (born November 30, 1938) is a mathematician working mainly in topological graph theory, currently a distinguished professor emeritus at the Ohio State University.
Education
Robertson earned his B.Sc. from Brandon College in 1959 and his Ph.D. in 1969 at the University of Waterloo under his doctoral advisor William Tutte.
Biography
In 1969, Robertson joined the faculty of the Ohio State University, where he was promoted to Associate Professor in 1972 and Professor in 1984. He was a consultant with Bell Communications Research from 1984 to 1996. He has held visiting faculty positions in many institutions, most extensively at Princeton University from 1996 to 2001, and at Victoria University of Wellington, New Zealand, in 2002. He also holds an adjunct position at King Abdulaziz University in Saudi Arabia.
Research
Robertson is known for his work in graph theory, and particularly for a long series of papers co-authored with Paul Seymour and published over a span of many years, in which they proved the Robertson–Seymour theorem (formerly Wagner's Conjecture). This states that families of graphs closed under the graph minor operation may be characterized by a finite set of forbidden minors. As part of this work, Robertson and Seymour also proved the graph structure theorem describing the graphs in these families.
Additional major results in Robertson's research include the following:
In 1964, Robertson discovered the Robertson graph, the smallest possible 4-regular graph with girth five.
In 1993, with Seymour and Robin Thomas, Robertson proved the $K_6$-free case for which the Hadwiger conjecture relating graph coloring to graph minors is known to be true.
In 1996, Robertson, Seymour, Thomas, and Daniel P. Sanders published a new proof of the four color theorem, confirming the Appel–Haken proof which until then had been disputed. Their proof also leads to an efficient algorithm for finding 4-colorings of planar graphs.
In 2006, Robertson, Seymour, Thomas, and Maria Chudnovsky, proved the long-conjectured strong perfect graph theorem characterizing the perfect graphs by forbidden induced subgraphs.
Awards and honors
Robertson has won the Fulkerson Prize three times, in 1994 for his work on the Hadwiger conjecture, in 2006 for the Robertson–Seymour theorem, and in 2009 for his proof of the strong perfect graph theorem.
He also won the Pólya Prize (SIAM) in 2004, the OSU Distinguished Scholar Award in 1997, and the Waterloo Alumni Achievement Medal in 2002. In 2012 he became a fellow of the American Mathematical Society.
See also
List of University of Waterloo people
References
External links
Neil Robertson's homepage at Ohio State University
Short conference video. Neil Robertson - Some thoughts on Hadwiger's Conjecture. June 28, 1999. Video produced by Bojan Mohar.
20th-century American mathematicians
21st-century American mathematicians
Graph theorists
University of Waterloo alumni
Living people
1938 births
Fellows of the American Mathematical Society
Ohio State University faculty | Neil Robertson (mathematician) | [
"Mathematics"
] | 587 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
1,575,708 | https://en.wikipedia.org/wiki/Paul%20Seymour%20%28mathematician%29 | Paul D. Seymour (born 26 July 1950) is a British mathematician known for his work in discrete mathematics, especially graph theory. He (with others) was responsible for important progress on regular matroids and totally unimodular matrices, the four colour theorem, linkless embeddings, graph minors and structure, the perfect graph conjecture, the Hadwiger conjecture, claw-free graphs, χ-boundedness, and the Erdős–Hajnal conjecture. Many of his recent papers are available from his website.
Seymour is currently the Albert Baldwin Dod Professor of Mathematics at Princeton University. He won a Sloan Fellowship in 1983, and the Ostrowski Prize in 2003; and (sometimes with others) won the Fulkerson Prize in 1979, 1994, 2006 and 2009, and the Pólya Prize in 1983 and 2004. He received an honorary doctorate from the University of Waterloo in 2008, one from the Technical University of Denmark in 2013, and one from the École normale supérieure de Lyon in 2022. He was an invited speaker in the 1986 International Congress of Mathematicians and a plenary speaker in the 1994 International Congress of Mathematicians. He became a Fellow of the Royal Society in 2022.
Early life
Seymour was born in Plymouth, Devon, England. He was a day student at Plymouth College, and then studied at Exeter College, Oxford, gaining a BA degree in 1971, and D.Phil in 1975.
Career
From 1974 to 1976 he was a college research fellow at University College of Swansea, and then returned to Oxford for 1976–1980 as a Junior Research Fellow at Merton College, Oxford, with the year 1978–79 at University of Waterloo. He became an associate and then a full professor at Ohio State University, Columbus, Ohio, between 1980 and 1983, where he began research with Neil Robertson,
a fruitful collaboration that continued for many years. From 1983 until 1996, he was at Bellcore (Bell Communications Research), Morristown, New Jersey (now Telcordia Technologies). He was also an adjunct professor at Rutgers University from 1984 to 1987 and at the University of Waterloo from 1988 to 1993. He became professor at Princeton University in 1996. He is Editor-in-Chief (jointly with Carsten Thomassen) for the Journal of Graph Theory, and an editor for Combinatorica and the Journal of Combinatorial Theory, Series B.
Personal life
He married Shelley MacDonald of Ottawa in 1979, and they have two children, Amy and Emily. The couple separated amicably in 2007. His brother Leonard W. Seymour is Professor of gene therapy at Oxford University.
Major contributions
Combinatorics in Oxford in the 1970s was dominated by matroid theory, due to the influence of Dominic Welsh and Aubrey William Ingleton. Much of Seymour's early work, up to about 1980, was on matroid theory, and included three important matroid results: his D.Phil. thesis on matroids with the max-flow min-cut property (for which he won his first Fulkerson prize); a characterisation by excluded minors of the matroids representable over the three-element field; and a theorem that all regular matroids consist of graphic and cographic matroids pieced together in a simple way (which won his first Pólya prize). There were several other significant papers from this period: a paper with Welsh on the critical probabilities for bond percolation on the square lattice; a paper on edge-multicolouring of cubic graphs, which foreshadows the matching lattice theorem of László Lovász; a paper proving that all bridgeless graphs admit nowhere-zero 6-flows, a step towards Tutte's nowhere-zero 5-flow conjecture; and a paper solving the two-paths problem (also introducing the cycle double cover conjecture), which was the engine behind much of Seymour's future work.
In 1980 he moved to Ohio State University, and began work with Neil Robertson. This led eventually to Seymour's most important accomplishment, the so-called "Graph Minors Project", a series of 23 papers (joint with Robertson), published over the next thirty years, with several significant results:
the graph minors structure theorem, that for any fixed graph, all graphs that do not contain it as a minor can be built from graphs that are essentially of bounded genus by piecing them together at small cutsets in a tree structure;
a proof of a conjecture of Wagner that in any infinite set of graphs, one of them is a minor of another (and consequently that any property of graphs that can be characterised by excluded minors can be characterised by a finite list of excluded minors);
a proof of a similar conjecture of Nash-Williams that in any infinite set of graphs, one of them can be immersed in another;
and polynomial-time algorithms to test if a graph contains a fixed graph as a minor, and to solve the k vertex-disjoint paths problem for all fixed k.
In about 1990 Robin Thomas began to work with Robertson and Seymour. Their collaboration resulted in several important joint papers over the next ten years:
a proof of a conjecture of Sachs, characterising by excluded minors the graphs that admit linkless embeddings in 3-space;
a proof that every graph that is not five-colourable has a six-vertex complete graph as a minor (the four-colour theorem is assumed to obtain this result, which is a case of Hadwiger's conjecture);
with Dan Sanders, a new, simplified, computer based proof of the four-colour theorem;
and a description of the bipartite graphs that admit Pfaffian orientations.
In the same period, Seymour and Thomas also published several significant results: (with Noga Alon) a separator theorem for graphs with an excluded minor, extending the planar separator theorem of Richard Lipton and Robert Tarjan; a paper characterizing treewidth in terms of brambles; and a polynomial-time algorithm to compute the branch-width of planar graphs.
In 2000 Robertson, Seymour, and Thomas were supported by the American Institute of Mathematics to work on the strong perfect graph conjecture, a famous open question that had been raised by Claude Berge in the early 1960s. Seymour's student Maria Chudnovsky joined them in 2001, and in 2002 the four jointly proved the conjecture. Seymour continued to work with Chudnovsky, and obtained several more results about induced subgraphs, in particular (with Cornuéjols, Liu, and Vušković) a polynomial-time algorithm to test whether a graph is perfect, and a general description of all claw-free graphs. Other important results in this period include: (with Seymour's student Sang-il Oum) fixed-parameter tractable algorithms to approximate the clique-width of graphs (within an exponential bound) and the branch-width of matroids (within a linear bound); and (with Chudnovsky) a proof that the roots of the independence polynomial of every claw-free graph are real.
In the 2010s Seymour worked mainly on χ-boundedness and the Erdős–Hajnal conjecture. In a series of papers with Alex Scott and partly with Chudnovsky, they proved two conjectures of András Gyárfás, that every graph with bounded clique number and sufficiently large chromatic number has an induced cycle of odd length at least five, and has an induced cycle of length at least any specified number. The series culminated in a paper of Scott and Seymour proving that for every fixed k, every graph with sufficiently large chromatic number contains either a large complete subgraph or induced cycles of all lengths modulo k, which leads to the resolutions of two conjectures of Gil Kalai and Roy Meshulam connecting the chromatic number of a graph with the homology of its independence complex. There was also a polynomial-time algorithm (with Chudnovsky, Scott, and Chudnovsky and Seymour's student Sophie Spirkl) to test whether a graph contains an induced cycle with length more than three and odd. Most recently, the four jointly resolved the 5-cycle case of the Erdős–Hajnal conjecture, which says that every graph without an induced copy of the 5-cycle contains an independent set or a clique of polynomial size.
Selected publications
See also
Robertson–Seymour theorem
Strong perfect graph theorem
References
External links
Paul Seymour home page at Princeton University
1950 births
Living people
20th-century American mathematicians
21st-century American mathematicians
Graph theorists
Alumni of Exeter College, Oxford
Ohio State University faculty
Princeton University faculty
People educated at Plymouth College
Fellows of Merton College, Oxford
Fellows of the Royal Society | Paul Seymour (mathematician) | [
"Mathematics"
] | 1,767 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
1,575,725 | https://en.wikipedia.org/wiki/K-type%20main-sequence%20star | A K-type main-sequence star, also referred to as a K-type dwarf, or orange dwarf, is a main-sequence (hydrogen-burning) star of spectral type K and luminosity class V. These stars are intermediate in size between red M-type main-sequence stars ("red dwarfs") and yellow/white G-type main-sequence stars. They have masses between 0.6 and 0.9 times the mass of the Sun and surface temperatures between 3,900 and 5,300 K. These stars are of particular interest in the search for extraterrestrial life due to their stability and long lifespan. They stay on the main sequence for up to 70 billion years, far longer than the universe has existed (13.8 billion years), so none have had sufficient time to leave the main sequence. Well-known examples include Toliman (K1 V) and Epsilon Indi (K5 V).
Nomenclature
In modern usage, the names applied to K-type main sequence stars vary. When explicitly defined, late K dwarfs are typically grouped with early to mid-M-class stars as red dwarfs, but in other cases red dwarf is restricted just to M-class stars. In some cases all K stars are included as red dwarfs, and occasionally even earlier stars. The term orange dwarf is often applied to early-K stars, but in some cases it is used for all K-type main sequence stars.
Spectral standard stars
The revised Yerkes Atlas system (Johnson & Morgan 1953) listed 12 K-type dwarf spectral standard stars; however, not all of these have survived to this day as standards. The "anchor points" of the MK classification system among the K-type main-sequence dwarf stars, i.e. those standard stars that have remained unchanged over the years, are:
Sigma Draconis (K0 V)
Epsilon Eridani (K2 V)
61 Cygni A (K5 V)
Other primary MK standard stars include:
70 Ophiuchi A (K0 V),
107 Piscium (K1 V)
HD 219134 (K3 V)
TW Piscis Austrini (K4 V)
HD 120467 (K6 V)
61 Cygni B (K7 V)
Based on the example set in some references (e.g. Johnson & Morgan 1953, Keenan & McNeil 1989), many authors consider the step between K7 V and M0 V to be a single subdivision, and the K8 and K9 classifications are rarely seen. A few examples such as HIP 111288 (K8V) and HIP 3261 (K9V) have been defined and used.
Planets
These stars are of particular interest in the search for extraterrestrial life because they are stable on the main sequence for a very long time (17–70 billion years, compared to 10 billion for the Sun). Like M-type stars, they tend to have a very small mass, leading to their extremely long lifespan that offers plenty of time for life to develop on orbiting Earth-like, terrestrial planets.
Some of the nearest K-type stars known to have planets include Epsilon Eridani, HD 192310, Gliese 86, and 54 Piscium.
K-type main-sequence stars are about three to four times as abundant as G-type main-sequence stars, making planet searches easier. K-type stars emit less total ultraviolet and other ionizing radiation than G-type stars like the Sun (which can damage DNA and thus hamper the emergence of nucleic acid based life). In fact, many peak in the red.
While M-type stars are the most abundant, they are more likely to have tidally locked planets in habitable-zone orbits and are more prone to producing solar flares and cold spots that would more easily strike nearby rocky planets, potentially making it much harder for life to develop. Due to their greater heat, the habitable zones of K-type stars are also much wider than those of M-type stars. For all of these reasons, they may be the most favorable stars to focus on in the search for exoplanets and extraterrestrial life.
Radiation hazard
Despite K-stars' lower total UV output, in order for their planets to have habitable temperatures, they must orbit much nearer to their K-star hosts, offsetting or reversing any advantage of a lower total UV output. There is also growing evidence that K-type dwarf stars emit dangerously high levels of X-rays and far ultraviolet (FUV) radiation for considerably longer into their early main sequence phase than do either heavier G-type stars or lighter early M-type dwarf stars. This prolonged radiation saturation period may sterilise, destroy the atmospheres of, or at least delay the emergence of life for Earth-like planets orbiting inside the habitable zones around K-type dwarf stars.
See also
Solar analog
Red dwarf
Yellow dwarf
Star count, survey of stars
References
Star types | K-type main-sequence star | [
"Astronomy"
] | 1,036 | [
"Star types",
"Astronomical classification systems"
] |
1,575,734 | https://en.wikipedia.org/wiki/F-type%20main-sequence%20star | An F-type main-sequence star (F V) is a main-sequence, hydrogen-fusing star of spectral type F and luminosity class V. These stars have from 1.0 to 1.4 times the mass of the Sun and surface temperatures between 6,000 and 7,600 K. This temperature range gives the F-type stars a whitish hue when observed through the atmosphere. Because a main-sequence star is referred to as a dwarf star, this class of star may also be termed a yellow-white dwarf (not to be confused with white dwarfs, remnant stars that are a possible final stage of stellar evolution). Notable examples include Procyon A, Gamma Virginis A and B, and KIC 8462852.
Spectral standard stars
The revised Yerkes Atlas system (Johnson & Morgan 1953) listed a dense grid of F-type dwarf spectral standard stars; however, not all of these have survived to this day as stable standards.
The anchor points of the MK spectral classification system among the F-type main-sequence dwarf stars, i.e. those standard stars that have remained unchanged over years and can be used to define the system, are considered to be 78 Ursae Majoris (F2 V) and Pi Orionis (F6 V). In addition to those two standards, Morgan & Keenan (1973) considered the following stars to be dagger standards: HR 1279 (F3 V), HD 27524 (F5 V), HD 27808 (F8 V), HD 27383 (F9 V), and Beta Virginis (F9 V).
Other primary MK standard stars include HD 23585 (F0 V), HD 26015 (F3 V), and HD 27534 (F5 V). Note that two Hyades members with almost identical HD designations (HD 27524 and HD 27534) are both considered strong F5 V standard stars, and indeed they share nearly identical colors and magnitudes.
Gray & Garrison (1989) provide a modern table of dwarf standards for the hotter F-type stars. F1 and F7 dwarf standard stars are rarely listed, but have changed slightly over the years among expert classifiers. Often-used standard stars in this class include 37 Ursae Majoris (F1 V) and Iota Piscium (F7 V). No F4 V standard stars have currently been officially published.
F9 V defines the boundary between the hot stars classified by Morgan, and the cooler stars classified by Keenan a step lower, and there are discrepancies in the literature on which stars define the F/G dwarf boundary. Morgan & Keenan (1973) listed Beta Virginis and HD 27383 as F9 V standards, but Keenan & McNeil (1989) listed HD 10647 as their F9 V standard instead.
Life cycle
F-type stars have a life-cycle similar to G-type stars. They are hydrogen-fusing and will eventually grow into a red giant that fuses helium instead of hydrogen once their supply of hydrogen is depleted. Stars in this mass range are not massive enough to ignite carbon; once the helium also runs out, they shed their outer layers, creating a planetary nebula, and leave behind, at the center of the nebula, a hot white dwarf. These stars remain stable for ~2-4 billion years. In comparison, G-type stars, like the Sun, stay stable for ~10 billion years.
Planets
Some of the nearest F-type stars known to support planets include Upsilon Andromedae, Tau Boötis, HD 10647, HD 33564, HD 142 and HD 60532.
Habitability
Some studies show that there is a possibility that life could also develop on planets that orbit an F-type star. It is estimated that the habitable zone of a relatively hot F0 star would extend from about 2.0 AU to 3.7 AU and between 1.1 and 2.2 AU for a relatively cool F8 star. However, relative to a G-type star the main problems for a hypothetical lifeform in this particular scenario would be the more intense light and the shorter stellar lifespan of the home star.
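A rough way to reproduce such estimates is to scale the Sun's habitable-zone boundaries by the square root of the stellar luminosity, a simple inverse-square-law sketch that ignores spectral effects such as the higher UV output discussed below. The solar boundary values and stellar luminosities below are illustrative assumptions, not figures from this article.

```python
import math

def habitable_zone_au(luminosity_lsun, inner_sun=0.95, outer_sun=1.67):
    """Inner and outer habitable-zone radii in AU, scaling commonly quoted
    solar boundaries by sqrt(L / L_sun)."""
    scale = math.sqrt(luminosity_lsun)
    return inner_sun * scale, outer_sun * scale

# Illustrative luminosities for a hot and a cool F dwarf, plus the Sun:
for name, lum in [("F0 V", 6.0), ("F8 V", 1.8), ("Sun (G2 V)", 1.0)]:
    inner, outer = habitable_zone_au(lum)
    print(f"{name}: {inner:.1f} - {outer:.1f} AU")
```

With these inputs the sketch gives roughly 2.3-4.1 AU for the F0 star and 1.3-2.2 AU for the F8 star, in the same range as the published estimates quoted above.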
F-type stars are known to emit much higher energy forms of light, such as UV radiation, which in the long term can have a profoundly negative effect on DNA molecules. Studies have shown that, for a hypothetical planet positioned at an equivalent habitable distance from an F-type star as the Earth is from the Sun (this is farther away from the F-type star, outside the habitable zone of a G2-type), and with a similar atmosphere, life on its surface would receive about 2.5 to 7.1 times more damage from UV light compared to that on Earth. Thus, for its native lifeforms to survive, the hypothetical planet would need to have sufficient atmospheric shielding, such as a denser ozone layer in the upper atmosphere. Without a robust ozone layer, life could theoretically develop on the planet's surface, but it would most likely be confined to underwater or underground regions, or to organisms with some form of external shielding (e.g. shells).
References
Star types | F-type main-sequence star | [
"Astronomy"
] | 1,069 | [
"Star types",
"Astronomical classification systems"
] |
1,575,738 | https://en.wikipedia.org/wiki/HD%2065216 | HD 65216 is a triple star system with two exoplanetary companions in the southern constellation of Carina. With an apparent visual magnitude of 7.97 it cannot be readily seen without technical aid, but it should be visible with binoculars or a telescope. The system is located at a distance of 114.7 light-years from the Sun based on parallax measurements, and is drifting further away with a radial velocity of 42.6 km/s.
The primary, component A, is an ordinary G-type main-sequence star with a stellar classification of G5V. It is nearly two billion years old and is spinning with a projected rotational velocity of 1.3 km/s. The star has 95% of the mass and 86% of the radius of the Sun. It is radiating 72% of the luminosity of the Sun from its photosphere at an effective temperature of 5,718 K.
In 2008 a co-moving pair of low-mass companions was discovered at an angular separation of from the primary, which is equivalent to a projected separation of at the distance of HD 65216. Component B is of class M7–8 () while component C is class L2–3 (); both have a mass close to the sub-stellar limit. The pair have a projected separation of from each other.
Planetary system
An extrasolar planet (designated HD 65216 b) was discovered orbiting the primary in 2003. A second, much more distant planet had been suspected since 2013, but was discovered on a completely different orbit in 2019.
See also
List of extrasolar planets
References
External links
G-type main-sequence stars
M-type main-sequence stars
Triple stars
Planetary systems with two confirmed planets
Carina (constellation)
Durchmusterung objects
065216
038558 | HD 65216 | [
"Astronomy"
] | 370 | [
"Carina (constellation)",
"Constellations"
] |
1,575,751 | https://en.wikipedia.org/wiki/A-type%20main-sequence%20star | An A-type main-sequence star (A V) or A dwarf star is a main-sequence (hydrogen-burning) star of spectral type A and luminosity class V (five). These stars have spectra defined by strong hydrogen Balmer absorption lines. They measure between 1.4 and 2.1 solar masses (M☉), have surface temperatures between 7,600 and 10,000 K, and live for about a quarter of the lifetime of our Sun. Bright and nearby examples are Altair (A7), Sirius A (A1), and Vega (A0). A-type stars do not have convective zones and thus are not expected to harbor magnetic dynamos. As a consequence, because they do not have strong stellar winds, they lack a means to generate X-ray emissions.
Spectral standard stars
The revised Yerkes Atlas system listed a dense grid of A-type dwarf spectral standard stars, but not all of these have survived to this day as standards. The "anchor points" and "dagger standards" of the MK spectral classification system among the A-type main-sequence dwarf stars, i.e. those standard stars that have remained unchanged over years and can be considered to define the system, are Vega (A0 V), Phecda (A0 V), and Fomalhaut (A3 V). The seminal review of MK classification by Morgan & Keenan (1973) didn't provide any dagger standards between types A3 V and F2 V. HD 23886 was suggested as an A5 V standard in 1978.
Richard Gray & Robert Garrison provided the most recent contributions to the A dwarf spectral sequence in a pair of papers in 1987 and 1989. They list an assortment of fast- and slow-rotating A-type dwarf spectral standards, including HD 45320 (A1 V), HD 88955 (A2 V), 2 Hydri (A7 V), 21 Leonis Minoris (A7 V), and 44 Ceti (A9 V). Besides the MK standards provided in Morgan's papers and the Gray & Garrison papers, one also occasionally sees Zosma (A4 V) listed as a standard. There are no published A6 V and A8 V standard stars.
Planets
A-type stars are young (typically few hundred million years old) and many emit infrared (IR) radiation beyond what would be expected from the star alone. This IR excess is attributable to dust emission from a debris disk where planets form.
Surveys indicate massive planets commonly form around A-type stars although these planets are difficult to detect using the Doppler spectroscopy method. This is because A-type stars typically rotate very quickly, which makes it difficult to measure the small Doppler shifts induced by orbiting planets since the spectral lines are very broad. However, this type of massive star eventually evolves into a cooler red giant which rotates more slowly and thus can be measured using the radial velocity method. As of early 2011 about 30 Jupiter class planets have been found around evolved K-giant stars including Pollux, Gamma Cephei and Iota Draconis. Doppler surveys around a wide variety of stars indicate about 1 in 6 stars having twice the mass of the Sun are orbited by one or more Jupiter-sized planets, compared to about 1 in 16 for Sun-like stars.
A-type star systems known to feature planets include HD 15082, Beta Pictoris, HR 8799 and HD 95086.
Examples
Within 40 light years:
Delta Capricorni is likely a subgiant or giant star, and Altair is a disputed subgiant. In addition, Sirius is the brightest star in the night sky.
See also
Star count, survey of stars
B-type main-sequence star
References
Star types | A-type main-sequence star | [
"Astronomy"
] | 781 | [
"Star types",
"Astronomical classification systems"
] |
1,575,807 | https://en.wikipedia.org/wiki/HD%20188015 | HD 188015 is a yellow-hued star with an exoplanetary companion in the northern constellation of Vulpecula. It has an apparent visual magnitude of 8.24, making it an 8th magnitude star that is too faint to be readily visible to the naked eye. The distance to this star can be estimated through parallax measurements, which yield a value of 165.6 light-years from the Sun.
This star was assigned a stellar classification of G5IV by J. F. Heard in 1956, matching the spectrum of an evolving G-type subgiant star. This suggests it has ceased or is about to stop hydrogen fusion in its core. The absolute magnitude of 4.47 lies just above the main sequence. It is estimated to be six billion years old and is chromospherically quiet with a projected rotational velocity of 5 km/s. The star is almost twice as metal-rich as the Sun. It has 1.1 times the mass and 1.2 times the radius of the Sun. HD 188015 is radiating 1.4 times the luminosity of the Sun from its photosphere at an effective temperature of 5,726 K.
Companions
A stellar common proper motion candidate was announced in 2006 and designated HD 188015 B. It is located at an angular separation of along a position angle of 85°. The photometric distance estimate for this object is , matching the primary within the margin of error. They have a projected separation of .
A Jovian planetary companion to this star was announced in 2005, based on radial velocity measurements indicating a periodic perturbation. It is orbiting the host star at a distance of with a period of and an eccentricity (ovalness) of 0.14. The inclination of the orbital plane remains unknown, so only a lower bound on the planet's mass can be determined. It has a minimum mass equal to 1.5 times the mass of Jupiter. The orbital path of this object intersects the habitable zone of the star, which is likely to eject any Earth-like planet from that region. Nevertheless, habitable moons are still possible in this system.
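The "minimum mass" wording reflects the standard radial-velocity mass function, in which only M sin i is observable when the orbital inclination is unknown. The sketch below uses the common approximation valid for a planet much lighter than its star; the semi-amplitude K is an illustrative assumption, not a value quoted in this article.

```python
import math

# Minimum planet mass from a radial-velocity orbit:
# m_p * sin(i) ~ K * sqrt(1 - e^2) * (P / (2*pi*G))^(1/3) * M_star^(2/3),
# valid when m_p << M_star. Inputs below are illustrative assumptions.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
DAY = 86400.0      # s

def min_mass_mjup(K, P_days, e, m_star_msun):
    """M sin i in Jupiter masses from semi-amplitude K (m/s), period, and
    eccentricity, for a star of the given mass."""
    P = P_days * DAY
    m_star = m_star_msun * M_SUN
    msini = K * math.sqrt(1 - e**2) * (P / (2 * math.pi * G)) ** (1 / 3) * m_star ** (2 / 3)
    return msini / M_JUP

# An assumed ~37 m/s signal with a ~456-day period around a 1.1 solar-mass
# star yields roughly 1.5 Jupiter masses, consistent with the text above:
print(min_mass_mjup(K=37.0, P_days=456.0, e=0.14, m_star_msun=1.1))
```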
See also
HD 187085
List of extrasolar planets
References
External links
G-type subgiants
Planetary systems with one confirmed planet
Vulpecula
Durchmusterung objects
188015
097769 | HD 188015 | [
"Astronomy"
] | 483 | [
"Vulpecula",
"Constellations"
] |
1,575,813 | https://en.wikipedia.org/wiki/Series%20expansion | In mathematics, a series expansion is a technique that expresses a function as an infinite sum, or series, of simpler functions. It is a method for calculating a function that cannot be expressed by just elementary operators (addition, subtraction, multiplication and division).
The resulting so-called series often can be limited to a finite number of terms, thus yielding an approximation of the function. The fewer terms of the sequence are used, the simpler this approximation will be. Often, the resulting inaccuracy (i.e., the partial sum of the omitted terms) can be described by an equation involving Big O notation (see also asymptotic expansion). The series expansion on an open interval will also be an approximation for non-analytic functions.
Types of series expansions
There are several kinds of series expansions, listed below.
Taylor series
A Taylor series is a power series based on a function's derivatives at a single point. More specifically, if a function $f$ is infinitely differentiable around a point $x_0$, then the Taylor series of $f$ around this point is given by
$$\sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n$$
under the convention $0^0 := 1$. The Maclaurin series of $f$ is its Taylor series about $x_0 = 0$.
Laurent series
A Laurent series is a generalization of the Taylor series, allowing terms with negative exponents; it takes the form $\sum_{k=-\infty}^{\infty} c_k (x - a)^k$ and converges in an annulus. In particular, a Laurent series can be used to examine the behavior of a complex function near a singularity by considering the series expansion on an annulus centered at the singularity.
Dirichlet series
A general Dirichlet series is a series of the form $\sum_{n=1}^{\infty} a_n e^{-\lambda_n s}$. One important special case of this is the ordinary Dirichlet series $\sum_{n=1}^{\infty} \frac{a_n}{n^s}$, used in number theory.
Fourier series
A Fourier series is an expansion of periodic functions as a sum of many sine and cosine functions. More specifically, the Fourier series of a function $f(x)$ of period $2L$ is given by the expression
$$\frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right)$$
where the coefficients are given by the formulae
$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx, \qquad b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx.$$
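As a hedged numerical check of these coefficient formulae, the following Python sketch approximates the sine coefficients of a square wave by midpoint-rule integration; for this wave the odd coefficients are known in closed form to be 4/(nπ).

```python
import math

# Approximate the Fourier sine coefficients b_n of a period-2L square wave
# by midpoint-rule integration of the formula quoted above.

L, N = math.pi, 100000  # half-period and number of integration points

def square(x):
    """Square wave: +1 on (0, L), -1 on (-L, 0), extended periodically."""
    return 1.0 if x % (2 * L) < L else -1.0

def b_n(f, n):
    h = 2 * L / N
    midpoints = (-L + (k + 0.5) * h for k in range(N))
    return (1 / L) * h * sum(f(x) * math.sin(n * math.pi * x / L) for x in midpoints)

for n in (1, 2, 3):
    print(n, round(b_n(square, n), 4))  # expect ~1.2732 (=4/pi), ~0, ~0.4244 (=4/(3*pi))
```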
Other series
In acoustics, e.g., the fundamental tone and the overtones together form an example of a Fourier series.
Newtonian series
Legendre polynomials: Used in physics to describe an arbitrary electrical field as a superposition of a dipole field, a quadrupole field, an octupole field, etc.
Zernike polynomials: Used in optics to calculate aberrations of optical systems. Each term in the series describes a particular type of aberration.
The Stirling series
$$\ln \Gamma(z) \sim \left(z - \tfrac{1}{2}\right)\ln z - z + \tfrac{1}{2}\ln(2\pi) + \sum_{n=1}^{\infty} \frac{B_{2n}}{2n(2n-1)z^{2n-1}}$$
is an approximation of the log-gamma function.
Examples
The following is the Taylor series of $e^x$:
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$
The Dirichlet series of the Riemann zeta function is
$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}.$$
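A short Python sketch evaluating truncated versions of both examples against their closed forms:

```python
import math

def exp_taylor(x, terms=20):
    """Partial sum of the Taylor series of e^x about 0."""
    return sum(x**n / math.factorial(n) for n in range(terms))

def zeta_partial(s, terms=10000):
    """Partial sum of the Dirichlet series of the Riemann zeta function."""
    return sum(1 / n**s for n in range(1, terms + 1))

print(exp_taylor(1.0), math.exp(1.0))   # both ~2.718281828...
print(zeta_partial(2), math.pi**2 / 6)  # ~1.6448; zeta(2) = pi^2/6 exactly
```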
References
Algebra
Polynomials
Mathematical analysis
Mathematical series | Series expansion | [
"Mathematics"
] | 531 | [
"Sequences and series",
"Mathematical analysis",
"Mathematical structures",
"Series (mathematics)",
"Calculus",
"Series expansions",
"Polynomials",
"Mathematical relations",
"Approximations",
"Algebra"
] |
1,575,834 | https://en.wikipedia.org/wiki/Silica%20fume | Silica fume, also known as microsilica, (CAS number 69012-64-2, EINECS number 273-761-1) is an amorphous (non-crystalline) polymorph of silicon dioxide, silica. It is an ultrafine powder collected as a by-product of the silicon and ferrosilicon alloy production and consists of spherical particles with an average particle diameter of 150 nm. The main field of application is as pozzolanic material for high performance concrete.
It is sometimes confused with fumed silica (also known as pyrogenic silica, CAS number 112945-52-5). However, the production process, particle characteristics and fields of application of fumed silica are all different from those of silica fume.
History
The first testing of silica fume in Portland-cement-based concretes was carried out in 1952. The biggest drawback to exploring the properties of silica fume was a lack of material with which to experiment. Early research used an expensive additive called fumed silica, an amorphous form of silica made by combustion of silicon tetrachloride in a hydrogen-oxygen flame. Silica fume on the other hand, is a very fine pozzolanic, amorphous material, a by-product of the production of elemental silicon or ferrosilicon alloys in electric arc furnaces. Before the late 1960s in Europe and the mid-1970s in the United States, silica fumes were simply vented into the atmosphere.
With the implementation of tougher environmental laws during the mid-1970s, silicon smelters began to collect the silica fume and search for its applications. The early work done in Norway received most of the attention, since it had shown that Portland cement-based-concretes containing silica fumes had very high strengths and low porosities. Since then the research and development of silica fume made it one of the world's most valuable and versatile admixtures for concrete and cementitious products.
Properties
Silica fume is an ultrafine material with spherical particles less than 1 μm in diameter, the average being about 0.15 μm. This makes it approximately 100 times smaller than the average cement particle. The bulk density of silica fume depends on the degree of densification in the silo and varies from 130 (undensified) to 600 kg/m3. The specific gravity of silica fume is generally in the range of 2.2 to 2.3. The specific surface area of silica fume can be measured with the BET method or nitrogen adsorption method. It typically ranges from 15,000 to 30,000 m2/kg.
Production
Silica fume is a byproduct in the carbothermic reduction of high-purity quartz with carbonaceous materials like coal, coke, wood-chips, in electric arc furnaces in the production of silicon and ferrosilicon alloys.
Applications
Concrete
Because of its extreme fineness and high silica content, silica fume is a very effective pozzolanic material. Standard specifications for silica fume used in cementitious mixtures are ASTM C1240, EN 13263.
Silica fume is added to Portland cement concrete to improve its properties, in particular its compressive strength, bond strength, and abrasion resistance. These improvements stem from both the mechanical improvements resulting from addition of a very fine powder to the cement paste mix as well as from the pozzolanic reactions between the silica fume and free calcium hydroxide in the paste.
Addition of silica fume also reduces the permeability of concrete to chloride ions, which protects the reinforcing steel of concrete from corrosion, especially in chloride-rich environments such as coastal regions and those of humid continental roadways and runways (because of the use of deicing salts) and saltwater bridges. Furthermore, silica fume has important uses in oil and gas operations. It can be used for a primary placement of grout as a hydraulic seal in the well bore, or in secondary applications such as remedial operations including leak repairs, splits, and closing of depleted zones.
Prior to the mid-1970s, nearly all silica fume was discharged into the atmosphere. After environmental concerns necessitated the collection and landfilling of silica fume, it became economically viable to use silica fume in various applications, in particular high-performance concrete. Effects of silica fume on different properties of fresh and hardened concrete include:
Workability: With the addition of silica fume, the slump loss with time is directly proportional to increase in the silica fume content due to the introduction of large surface area in the concrete mix by its addition. Although the slump decreases, the mix remains highly cohesive.
Segregation and bleeding: Silica fume reduces bleeding significantly because the free water is consumed in wetting of the large surface area of the silica fume and hence the free water left in the mix for bleeding also decreases. Silica fume also blocks the pores in the fresh concrete so water within the concrete is not allowed to come to the surface.
Silicon carbide
The silica fumes, as byproduct, may be used to produce silicon carbide.
See also
References
Further reading
External links
Silica Fume Association
Ceramic materials
Glass types
Silicon dioxide | Silica fume | [
"Engineering"
] | 1,118 | [
"Ceramic engineering",
"Ceramic materials"
] |
1,575,837 | https://en.wikipedia.org/wiki/B-type%20main-sequence%20star | A B-type main-sequence star (B V) is a main-sequence (hydrogen-burning) star of spectral type B and luminosity class V. These stars have from 2 to 16 times the mass of the Sun and surface temperatures between 10,000 and 30,000 K. B-type stars are extremely luminous and blue. Their spectra have strong neutral helium absorption lines, which are most prominent at the B2 subclass, and moderately strong hydrogen lines. Examples include Regulus, Algol A and Acrux.
History
This class of stars was introduced with the Harvard sequence of stellar spectra and published in the Revised Harvard photometry catalogue. The definition of B-type stars was the presence of non-ionized helium lines with the absence of singly ionized helium in the blue-violet portion of the spectrum. All of the spectral classes, including the B type, were subdivided with a numerical suffix that indicated the degree to which they approached the next classification. Thus B2 is 1/5 of the way from type B (or B0) to type A.
Later, however, more refined spectra showed lines of ionized helium for stars of type B0. Likewise, A0 stars also show weak lines of non-ionized helium. Subsequent catalogues of stellar spectra classified the stars based on the strengths of absorption lines at specific frequencies, or by comparing the strengths of different lines. Thus, in the MK Classification system, the spectral class B0 has the line at wavelength 439 nm being stronger than the line at 420 nm. The Balmer series of hydrogen lines grows stronger through the B class, then peak at type A2. The lines of ionized silicon are used to determine the sub-class of the B-type stars, while magnesium lines are used to distinguish between the temperature classes.
Properties
Type-B stars do not have a corona and lack a convection zone in their outer atmosphere. They have a higher mass loss rate than smaller stars such as the Sun, and their stellar wind has velocities of about 3,000 km/s. The energy generation in main-sequence B-type stars comes from the CNO cycle of thermonuclear fusion. Because the CNO cycle is very temperature sensitive, the energy generation is heavily concentrated at the center of the star, which results in a convection zone about the core. This results in a steady mixing of the hydrogen fuel with the helium byproduct of the nuclear fusion. Many B-type stars have a rapid rate of rotation, with an equatorial rotation velocity of about 200 km/s.
Be and B[e] stars
Spectral objects known as "Be stars" are massive yet non-supergiant stars that have, or had at some time, one or more Balmer lines in emission; the hydrogen-related electromagnetic radiation series projected out by these stars is of particular scientific interest. Be stars are generally thought to feature unusually strong stellar winds, high surface temperatures, and significant attrition of stellar mass as the objects rotate at a curiously rapid rate, all of this in contrast to many other main-sequence star types.
Objects known as B[e] stars are distinct from Be stars in having unusual neutral or low-ionization emission lines that are considered to have 'forbidden mechanisms', something denoted by the use of the square brackets. In other words, these particular stars' emissions appear to undergo processes not normally allowed under first-order perturbation theory in quantum mechanics. The definition of a B[e] star can include blue giants and blue supergiants.
Spectral standard stars
The revised Yerkes Atlas system (Johnson & Morgan 1953) listed a dense grid of B-type dwarf spectral standard stars; however, not all of these have survived to this day as standards. The "anchor points" of the MK spectral classification system among the B-type main-sequence dwarf stars, i.e. those standard stars that have remained unchanged since at least the 1940s, are Thabit (B0 V), Haedus (B3 V), and Alkaid (B3 V).
Besides these anchor standards, the seminal review of MK classification by Morgan & Keenan (1973) listed "dagger standards" of Paikauhale (B0 V), Omega Scorpii (B1 V), 42 Orionis (B1 V), 22 Scorpii (B3 V), Rho Aurigae (B5 V), and 18 Tauri (B8 V). The Revised MK Spectra Atlas of Morgan, Abt, & Tapscott (1978) further contributed the standards Acrab (B2 V), 29 Persei (B3 V), HD 36936 (B5 V), and HD 21071 (B7 V).
Gray & Garrison (1994) contributed two B9 V standards: Omega Fornacis and HR 2328. The only published B4 V standard is 90 Leonis, from Lesh (1968). There has been little agreement in the literature on choice of B6 V standard.
Chemical peculiarities
Some of the B-type stars of stellar class B0–B3 exhibit unusually strong lines of non-ionized helium. These chemically peculiar stars are termed helium-strong stars. These often have strong magnetic fields in their photosphere. In contrast, there are also helium-weak B-type stars, with understrength helium lines and strong hydrogen spectra. Other chemically peculiar B-type stars are the mercury-manganese stars, with spectral types B7–B9.
Planets
B-type stars known to have planets include the main-sequence B-type HIP 78530 and HD 129116.
See also
Herbig Ae/Be star
Star count
References
Star types | B-type main-sequence star | [
"Astronomy"
] | 1,182 | [
"Star types",
"Astronomical classification systems"
] |
1,575,863 | https://en.wikipedia.org/wiki/Penland%20School%20of%20Craft | The Penland School of Craft ("Penland" and formerly "Penland School of Crafts") is an Arts and Crafts educational center located in the Blue Ridge Mountains in Penland, North Carolina in the Snow Creek Township near Spruce Pine, about 50 miles from Asheville.
History
The school was founded in the 1920s in the isolated mountain town of Penland, Mitchell County, NC. In 1923, Lucy Morgan (1889–1981), a teacher at the Appalachian School who had recently learned to weave at Berea College, created an association to teach the craft to local women so they could earn income from their homes. The center, called Penland Weavers and Potters, provided instruction, looms, and materials. Local volunteers built a cabin and then a larger hall. In 1929, Penland was officially founded as the Penland School of Handicrafts after Edward F. Worst, a weaving expert and author of Foot Power Loom Weaving, visited the school to provide weaving instruction. Worst added classes in basketry and pottery.
Bill Brown, who took over in 1962 after Morgan, created a resident artist program and expanded the number and length of courses. There are 51 buildings on 400 acres. Penland buildings were designed primarily by North Carolinian architects, including Frank Harmon and Cannon Architects in Raleigh, North Carolina and Dixon Weinstein Architects in Chapel Hill.
The school campus was added to the National Register of Historic Places in 2003 as the Penland School Historic District. The district encompasses 31 contributing buildings, 1 contributing site, and 3 contributing structures. The district is characterized by one- and two-story frame farmhouses dating from the turn of the 20th century, associated agricultural outbuildings, and Rustic Revival style log buildings. Notable buildings include the Colonial Revival style Lily Loom House and Pines; the Craft Cabin; Homer Hall; Ridgeway; and Beacon Church.
Overview
Penland offers Spring, Summer, and Fall workshops in craft disciplines, including weaving and dyeing, bead work, glassblowing, pottery, paper making, metalworking, and woodworking. It also offers fine arts subjects, such as printmaking, painting, and photography. Workshops are taught by visiting American and international artists and professors, a tradition that started with Worst and continued until he died in 1949. Academic degrees are not awarded by Penland, but students can receive college credit through Western Carolina University (WCU). About 1,200 people study at Penland each year.
Penland holds an annual Community Day in early March, when the school's studios are open and visitors can work on a small project with the help of the artists.
An exhibition of works created at Penland was held at the Mint Museum.
References
Further reading
Bonnie Willis Ford. 1931 Weaving Institute at Penland. Hunter Library Digital Collections, Western Carolina University
Bonnie Willis Ford. 1932 Weaving Institute at Penland. Hunter Library Digital Collections, Western Carolina University
Appalachian Industrial School in the Mountains of North Carolina. Hunter Library Digital Collections, Western Carolina University
Appalachian Mountain Community Centre. Hunter Library Digital Collections, Western Carolina University
Records at Huntington Library Digital Collection. Hunter Library Digital Collections, Western Carolina University
McLaughlin, Jean, ed. Inspired: Life in Penland's Resident Artist and Core Fellowship Programs. Penland: Penland School of Crafts, 2016.
McLaughlin, Jean W., Mint Museum of Craft + Design, and Penland School of Crafts. The Nature of Craft and the Penland Experience. 1st ed. New York: Lark Books, 2004.
Morgan, Lucy and LeGette Blythe. Gift from the Hills: Miss Lucy Morgan's Story of Her Unique Penland School. First ed. New York: Bobbs-Merrill, 1958.
External links
Penland website
The Penland Experience
Art museums and galleries in North Carolina
Art schools in North Carolina
Crafts educators
Education in Mitchell County, North Carolina
Education in North Carolina
Educational institutions established in 1929
Tourist attractions in Mitchell County, North Carolina
School buildings on the National Register of Historic Places in North Carolina
Historic districts on the National Register of Historic Places in North Carolina
Colonial Revival architecture in North Carolina
Buildings and structures in Mitchell County, North Carolina
National Register of Historic Places in Mitchell County, North Carolina
Artist's retreats
1929 establishments in North Carolina
Glassmaking schools | Penland School of Craft | [
"Materials_science",
"Engineering"
] | 864 | [
"Glass engineering and science",
"Glassmaking schools"
] |
1,575,913 | https://en.wikipedia.org/wiki/Grism | A grism (also called a grating prism) is a combination of a prism and grating arranged so that light at a chosen central wavelength passes straight through. The advantage of this arrangement is that one and the same camera can be used both for imaging (without the grism) and spectroscopy (with the grism) without having to be moved. Grisms are inserted into a camera beam that is already collimated. They then create a dispersed spectrum centered on the object's location in the camera's field of view.
The resolution of a grism is proportional to the tangent of the wedge angle of the prism in much the same way as the resolutions of gratings are proportional to the angle between the input and the normal to the grating.
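As a rough quantitative sketch (the notation here is introduced for illustration and is not taken from this article): for grating groove spacing $d$, prism refractive index $n$, wedge angle $\alpha$ and diffraction order $m$, the undeviated central wavelength $\lambda_c$ of a grism satisfies, to first order,
$$m \lambda_c \approx d\,(n - 1)\sin\alpha,$$
so a larger wedge angle shifts the straight-through wavelength, while (as noted above) the achievable resolution grows with $\tan\alpha$.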
The dispersed wavefront sensing system (part of the NIRCam instrument) on the James Webb Space Telescope uses grisms. The system allows coarse optical path length matching between the different mirror segments.
See also
Diffraction grating
Echelle grating
Slitless spectroscopy
References
Sources
Kitchin, C. R.: Astrophysical Techniques. CRC Press 2009.
Space Telescope Science Institute
Lesson on Spectrograph
Prisms (optics)
Diffraction
Diffraction gratings | Grism | [
"Physics",
"Chemistry",
"Materials_science",
"Astronomy"
] | 252 | [
"Spectroscopy stubs",
"Spectrum (physical sciences)",
"Astronomy stubs",
"Diffraction",
"Crystallography",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
1,576,209 | https://en.wikipedia.org/wiki/Variadic%20function | In mathematics and in computer programming, a variadic function is a function of indefinite arity, i.e., one which accepts a variable number of arguments. Support for variadic functions differs widely among programming languages.
The term variadic is a neologism, dating back to 1936–1937. The term was not widely used until the 1970s.
Overview
There are many mathematical and logical operations that come across naturally as variadic functions. For instance, the summing of numbers or the concatenation of strings or other sequences are operations that can be thought of as applicable to any number of operands (even though formally in these cases the associative property is applied).
Another operation that has been implemented as a variadic function in many languages is output formatting. The C function printf and the Common Lisp function format are two such examples. Both take one argument that specifies the formatting of the output, and any number of arguments that provide the values to be formatted.
Variadic functions can expose type-safety problems in some languages. For instance, C's printf, if used incautiously, can give rise to a class of security holes known as format string attacks. The attack is possible because the language support for variadic functions is not type-safe: it permits the function to attempt to pop more arguments off the stack than were placed there, corrupting the stack and leading to unexpected behavior. As a consequence of this, the CERT Coordination Center considers variadic functions in C to be a high-severity security risk.
In functional programming languages, variadics can be considered complementary to the apply function, which takes a function and a list/sequence/array as arguments, and calls the function with the arguments supplied in that list, thus passing a variable number of arguments to the function. In the functional language Haskell, variadic functions can be implemented by returning a value of a suitable type class; if instances of that class include both a final return value and a function consuming a further argument, any number of additional arguments can be accepted.
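For illustration, the packing/unpacking duality can be sketched in Python (the names apply and total below are illustrative, not from a standard library):
def apply(f, args):
    # Unpack a sequence of arguments into a call: the complement of packing.
    return f(*args)

def total(*numbers):
    # Variadic: the positional arguments are packed into the tuple `numbers`.
    return sum(numbers)

print(total(1, 2, 3))           # 6 -- arguments packed by the callee
print(apply(total, [1, 2, 3]))  # 6 -- arguments unpacked by the caller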
A related subject in term rewriting research is called hedges, or hedge variables. Unlike variadics, which are functions with arguments, hedges are sequences of arguments themselves. They also can have constraints ('take no more than 4 arguments', for example) to the point where they are not variable-length (such as 'take exactly 4 arguments') - thus calling them variadics can be misleading. However they are referring to the same phenomenon, and sometimes the phrasing is mixed, resulting in names such as variadic variable (synonymous to hedge). Note the double meaning of the word variable and the difference between arguments and variables in functional programming and term rewriting. For example, a term (function) can have three variables, one of them a hedge, thus allowing the term to take three or more arguments (or two or more if the hedge is allowed to be empty).
Examples
In C
To portably implement variadic functions in the C language, the standard header file stdarg.h is used. The older varargs.h header has been deprecated in favor of stdarg.h. In C++, the header file cstdarg is used.
#include <stdarg.h>
#include <stdio.h>
double average(int count, ...) {
va_list ap;
int j;
double sum = 0;
va_start(ap, count); /* Before C23: Requires the last fixed parameter (to get the address) */
for (j = 0; j < count; j++) {
sum += va_arg(ap, int); /* Increments ap to the next argument. */
}
va_end(ap);
return sum / count;
}
int main(int argc, char const *argv[]) {
printf("%f\n", average(3, 1, 2, 3));
return 0;
}
This will compute the average of an arbitrary number of arguments. Note that the function does not know the number of arguments or their types. The above function expects that the types will be int, and that the number of arguments is passed in the first argument (this is a frequent usage but by no means enforced by the language or compiler). In some other cases, for example printf, the number and types of arguments are figured out from a format string. In both cases, this depends on the programmer to supply the correct information. (Alternatively, a sentinel value such as -1 or NULL may be used to indicate the end of the parameter list.) If fewer arguments are passed in than the function believes, or the types of arguments are incorrect, this could cause it to read into invalid areas of memory and can lead to vulnerabilities like the format string attack. Depending on the system, even using NULL as a sentinel may encounter such problems; a dedicated null pointer of the correct target type may be used to avoid them.
stdarg.h declares a type, va_list, and defines four macros: va_start, va_arg, va_copy, and va_end. Each invocation of va_start and va_copy must be matched by a corresponding invocation of va_end. When working with variable arguments, a function normally declares a variable of type va_list (ap in the example) that will be manipulated by the macros.
va_start takes two arguments, a va_list object and a reference to the function's last fixed parameter (the one before the ellipsis; the macro uses this to get its bearings). In C23, the second argument will no longer be required and variadic functions will no longer need a named parameter before the ellipsis. It initialises the va_list object for use by va_arg and va_end. The compiler will normally issue a warning if the reference is incorrect (e.g. a reference to a different parameter than the last one, or a reference to a wholly different object), but will not prevent compilation from completing normally.
va_arg takes two arguments, a va_list object (previously initialised) and a type descriptor. It expands to the next variable argument, which has the specified type. Successive invocations of va_arg allow processing each of the variable arguments in turn. Unspecified behavior occurs if the type is incorrect or there is no next variable argument.
va_end takes one argument, a va_list object. It serves to clean up. If one wanted to, for instance, scan the variable arguments more than once, the programmer would re-initialise the va_list object by invoking va_end and then va_start again on it.
va_copy takes two arguments, both of them va_list objects. It clones the second (which must have been initialised) into the first. Going back to the "scan the variable arguments more than once" example, this could be achieved by invoking va_start on a first va_list, then using va_copy to clone it into a second va_list. After scanning the variable arguments a first time with va_arg and the first va_list (disposing of it with va_end), the programmer could scan the variable arguments a second time with va_arg and the second va_list. va_end needs to also be called on the cloned va_list before the containing function returns.
In C#
C# describes variadic functions using the params keyword. A type must be provided for the arguments, although object[] can be used as a catch-all. At the calling site, the arguments can either be listed one by one, or a pre-existing array with the required element type can be handed over. Using the variadic form is syntactic sugar for the latter.
using System;
class Program
{
static int Foo(int a, int b, params int[] args)
{
// Return the sum of the integers in args, ignoring a and b.
int sum = 0;
foreach (int i in args)
sum += i;
return sum;
}
static void Main(string[] args)
{
Console.WriteLine(Foo(1, 2)); // 0
Console.WriteLine(Foo(1, 2, 3, 10, 20)); // 33
int[] manyValues = new int[] { 13, 14, 15 };
Console.WriteLine(Foo(1, 2, manyValues)); // 42
}
}
In C++
The basic variadic facility in C++ is largely identical to that in C. The only difference is in the syntax, where the comma before the ellipsis can be omitted. C++ allows variadic functions without named parameters but provides no way to access those arguments since va_start requires the name of the last fixed argument of the function.
#include <iostream>
#include <cstdarg>
void simple_printf(const char* fmt...) // C-style "const char* fmt, ..." is also valid
{
va_list args;
va_start(args, fmt);
while (*fmt != '\0') {
if (*fmt == 'd') {
int i = va_arg(args, int);
std::cout << i << '\n';
} else if (*fmt == 'c') {
// note automatic conversion to integral type
int c = va_arg(args, int);
std::cout << static_cast<char>(c) << '\n';
} else if (*fmt == 'f') {
double d = va_arg(args, double);
std::cout << d << '\n';
}
++fmt;
}
va_end(args);
}
int main()
{
simple_printf("dcff", 3, 'a', 1.999, 42.5);
}
Variadic templates (parameter pack) can also be used in C++ with language built-in fold expressions.
#include <iostream>
template <typename... Ts>
void foo_print(Ts... args)
{
((std::cout << args << ' '), ...);
}
int main()
{
std::cout << std::boolalpha;
foo_print(1, 3.14f); // 1 3.14
foo_print("Foo", 'b', true, nullptr); // Foo b true nullptr
}
The CERT Coding Standards for C++ strongly prefers the use of variadic templates (parameter pack) in C++ over the C-style variadic function due to a lower risk of misuse.
In Go
Variadic functions in Go can be called with any number of trailing arguments. fmt.Println is a common variadic function; it uses an empty interface as a catch-all type.
package main
import "fmt"
// This variadic function takes an arbitrary number of ints as arguments.
func sum(nums ...int) {
fmt.Print("The sum of ", nums) // Also a variadic function.
total := 0
for _, num := range nums {
total += num
}
fmt.Println(" is", total) // Also a variadic function.
}
func main() {
// Variadic functions can be called in the usual way with individual
// arguments.
sum(1, 2) // "The sum of [1 2] is 3"
sum(1, 2, 3) // "The sum of [1 2 3] is 6"
// If you already have multiple args in a slice, apply them to a variadic
// function using func(slice...) like this.
nums := []int{1, 2, 3, 4}
sum(nums...) // "The sum of [1 2 3 4] is 10"
}
Output:
The sum of [1 2] is 3
The sum of [1 2 3] is 6
The sum of [1 2 3 4] is 10
In Java
As with C#, the Object type in Java is available as a catch-all.
public class Program {
// Variadic methods store any additional arguments they receive in an array.
// Consequentially, `printArgs` is actually a method with one parameter: a
// variable-length array of `String`s.
private static void printArgs(String... strings) {
for (String string : strings) {
System.out.println(string);
}
}
public static void main(String[] args) {
printArgs("hello"); // short for printArgs(["hello"])
printArgs("hello", "world"); // short for printArgs(["hello", "world"])
}
}
In JavaScript
JavaScript does not care about types of variadic arguments.
function sum(...numbers) {
return numbers.reduce((a, b) => a + b, 0);
}
console.log(sum(1, 2, 3)); // 6
console.log(sum(3, 2)); // 5
console.log(sum()); // 0
It is also possible to create a variadic function using the arguments object, although it is only usable with functions created with the function keyword.
function sum() {
return Array.prototype.reduce.call(arguments, (a, b) => a + b, 0);
}
console.log(sum(1, 2, 3)); // 6
console.log(sum(3, 2)); // 5
console.log(sum()); // 0
In Lua
Lua functions may pass varargs to other functions the same way as other values using the ... expression. Tables can be passed into variadic functions by using table.unpack in Lua version 5.2 or higher, or unpack in Lua 5.1 or lower. Varargs can be used as a table by constructing a table with the vararg as a value.
function sum(...) --... designates varargs
local sum=0
for _,v in pairs({...}) do --creating a table with a varargs is the same as creating one with standard values
sum=sum+v
end
return sum
end
values={1,2,3,4}
sum(5,table.unpack(values)) --returns 15. table.unpack should go after any other arguments, otherwise not all values will be passed into the function.
function add5(...)
return ...+5 --this is incorrect usage of varargs, and will only return the first value provided
end
entries={}
function process_entries()
local processed={}
for i,v in pairs(entries) do
processed[i]=v --placeholder processing code
end
return table.unpack(processed) --returns all entries in a way that can be used as a vararg
end
print(process_entries()) --the print function takes all varargs and writes them to stdout separated by newlines
In Pascal
Pascal is standardized by ISO standards 7185 (“Standard Pascal”) and 10206 (“Extended Pascal”).
Neither standardized form of Pascal supports variadic routines, except for certain built-in routines (read/readln and write/writeln, with further variadic built-ins in Extended Pascal).
Nonetheless, dialects of Pascal implement mechanisms resembling variadic routines.
Delphi defines an array of const data type that may be associated with the last formal parameter.
Within the routine definition the array of const is an array of TVarRec, an array of variant records.
The VType member of that record allows inspection of each argument's data type and subsequent appropriate handling.
The Free Pascal Compiler supports Delphi’s variadic routines, too.
This implementation, however, technically requires a single argument that is an array of const.
Pascal imposes the restriction that arrays need to be homogeneous.
This requirement is circumvented by utilizing a variant record.
The GNU Pascal defines a real variadic formal parameter specification using an ellipsis (...), but as of 2022 no portable mechanism to use such has been defined.
Both GNU Pascal and FreePascal allow externally declared functions to use a variadic formal parameter specification using an ellipsis (...).
In PHP
PHP does not care about types of variadic arguments unless the argument is typed.
function sum(...$nums): int
{
return array_sum($nums);
}
echo sum(1, 2, 3); // 6
And typed variadic arguments:
function sum(int ...$nums): int
{
return array_sum($nums);
}
echo sum(1, 'a', 3); // TypeError: Argument 2 passed to sum() must be of the type int (since PHP 7.3)
In Python
Python does not care about types of variadic arguments.
def foo(a, b, *args):
print(args) # args is a tuple (immutable sequence).
foo(1, 2) # ()
foo(1, 2, 3) # (3,)
foo(1, 2, 3, "hello") # (3, "hello")
Keyword arguments can be stored in a dictionary, e.g. def foo(**kwargs).
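A short sketch in the same style as the positional example above (the function name bar is illustrative):
def bar(a, b, **kwargs):
    print(kwargs)  # kwargs is a dict of the extra keyword arguments

bar(1, 2)                 # {}
bar(1, 2, x=3)            # {'x': 3}
bar(1, 2, x=3, y="hello") # {'x': 3, 'y': 'hello'}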
In Raku
In Raku, the type of parameters that create variadic functions are known as slurpy array parameters and they're classified into three groups:
Flattened slurpy
These parameters are declared with a single asterisk (*) and they flatten arguments by dissolving one or more layers of elements that can be iterated over (i.e, Iterables).
sub foo($a, $b, *@args) {
say @args.perl;
}
foo(1, 2) # []
foo(1, 2, 3) # [3]
foo(1, 2, 3, "hello") # [3 "hello"]
foo(1, 2, 3, [4, 5], [6]); # [3, 4, 5, 6]
Unflattened slurpy
These parameters are declared with two asterisks (**) and they do not flatten any iterable arguments within the list, but keep the arguments more or less as-is:
sub bar($a, $b, **@args) {
say @args.perl;
}
bar(1, 2); # []
bar(1, 2, 3); # [3]
bar(1, 2, 3, "hello"); # [3 "hello"]
bar(1, 2, 3, [4, 5], [6]); # [3, [4, 5], [6]]
Contextual slurpy
These parameters are declared with a plus (+) sign and they apply the "single argument rule", which decides how to handle the slurpy argument based upon context. Simply put, if only a single argument is passed and that argument is iterable, that argument is used to fill the slurpy parameter array. In any other case, +@ works like **@ (i.e., unflattened slurpy).
sub zaz($a, $b, +@args) {
say @args.perl;
}
zaz(1, 2); # []
zaz(1, 2, 3); # [3]
zaz(1, 2, 3, "hello"); # [3 "hello"]
zaz(1, 2, [4, 5]); # [4, 5], single argument fills up array
zaz(1, 2, 3, [4, 5]); # [3, [4, 5]], behaving as **@
zaz(1, 2, 3, [4, 5], [6]); # [3, [4, 5], [6]], behaving as **@
In Ruby
Ruby does not care about types of variadic arguments.
def foo(*args)
print args
end
foo(1)
# prints `[1]=> nil`
foo(1, 2)
# prints `[1, 2]=> nil`
In Rust
Rust does not support variadic arguments in functions. Instead, it uses macros.
macro_rules! calculate {
// The pattern for a single `eval`
(eval $e:expr) => {{
{
let val: usize = $e; // Force types to be integers
println!("{} = {}", stringify!{$e}, val);
}
}};
// Decompose multiple `eval`s recursively
(eval $e:expr, $(eval $es:expr),+) => {{
calculate! { eval $e }
calculate! { $(eval $es),+ }
}};
}
fn main() {
calculate! { // Look ma! Variadic `calculate!`!
eval 1 + 2,
eval 3 + 4,
eval (2 * 3) + 1
}
}
Rust is able to interact with C's variadic system via a feature switch. As with other C interfaces, the system is considered unsafe to Rust.
In Scala
object Program {
// Variadic methods store any additional arguments they receive in an array.
// Consequentially, `printArgs` is actually a method with one parameter: a
// variable-length array of `String`s.
private def printArgs(strings: String*): Unit = {
strings.foreach(println)
}
def main(args: Array[String]): Unit = {
printArgs("hello"); // short for printArgs(["hello"])
printArgs("hello", "world"); // short for printArgs(["hello", "world"])
}
}
In Swift
Swift cares about the type of variadic arguments, but the catch-all Any type is available.
func greet(timeOfTheDay: String, names: String...) {
// here, names is [String]
print("Looks like we have \(names.count) people")
for name in names {
print("Hello \(name), good \(timeOfTheDay)")
}
}
greet(timeOfTheDay: "morning", names: "Joseph", "Clara", "William", "Maria")
// Output:
// Looks like we have 4 people
// Hello Joseph, good morning
// Hello Clara, good morning
// Hello William, good morning
// Hello Maria, good morning
In Tcl
A Tcl procedure or lambda is variadic when its last argument is args: this will contain a list (possibly empty) of all the remaining arguments. This pattern is common in many other procedure-like methods.
proc greet {timeOfTheDay args} {
puts "Looks like we have [llength $args] people"
foreach name $args {
puts "Hello $name, good $timeOfTheDay"
}
}
greet "morning" "Joseph" "Clara" "William" "Maria"
# Output:
# Looks like we have 4 people
# Hello Joseph, good morning
# Hello Clara, good morning
# Hello William, good morning
# Hello Maria, good morning
See also
Varargs in Java programming language
Variadic macro (C programming language)
Variadic template
Notes
References
External links
Variadic function. Rosetta Code task showing the implementation of variadic functions in over 120 programming languages.
Variable Argument Functions — A tutorial on Variable Argument Functions for C++
GNU libc manual
Subroutines
Programming language comparisons
Articles with example C code
Articles with example C++ code
Articles with example C Sharp code
Articles with example Haskell code
Articles with example Java code
Articles with example JavaScript code
Articles with example Pascal code
Articles with example Perl code
Articles with example Python (programming language) code
Articles with example Ruby code
Articles with example Rust code
Articles with example Swift code
Articles with example Tcl code | Variadic function | [
"Technology"
] | 5,161 | [
"Programming language comparisons",
"Computing comparisons"
] |
1,576,293 | https://en.wikipedia.org/wiki/R%C3%B6ssler%20attractor | The Rössler attractor () is the attractor for the Rössler system, a system of three non-linear ordinary differential equations originally studied by Otto Rössler in the 1970s. These differential equations define a continuous-time dynamical system that exhibits chaotic dynamics associated with the fractal properties of the attractor. Rössler interpreted it as a formalization of a taffy-pulling machine.
Some properties of the Rössler system can be deduced via linear methods such as eigenvectors, but the main features of the system require non-linear methods such as Poincaré maps and bifurcation diagrams. The original Rössler paper states the Rössler attractor was intended to behave similarly to the Lorenz attractor, but also be easier to analyze qualitatively. An orbit within the attractor follows an outward spiral close to the x-y plane around an unstable fixed point. Once the graph spirals out enough, a second fixed point influences the graph, causing a rise and twist in the z-dimension. In the time domain, it becomes apparent that although each variable is oscillating within a fixed range of values, the oscillations are chaotic. This attractor has some similarities to the Lorenz attractor, but is simpler and has only one manifold. Otto Rössler designed the Rössler attractor in 1976, but the originally theoretical equations were later found to be useful in modeling equilibrium in chemical reactions.
Definition
The defining equations of the Rössler system are:
dx/dt = -y - z
dy/dt = x + ay
dz/dt = b + z(x - c)
Rössler studied the chaotic attractor with a = 0.2, b = 0.2, and c = 5.7, though properties with a = 0.1, b = 0.1, and c = 14 have been more commonly used since. Another line of the parameter space was investigated using topological analysis; it corresponds to fixed values of b and c, with a chosen as the bifurcation parameter. How Rössler discovered this set of equations was investigated by Letellier and Messager.
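For readers who want to reproduce the attractor numerically, the following Python sketch integrates the system with SciPy's general-purpose initial-value solver (the initial condition and sampling grid are arbitrary illustrative choices, not values from the source):
import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, state, a=0.2, b=0.2, c=5.7):
    # Right-hand side of the Rossler system with Rossler's original parameters.
    x, y, z = state
    return [-y - z, x + a * y, b + z * (x - c)]

t = np.linspace(0, 500, 50000)
sol = solve_ivp(rossler, (t[0], t[-1]), [1.0, 1.0, 0.0], t_eval=t)
x, y, z = sol.y  # trajectory samples; discard early samples to drop transients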
Stability analysis
Some of the Rössler attractor's elegance is due to two of its equations being linear; setting z = 0 allows examination of the behavior on the x-y plane:
dx/dt = -y
dy/dt = x + ay
The stability in the x-y plane can then be found by calculating the eigenvalues of the Jacobian [[0, -1], [1, a]], which are (a ± sqrt(a^2 - 4))/2. From this, we can see that when 0 < a < 2, the eigenvalues are complex and both have a positive real component, making the origin unstable with an outwards spiral on the x-y plane. Now consider the z behavior within the context of this range for a. As long as x is smaller than c, the (x - c) factor in the equation for dz/dt keeps z small and hence the orbit close to the x-y plane. As the orbit reaches x greater than c, the z-values begin to climb. As z climbs, though, the -z in the equation for dx/dt stops the growth in x.
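This in-plane analysis is easy to check numerically; a small NumPy sketch (a = 0.2 chosen for illustration):
import numpy as np

# Jacobian of the linear x-y subsystem is [[0, -1], [1, a]]; for a = 0.2 the
# eigenvalues are ~0.1 +/- 0.995i: complex with positive real part, an outward spiral.
print(np.linalg.eigvals([[0.0, -1.0], [1.0, 0.2]]))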
Fixed points
In order to find the fixed points, the three Rössler equations are set to zero and the (x, y, z) coordinates of each fixed point are determined by solving the resulting equations. This yields the general equations for the fixed point coordinates:
x = (c ± sqrt(c^2 - 4ab)) / 2
y = -(c ± sqrt(c^2 - 4ab)) / (2a)
z = (c ± sqrt(c^2 - 4ab)) / (2a)
These can in turn be used to compute the actual fixed points for a given set of parameter values:
As shown in the general plots of the Rössler Attractor above, one of these fixed points resides in the center of the attractor loop and the other lies relatively far from the attractor.
Eigenvalues and eigenvectors
The stability of each of these fixed points can be analyzed by determining their respective eigenvalues and eigenvectors. Beginning with the Jacobian:
[[0, -1, -1], [1, a, 0], [z, 0, x - c]]
the eigenvalues can be determined by solving the following cubic:
-λ^3 + λ^2 (a + x - c) + λ (ac - ax - 1 - z) + (x - c + az) = 0
For the centrally located fixed point, Rössler's original parameter values of a=0.2, b=0.2, and c=5.7 yield eigenvalues of:
The magnitude of a negative eigenvalue characterizes the level of attraction along the corresponding eigenvector. Similarly the magnitude of a positive eigenvalue characterizes the level of repulsion along the corresponding eigenvector.
The eigenvectors corresponding to these eigenvalues are:
These eigenvectors have several interesting implications. First, the two eigenvalue/eigenvector pairs associated with the complex-conjugate eigenvalues are responsible for the steady outward slide that occurs in the main disk of the attractor. The last eigenvalue/eigenvector pair is attracting along an axis that runs through the center of the manifold and accounts for the z motion that occurs within the attractor. This effect is roughly demonstrated with the figure below.
The figure examines the central fixed point eigenvectors. The blue line corresponds to the standard Rössler attractor generated with a = 0.2, b = 0.2, and c = 5.7. The red dot in the center of this attractor is the fixed point. The red line intersecting that fixed point is an illustration of the repulsing plane generated by the two repelling eigenvectors. The green line is an illustration of the attracting eigenvector. The magenta line is generated by stepping backwards through time from a point on the attracting eigenvector slightly above the fixed point – it illustrates the behavior of points that become completely dominated by that vector. Note that the magenta line nearly touches the plane of the attractor before being pulled upwards into the fixed point; this suggests that the general appearance and behavior of the Rössler attractor is largely a product of the interaction between the attracting eigenvector and the repelling plane. Specifically it implies that a sequence generated from the Rössler equations will begin to loop around the fixed point, start being pulled upwards into the attracting vector, creating the upward arm of a curve that bends slightly inward toward the vector before being pushed outward again as it is pulled back towards the repelling plane.
For the outlier fixed point, Rössler's original parameter values of a = 0.2, b = 0.2, and c = 5.7 yield eigenvalues of:
The eigenvectors corresponding to these eigenvalues are:
Although these eigenvalues and eigenvectors exist in the Rössler attractor, their influence is confined to iterations of the Rössler system whose initial conditions are in the general vicinity of this outlier fixed point. Except in those cases where the initial conditions lie on the attracting plane generated by the attracting eigenvectors, this influence effectively involves pushing the resulting system towards the general Rössler attractor. As the resulting sequence approaches the central fixed point and the attractor itself, the influence of this distant fixed point (and its eigenvectors) will wane.
Poincaré map
The Poincaré map is constructed by plotting the value of the function every time it passes through a set plane in a specific direction. An example would be plotting the y, z values every time the trajectory passes through the x = 0 plane where x is changing from negative to positive, as is commonly done when studying the Lorenz attractor. In the case of the Rössler attractor, much of the section is uneventful, since away from the twist the orbit crosses close to z = 0 due to the nature of the Rössler equations. In the x = 0 plane for a = 0.1, b = 0.1, c = 14, the Poincaré map shows the upswing in z values as the orbit passes through the twist region, as is to be expected due to the upswing and twist section of the Rössler plot. The number of points in this specific Poincaré plot is infinite, but when a different c value is used, the number of points can vary. For example, with a c value of 4, there is only one point on the Poincaré map, because the function yields a periodic orbit of period one; if the c value is instead set to 12.8, there would be six points corresponding to a period-six orbit.
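A section of this kind can be extracted from a sampled trajectory by detecting sign changes; a minimal Python sketch (no interpolation across the crossing; the arrays are assumed to come from an integration such as the sketch in the Definition section):
def poincare_section(x, y, z):
    # (y, z) samples where the orbit crosses x = 0 with x increasing.
    return [(y[i], z[i]) for i in range(1, len(x)) if x[i - 1] < 0 <= x[i]]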
Lorenz map
The Lorenz map is the relation between successive maxima of a coordinate along a trajectory. Consider a trajectory on the attractor, and let M_n be the n-th maximum of its x-coordinate. Then the scatterplot of M_{n+1} against M_n is almost a curve, meaning that knowing M_n one can almost exactly predict M_{n+1}.
Mapping local maxima
In the original paper on the Lorenz attractor, Edward Lorenz analyzed the local maxima of z against the immediately preceding local maxima. When visualized, the plot resembled the tent map, implying that similar analysis can be used between the map and attractor. For the Rössler attractor, when the local maximum of x is plotted against the next local maximum, the resulting plot (shown here for a = 0.2, b = 0.2, c = 5.7) is unimodal, resembling a skewed Hénon map. Knowing that the Rössler attractor can be used to create a pseudo 1-d map, it then follows to use similar analysis methods. The bifurcation diagram is a particularly useful analysis method.
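The corresponding computation is a few lines of Python; the sketch below (the helper name return_map is ours) assumes a sampled trajectory array like the one from the earlier integration sketch:
def return_map(x):
    # Successive local maxima M_n of a time series, paired as (M_n, M_{n+1}),
    # approximating the one-dimensional map discussed above.
    maxima = [x[i] for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    return list(zip(maxima[:-1], maxima[1:]))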
Variation of parameters
The Rössler attractor's behavior is largely a factor of the values of its constant parameters a, b, and c. In general, varying each parameter has a comparable effect by causing the system to converge toward a periodic orbit, fixed point, or escape towards infinity; however, the specific ranges and behaviors induced vary substantially for each parameter. Periodic orbits, or "unit cycles," of the Rössler system are defined by the number of loops around the central point that occur before the loops series begins to repeat itself.
Bifurcation diagrams are a common tool for analyzing the behavior of dynamical systems, of which the Rössler attractor is one. They are created by running the equations of the system, holding all but one of the variables constant and varying the last one. Then, a graph is plotted of the points that a particular value for the changed variable visits after transient factors have been neutralised. Chaotic regions are indicated by filled-in regions of the plot.
Varying a
Here, b is fixed at 0.2, c is fixed at 5.7 and a changes. Numerical examination of the attractor's behavior over changing a suggests it has a disproportional influence over the attractor's behavior. The results of the analysis are:
a ≤ 0: Converges to the centrally located fixed point
a = 0.1: Unit cycle of period 1
a = 0.2: Standard parameter value selected by Rössler, chaotic
a = 0.3: Chaotic attractor, significantly more Möbius strip-like (folding over itself).
a = 0.35: Similar to a = 0.3, but increasingly chaotic
a = 0.38: Similar to a = 0.35, but increasingly chaotic.
Varying b
Here, a is fixed at 0.2, c is fixed at 5.7 and b changes. As shown in the accompanying diagram, as b approaches 0 the attractor approaches infinity (note the upswing for very small values of b). Comparative to the other parameters, varying b generates a greater range when period-3 and period-6 orbits will occur. In contrast to a and c, higher values of b converge to period-1, not to a chaotic state.
Varying c
Here, a = b = 0.1 and c changes. The bifurcation diagram reveals that low values of c are periodic, but quickly become chaotic as c increases. This pattern repeats itself as c increases – there are sections of periodicity interspersed with periods of chaos, and the trend is towards higher-period orbits as c increases. For example, the period one orbit only appears for values of c around 4 and is never found again in the bifurcation diagram. The same phenomenon is seen with period three; until c = 12, period three orbits can be found, but thereafter, they do not appear.
A graphical illustration of the changing attractor over a range of c values illustrates the general behavior seen for all of these parameter analyses – the frequent transitions between periodicity and aperiodicity.
The above set of images illustrates the variations in the post-transient Rössler system as c is varied over a range of values. These images were generated with a = b = 0.1.
c = 4, period-1 orbit.
c = 6, period-2 orbit.
c = 8.5, period-4 orbit.
c = 8.7, period-8 orbit.
c = 9, sparse chaotic attractor.
c = 12, period-3 orbit.
c = 12.6, period-6 orbit.
c = 13, sparse chaotic attractor.
c = 18, period-5 orbit.
c = 18.1, filled-in chaotic attractor.
Periodic orbits
The attractor is filled densely with periodic orbits: solutions for which there exists a nonzero value of T such that the state satisfies x(t + T) = x(t). These interesting solutions can be numerically derived using Newton's method. Periodic orbits are the roots of the function Φ_T − Id, where Φ_T is the evolution by time T and Id is the identity. As the majority of the dynamics occurs in the x-y plane, the periodic orbits can then be classified by their winding number around the central equilibrium after projection.
It seems from numerical experimentation that there is a unique periodic orbit for all positive winding numbers. This lack of degeneracy likely stems from the problem's lack of symmetry. The attractor can be dissected into easier to digest invariant manifolds: 1D periodic orbits and the 2D stable and unstable manifolds of periodic orbits. These invariant manifolds are a natural skeleton of the attractor, just as rational numbers are to the real numbers.
For the purposes of dynamical systems theory, one might be interested in topological invariants of these manifolds. Periodic orbits are copies of the circle S^1 embedded in R^3, so their topological properties can be understood with knot theory. The periodic orbits with winding numbers 1 and 2 form a Hopf link, showing that no diffeomorphism can separate these orbits.
Links to other topics
The banding evident in the Rössler attractor is similar to a Cantor set rotated about its midpoint. Additionally, the half-twist that occurs in the Rössler attractor only affects a part of the attractor. Rössler showed that his attractor was in fact the combination of a "normal band" and a Möbius strip.
References
External links
Flash Animation using PovRay
Lorenz and Rössler attractors – Java animation
3D Attractors: Mac program to visualize and explore the Rössler and Lorenz attractors in 3 dimensions
Rössler attractor in Scholarpedia
Rössler Attractor : Numerical interactive experiment in 3D - experiences.math.cnrs.fr- (javascript/webgl)
Chaotic maps | Rössler attractor | [
"Mathematics"
] | 2,731 | [
"Functions and mappings",
"Mathematical objects",
"Mathematical relations",
"Chaotic maps",
"Dynamical systems"
] |
1,576,323 | https://en.wikipedia.org/wiki/Incidence%20geometry | In mathematics, incidence geometry is the study of incidence structures. A geometric structure such as the Euclidean plane is a complicated object that involves concepts such as length, angles, continuity, betweenness, and incidence. An incidence structure is what is obtained when all other concepts are removed and all that remains is the data about which points lie on which lines. Even with this severe limitation, theorems can be proved and interesting facts emerge concerning this structure. Such fundamental results remain valid when additional concepts are added to form a richer geometry. It sometimes happens that authors blur the distinction between a study and the objects of that study, so it is not surprising to find that some authors refer to incidence structures as incidence geometries.
Incidence structures arise naturally and have been studied in various areas of mathematics. Consequently, there are different terminologies to describe these objects. In graph theory they are called hypergraphs, and in combinatorial design theory they are called block designs. Besides the difference in terminology, each area approaches the subject differently and is interested in questions about these objects relevant to that discipline. Using geometric language, as is done in incidence geometry, shapes the topics and examples that are normally presented. It is, however, possible to translate the results from one discipline into the terminology of another, but this often leads to awkward and convoluted statements that do not appear to be natural outgrowths of the topics. In the examples selected for this article we use only those with a natural geometric flavor.
A special case that has generated much interest deals with finite sets of points in the Euclidean plane and what can be said about the number and types of (straight) lines they determine. Some results of this situation can extend to more general settings since only incidence properties are considered.
Incidence structures
An incidence structure consists of a set P whose elements are called points, a disjoint set L whose elements are called lines and an incidence relation I between them, that is, a subset of P × L whose elements are called flags. If (A, l) is a flag, we say that A is incident with l or that l is incident with A (the terminology is symmetric), and write A I l. Intuitively, a point and line are in this relation if and only if the point is on the line. Given a point B and a line m which do not form a flag, that is, the point is not on the line, the pair (B, m) is called an anti-flag.
Distance in an incidence structure
There is no natural concept of distance (a metric) in an incidence structure. However, a combinatorial metric does exist in the corresponding incidence graph (Levi graph), namely the length of the shortest path between two vertices in this bipartite graph. The distance between two objects of an incidence structure – two points, two lines or a point and a line – can be defined to be the distance between the corresponding vertices in the incidence graph of the incidence structure.
Another way to define a distance again uses a graph-theoretic notion in a related structure, this time the collinearity graph of the incidence structure. The vertices of the collinearity graph are the points of the incidence structure and two points are joined if there exists a line incident with both points. The distance between two points of the incidence structure can then be defined as their distance in the collinearity graph.
When distance is considered in an incidence structure, it is necessary to mention how it is being defined.
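To make the graph distances concrete, here is a self-contained Python sketch; the triangle example and all identifiers are illustrative choices, not from the source:
from collections import deque

def graph_distance(adjacency, start, goal):
    # Breadth-first search distance in an undirected graph given as a dict.
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbour in adjacency[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # start and goal lie in different connected components

# Incidence (Levi) graph of a triangle: points A, B, C and lines a, b, c,
# where line a contains B and C, line b contains A and C, line c contains A and B.
levi = {'A': ['b', 'c'], 'B': ['a', 'c'], 'C': ['a', 'b'],
        'a': ['B', 'C'], 'b': ['A', 'C'], 'c': ['A', 'B']}
print(graph_distance(levi, 'A', 'a'))  # 3: point A and its opposite line a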
Partial linear spaces
Incidence structures that are most studied are those that satisfy some additional properties (axioms), such as projective planes, affine planes, generalized polygons, partial geometries and near polygons. Very general incidence structures can be obtained by imposing "mild" conditions, such as:
A partial linear space is an incidence structure for which the following axioms are true:
Every pair of distinct points determines at most one line.
Every line contains at least two distinct points.
In a partial linear space it is also true that every pair of distinct lines meet in at most one point. This statement does not have to be assumed as it is readily proved from axiom one above.
Further constraints are provided by the regularity conditions:
RLk: Each line is incident with the same number of points. If finite this number is often denoted by k.
RPr: Each point is incident with the same number of lines. If finite this number is often denoted by r.
The second axiom of a partial linear space implies that k ≥ 2. Neither regularity condition implies the other, so it has to be assumed that r ≥ 2.
A finite partial linear space satisfying both regularity conditions with k, r ≥ 2 is called a tactical configuration. Some authors refer to these simply as configurations, or projective configurations. If a tactical configuration has n points and m lines, then, by double counting the flags, the relationship nr = mk is established. A common notation refers to (n_r, m_k)-configurations. In the special case where n = m (and hence, r = k) the notation (n_k, n_k) is often simply written as (n_k).
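The double-counting identity can be checked mechanically. A short Python sketch, using the Fano plane (a (7_3) configuration, introduced below) with one commonly used labelling (an assumption of this sketch, not from the source):
# Lines of the Fano plane over points 0..6 (one common labelling).
fano = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
flags_by_lines = sum(len(l) for l in fano)                          # m * k = 7 * 3
flags_by_points = sum(sum(p in l for l in fano) for p in range(7))  # n * r = 7 * 3
assert flags_by_lines == flags_by_points == 21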
A linear space is a partial linear space such that:
Every pair of distinct points determines exactly one line.
Some authors add a "non-degeneracy" (or "non-triviality") axiom to the definition of a (partial) linear space, such as:
There exist at least two distinct lines.
This is used to rule out some very small examples (mainly when the sets or have fewer than two elements) that would normally be exceptions to general statements made about the incidence structures. An alternative to adding the axiom is to refer to incidence structures that do not satisfy the axiom as being trivial and those that do as non-trivial.
Each non-trivial linear space contains at least three points and three lines, so the simplest non-trivial linear space that can exist is a triangle.
A linear space having at least three points on every line is a Sylvester–Gallai design.
Fundamental geometric examples
Some of the basic concepts and terminology arises from geometric examples, particularly projective planes and affine planes.
Projective planes
A projective plane is a linear space in which:
Every pair of distinct lines meet in exactly one point,
and that satisfies the non-degeneracy condition:
There exist four points, no three of which are collinear.
In a projective plane there is a bijection between the set of points and the set of lines. If the point set is finite, the projective plane is referred to as a finite projective plane. The order of a finite projective plane is n = k − 1, that is, one less than the number of points on a line. All known projective planes have orders that are prime powers. A projective plane of order n is an ((n^2 + n + 1)_(n + 1)) configuration.
The smallest projective plane has order two and is known as the Fano plane.
Fano plane
This famous incidence geometry was developed by the Italian mathematician Gino Fano. In his work on proving the independence of the set of axioms for projective n-space that he developed, he produced a finite three-dimensional space with 15 points, 35 lines and 15 planes, in which each line had only three points on it. The planes in this space consisted of seven points and seven lines and are now known as Fano planes.
The Fano plane cannot be represented in the Euclidean plane using only points and straight line segments (i.e., it is not realizable). This is a consequence of the Sylvester–Gallai theorem, according to which every realizable incidence geometry must include an ordinary line, a line containing only two points. The Fano plane has no such line (that is, it is a Sylvester–Gallai configuration), so it is not realizable.
A complete quadrangle consists of four points, no three of which are collinear. In the Fano plane, the three points not on a complete quadrangle are the diagonal points of that quadrangle and are collinear. This contradicts the Fano axiom, often used as an axiom for the Euclidean plane, which states that the three diagonal points of a complete quadrangle are never collinear.
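These defining incidence properties can be verified exhaustively for the Fano plane; a short Python sketch (the point labelling is one common convention, assumed here):
from itertools import combinations

fano = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

# Linear-space axiom: every pair of distinct points lies on exactly one line.
assert all(sum(p in l and q in l for l in fano) == 1
           for p, q in combinations(range(7), 2))

# Projective-plane axiom: every pair of distinct lines meets in exactly one point.
assert all(len(l & m) == 1 for l, m in combinations(fano, 2))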
Affine planes
An affine plane is a linear space satisfying:
For any point P and line l not incident with it (an anti-flag) there is exactly one line m incident with P (that is, P I m), that does not meet l (known as Playfair's axiom),
and satisfying the non-degeneracy condition:
There exists a triangle, i.e. three non-collinear points.
The lines l and m in the statement of Playfair's axiom are said to be parallel. Every affine plane can be uniquely extended to a projective plane. The order of a finite affine plane is k, the number of points on a line. An affine plane of order n is an ((n^2)_(n + 1), (n^2 + n)_n) configuration.
Hesse configuration
The affine plane of order three is a (9_4, 12_3) configuration. When embedded in some ambient space it is called the Hesse configuration. It is not realizable in the Euclidean plane but is realizable in the complex projective plane as the nine inflection points of an elliptic curve with the 12 lines incident with triples of these.
The 12 lines can be partitioned into four classes of three lines apiece where, in each class, the lines are mutually disjoint. These classes are called parallel classes of lines. Adding four new points, each being added to all the lines of a single parallel class (so all of these lines now intersect), and one new line containing just these four new points produces the projective plane of order three, a (13_4) configuration. Conversely, starting with the projective plane of order three (it is unique) and removing any single line and all the points on that line produces this affine plane of order three (it is also unique).
Removing one point and the four lines that pass through that point (but not the other points on them) produces the Möbius–Kantor configuration.
Partial geometries
Given an integer α ≥ 1, a tactical configuration satisfying:
For every anti-flag (B, m) there are α flags (A, l) such that B I l and A I m,
is called a partial geometry. If there are s + 1 points on a line and t + 1 lines through a point, the notation for a partial geometry is pg(s, t, α).
If α = 1 these partial geometries are generalized quadrangles.
If α = s + 1 these are called Steiner systems.
Generalized polygons
For n ≥ 2, a generalized n-gon is a partial linear space whose incidence graph Γ has the property:
The girth of Γ (the length of the shortest cycle) is twice the diameter of Γ (the largest distance between two vertices, n in this case).
A generalized 2-gon is an incidence structure, which is not a partial linear space, consisting of at least two points and two lines with every point being incident with every line. The incidence graph of a generalized 2-gon is a complete bipartite graph.
A generalized n-gon contains no ordinary m-gon for 2 ≤ m < n and for every pair of objects (two points, two lines or a point and a line) there is an ordinary n-gon that contains them both.
Generalized 3-gons are projective planes. Generalized 4-gons are called generalized quadrangles. By the Feit–Higman theorem the only finite generalized n-gons with at least three points per line and three lines per point have n = 2, 3, 4, 6 or 8.
Near polygons
For a non-negative integer d a near 2d-gon is an incidence structure such that:
The maximum distance (as measured in the collinearity graph) between two points is d, and
For every point X and line l there is a unique point on l that is closest to X.
A near 0-gon is a point, while a near 2-gon is a line. The collinearity graph of a near 2-gon is a complete graph. A near 4-gon is a generalized quadrangle (possibly degenerate). Every finite generalized polygon except the projective planes is a near polygon. Any connected bipartite graph is a near polygon and any near polygon with precisely two points per line is a connected bipartite graph. Also, all dual polar spaces are near polygons.
Many near polygons are related to finite simple groups like the Mathieu groups and the Janko group J2. Moreover, the generalized 2d-gons, which are related to Groups of Lie type, are special cases of near 2d-gons.
Möbius planes
An abstract Möbius plane (or inversive plane) is an incidence structure where, to avoid possible confusion with the terminology of the classical case, the lines are referred to as cycles or blocks.
Specifically, a Möbius plane is an incidence structure of points and cycles such that:
Every triple of distinct points is incident with precisely one cycle.
For any flag (P, z) and any point Q not incident with z there is a unique cycle z* with P I z*, Q I z*, meeting z only at P. (The cycles are said to touch at P.)
Every cycle has at least three points and there exists at least one cycle.
The incidence structure obtained at any point P of a Möbius plane by taking as points all the points other than P and as lines only those cycles that contain P (with P removed), is an affine plane. This structure is called the residual at P in design theory.
A finite Möbius plane of order m is a tactical configuration with k = m + 1 points per cycle that is a 3-design, specifically a 3-(m^2 + 1, m + 1, 1) block design.
Incidence theorems in the Euclidean plane
The Sylvester-Gallai theorem
A question raised by J.J. Sylvester in 1893 and finally settled by Tibor Gallai concerned incidences of a finite set of points in the Euclidean plane.
Theorem (Sylvester-Gallai): A finite set of points in the Euclidean plane is either collinear or there exists a line incident with exactly two of the points.
A line containing exactly two of the points is called an ordinary line in this context. Sylvester was probably led to the question while pondering about the embeddability of the Hesse configuration.
The de Bruijn–Erdős theorem
A related result is the de Bruijn–Erdős theorem. Nicolaas Govert de Bruijn and Paul Erdős proved the result in the more general setting of projective planes, but it still holds in the Euclidean plane. The theorem is:
In a projective plane, every non-collinear set of n points determines at least n distinct lines.
As the authors pointed out, since their proof was combinatorial, the result holds in a larger setting, in fact in any incidence geometry in which there is a unique line through every pair of distinct points. They also mention that the Euclidean plane version can be proved from the Sylvester-Gallai theorem using induction.
The Szemerédi–Trotter theorem
A bound on the number of flags determined by a finite set of points and the lines they determine is given by:
Theorem (Szemerédi–Trotter): given n points and m lines in the plane, the number of flags (incident point-line pairs) is:
O(n^(2/3) m^(2/3) + n + m),
and this bound cannot be improved, except in terms of the implicit constants.
This result can be used to prove Beck's theorem.
A similar bound for the number of incidences is conjectured for point-circle incidences, but only weaker upper bounds are known.
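The flavor of the bound can be explored by brute force on small examples. The following Python sketch (the grid size, the line normalisation, and the printed comparison are our own illustrative choices) counts the flags determined by a small integer grid:
from itertools import combinations
from math import gcd

k = 5
points = [(i, j) for i in range(k) for j in range(k)]

def line_through(p, q):
    # Normalised integer coefficients (a, b, c) of the line ax + by = c through p, q.
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = a * x1 + b * y1
    g = gcd(gcd(abs(a), abs(b)), abs(c)) or 1
    a, b, c = a // g, b // g, c // g
    if a < 0 or (a == 0 and b < 0):  # fix an overall sign so duplicates collapse
        a, b, c = -a, -b, -c
    return (a, b, c)

lines = {line_through(p, q) for p, q in combinations(points, 2)}
flags = sum(a * x + b * y == c for (a, b, c) in lines for (x, y) in points)
n, m = len(points), len(lines)
print(n, m, flags, (n * m) ** (2 / 3) + n + m)  # bound shape, implicit constant omitted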
Beck's theorem
Beck's theorem says that finite collections of points in the plane fall into one of two extremes; one where a large fraction of points lie on a single line, and one where a large number of lines are needed to connect all the points.
The theorem asserts the existence of positive constants C, K such that given any n points in the plane, at least one of the following statements is true:
There is a line that contains at least n/C of the points.
There exist at least n^2/K lines, each of which contains at least two of the points.
In Beck's original argument, C is 100 and K is an unspecified constant; it is not known what the optimal values of C and K are.
More examples
Projective geometries
Moufang polygon
Schläfli double six
Reye configuration
Cremona–Richmond configuration
Kummer configuration
Klein configuration
Non-Desarguesian planes
See also
Combinatorial designs
Finite geometry
Intersection theorem
Levi graph
Notes
References
External links
incidence system at the Encyclopedia of Mathematics | Incidence geometry | [
"Mathematics"
] | 3,250 | [
"Incidence geometry",
"Combinatorics"
] |
1,576,401 | https://en.wikipedia.org/wiki/Sir%20Dystic | Josh Buchbinder, better known as Sir Dystic, has been a member of Cult of the Dead Cow (cDc) since May 1997, and is the author of Back Orifice. He has also written several other hacker tools, including SMBRelay, NetE, and NBName.
Sir Dystic has appeared at multiple hacker conventions, both as a member of panels and speaking on his own. He has also been interviewed on several television and radio programs and in an award-winning short film about hacker culture in general and cDc in particular.
Dystic's pseudonym is taken from a somewhat obscure 1930s bondage comic character named "Sir Dystic D'Arcy." According to the cDc's Sir Dystic, his namesake "tried to do evil things but always bungles it and ends up doing good inadvertently."
Software
Back Orifice
Back Orifice (often shortened to BO) is a controversial computer program designed for remote system administration. It enables a user to control a computer running the Microsoft Windows operating system from a remote location. The name is a pun on Microsoft BackOffice Server software. The program debuted at DEF CON 6 on August 1, 1998. It was the brainchild of Sir Dystic, a member of the U.S. hacker organization Cult of the Dead Cow. According to the group, its purpose was to demonstrate the lack of security in Microsoft's operating system Windows 98.
According to Sir Dystic, "BO was supposed to be a statement about the fact that people feel secure and safe, although there are wide, gaping holes in both the operating system they're using and the means of defense they're using against hostile code. I mean, that was my message and BO2K really has a different message." Vnunet.com reported Sir Dystic's claim that this message was privately commended by employees of Microsoft.
SMBRelay & SMBRelay2
SMBRelay and SMBRelay2 are computer programs that can be used to carry out SMB man in the middle (mitm) attacks on Windows machines. They were written by Sir Dystic and released 21 March 2001 at the @lantacon convention in Atlanta, Georgia.
NBName
NBName is a computer program that can be used to carry out denial-of-service attacks that can disable NetBIOS services on Windows machines. It was written by Sir Dystic and released 29 July 2000 at the DEF CON 8 convention in Las Vegas. Sir Dystic reported the issue that NBName exploits to Microsoft; he was acknowledged in a security bulletin.
External links
References
Cult of the Dead Cow members
People associated with computer security
Living people
Computer programmers
Year of birth missing (living people)
Hackers
Hacker culture | Sir Dystic | [
"Technology"
] | 569 | [
"Lists of people in STEM fields",
"Hackers"
] |
1,576,566 | https://en.wikipedia.org/wiki/Fake%20fur | Fake fur, also called faux fur, is a pile fabric engineered to have the appearance and warmth of fur. Fake fur can be made from a variety of materials, including polyester, nylon, or acrylic.
First introduced in 1929, fake furs were initially composed of hair from the South American alpaca. The ensuing decades saw substantial improvements in their quality, particularly in the 1940s, thanks to significant advances in textile manufacturing. By the mid-1950s, a transformative development in fake furs occurred when alpaca hair was replaced with acrylic polymers, leading to the creation of the synthetic fur we recognize today.
The promotion of fake furs by animal rights and animal welfare organizations has contributed to its increasing popularity as an animal-friendly alternative to traditional fur clothing.
Uses
Fake fur is used in all applications where real fur would be used, including but not limited to stuffed animals, fashion accessories, pillows, bedding and throws. It is also used for craft projects because it can be sewn on a standard sewing machine. In contrast, real fur is generally thicker and requires hand sewing or an awl. Fake fur is increasingly used in mainstream teen fashion; the stores Abercrombie & Fitch and American Eagle commonly use fake furs in their trapper hats and jackets. Ralph Lauren has promoted the use of fake fur in its collections.
Fake fur is widely used in making fursuits in the furry community.
In the Soviet, and now Russian Army, fish fur is a derogatory term for low-quality winter clothing and ushanka hats, from a proverb that "a poor man's fur coat is of fish fur".
Comparison to real fur
Sewing Process and Storage
Unlike genuine fur, faux fur is a type of fabric, which makes it relatively easy to sew. Its synthetic nature also eliminates the need for the cold storage used to prevent deterioration in real fur, and, unlike real fur, it is not infested by moths. However, fake fur should be stored in a garment bag or container away from humidity, heat, and sunlight to maintain its quality.
Due to the controversy surrounding fur garments, the technology for producing fake furs has improved significantly since the early twentieth century. New tailoring and dyeing techniques "disguise" fur and move it away from its conventional association with the elite fur-clad woman. Modacrylic is a high-quality "fur" alternative that has gained attention for its convincing look. Howard Strachman of Strachman Associates, a New York-based agent for faux fur, states that synthetic acrylic knitted fabrics have become a go-to resource for high-end faux fur, much of it coming from Asia. New methods of production are still being developed. One technique combines coarse and fine fibers to simulate mink or beaver fur.
Durability and Energy Consumption
Faux fur is perceived as less durable than real fur, and this attribute coupled with its lesser insulating properties forms part of the critique against its use. Also, unlike real fur, fake furs are not able to keep snow from melting and re-freezing on the fiber filaments, which can be dangerous in extremely cold environments.
Fake fur production could consume less energy compared to real fur. A study conducted in 1979 claimed that the energy consumption for the production of one coat made out of fake fur was 35 kilowatt-hours (120,000 British thermal units), compared to 127 kWh (433,000 Btu) for trapped animals and 2,340 kWh (7,970,000 Btu) for animals raised in fur farms. Despite these findings, the study has faced criticism for perceived bias and dated methodology.
Environmental Impact
Fake fur is less biodegradable due to its composition of various synthetic materials. These materials often include blends of acrylic and modacrylic polymers derived from coal, air, water, petroleum, and limestone, which can potentially take between 500 and 1,000 years to break down.
Pricing
Fake fur is significantly less expensive than real fur. The price spectrum for luxury fake fur items spans from as low as $127 to as high as $8,900 in the mass market. In contrast, real fur luxury outerwear begins at a significantly higher price point, starting at $2,300.
Use of actual fur
Some coats labeled as having faux-fur trim were found to use actual fur in a test conducted by the Humane Society of the United States. In the United States, up until 2012, a labeling loophole allowed any piece of clothing that contains less than $150 of fur to be labeled without mentioning that it included fur. This is the equivalent of thirty rabbits, three raccoons, three red foxes, two to five leopards, twenty ring tailed lemurs, three domestic dogs, or one bear.
Use by fashion designers
Fake fur is popular in fashion, and several fashion designers incorporate the material throughout their collections. Hannah Weiland, founder of Shrimps, a London-based fake fur company, says, "I love working with faux fur because it doesn't molt and it feels just as soft. If the faux kind feels as good, why use the real kind?" Designer Stella McCartney also incorporates faux fur throughout her collections with tagged patches reading "Fur Free Fur."
German company Hugo Boss made a public stance against animal fur by pledging to go completely fur-free, taking effect with their 2016 Fall/Winter collection. With the announcement, creative director of sportswear Bernd Keller stated the company's intention to prioritize animal protection and sustainability over convenience.
SpiritHoods is a Los Angeles based apparel company and specializes in faux fur coats. PETA (People for Ethical Treatment of Animals) awarded them the Libby Award in 2011, 2012 and 2022 for being a cruelty-free clothing brand.
Fake fur is also used for its versatility in color and shape. Julie de Libran, the former artistic director of Sonia Rykiel, incorporated a combination of both real and fake fur in her collections. De Libran stated that she utilized fake fur for its ability to take on creative colors and forms, giving it a playfulness that natural fur alone could not create.
Prada embraced synthetics in their Fall/Winter 2007 collection. Miuccia Prada, the brand's owner and designer, commented that she was bored with real fur, and as a result, she included fake fur in her collection that year. In addition, Dries Van Noten, Hussein Chalayan, Julien David, Julie de Libran for Sonia Rykiel, Kate Spade, and many others featured fake fur in their fall collections. In addition, Prada, Max Mara and Dries Van Noten have included mohair faux fur in their collections.
The global artificial fur industry is projected to grow at a rate of over 15% by 2027.
References
External links
How fake fur is made
Fur
Animal hair products
Artificial materials
Winter fabrics | Fake fur | [
"Physics"
] | 1,435 | [
"Materials",
"Matter",
"Artificial materials"
] |
1,576,696 | https://en.wikipedia.org/wiki/Reaction%20rate%20constant | In chemical kinetics, a reaction rate constant or reaction rate coefficient () is a proportionality constant which quantifies the rate and direction of a chemical reaction by relating it with the concentration of reactants.
For a reaction between reactants A and B to form a product C,
a A + b B → c C
where
A and B are reactants
C is a product
a, b, and c are stoichiometric coefficients,
the reaction rate is often found to have the form:
r = k(T) [A]^m [B]^n
Here k(T) is the reaction rate constant that depends on temperature, and [A] and [B] are the molar concentrations of substances A and B in moles per unit volume of solution, assuming the reaction is taking place throughout the volume of the solution. (For a reaction taking place at a boundary, one would use moles of A or B per unit area instead.)
The exponents m and n are called partial orders of reaction and are not generally equal to the stoichiometric coefficients a and b. Instead they depend on the reaction mechanism and can be determined experimentally.
The sum of m and n, that is, (m + n), is called the overall order of the reaction.
Elementary steps
For an elementary step, there is a relationship between stoichiometry and rate law, as determined by the law of mass action. Almost all elementary steps are either unimolecular or bimolecular. For a unimolecular step
A → P
the reaction rate is described by r = k1[A], where k1 is a unimolecular rate constant. Since a reaction requires a change in molecular geometry, unimolecular rate constants cannot be larger than the frequency of a molecular vibration. Thus, in general, a unimolecular rate constant has an upper limit of k1 ≤ ~10^13 s^−1.
For a bimolecular step
A + B → P
the reaction rate is described by r = k2[A][B], where k2 is a bimolecular rate constant. Bimolecular rate constants have an upper limit that is determined by how frequently molecules can collide, and the fastest such processes are limited by diffusion. Thus, in general, a bimolecular rate constant has an upper limit of k2 ≤ ~10^10 M^−1·s^−1.
For a termolecular step
A + B + C → P
the reaction rate is described by r = k3[A][B][C], where k3 is a termolecular rate constant.
There are few examples of elementary steps that are termolecular or higher order, due to the low probability of three or more molecules colliding in their reactive conformations and in the right orientation relative to each other to reach a particular transition state. There are, however, some termolecular examples in the gas phase. Most involve the recombination of two atoms or small radicals or molecules in the presence of an inert third body which carries off excess energy, such as O + O2 + M → O3 + M. One well-established example is the termolecular step 2 I + H2 → 2 HI in the hydrogen-iodine reaction. In cases where a termolecular step might plausibly be proposed, one of the reactants is generally present in high concentration (e.g., as a solvent or diluent gas).
Relationship to other parameters
For a first-order reaction (including a unimolecular one-step process), there is a direct relationship between the unimolecular rate constant and the half-life of the reaction: t1/2 = (ln 2)/k. Transition state theory gives a relationship between the rate constant k and the Gibbs free energy of activation ΔG‡, a quantity that can be regarded as the free energy change needed to reach the transition state. In particular, this energy barrier incorporates both enthalpic and entropic changes that need to be achieved for the reaction to take place. The result from transition state theory is k = (kB·T/h) e^(−ΔG‡/RT), where h is the Planck constant and R the molar gas constant. As useful rules of thumb, a first-order reaction with a rate constant of 10^−4 s^−1 will have a half-life (t1/2) of approximately 2 hours. For a one-step process taking place at room temperature, the corresponding Gibbs free energy of activation (ΔG‡) is approximately 23 kcal/mol.
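Both rules of thumb are easy to verify numerically. A minimal check (plain Python; T = 298 K assumed for "room temperature", transmission coefficient taken as unity) computes the half-life and inverts the transition state expression for the free energy of activation:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J·s
R = 8.314462618       # molar gas constant, J/(mol·K)

k = 1e-4              # first-order rate constant, s^-1
T = 298.0             # assumed room temperature, K

# Half-life of a first-order reaction: t_1/2 = ln 2 / k
t_half = math.log(2) / k
print(f"t_1/2 = {t_half / 3600:.2f} h")    # ~1.93 h, i.e. about 2 hours

# Invert k = (k_B*T/h) * exp(-dG/(R*T)) for the Gibbs free energy of activation.
dG = R * T * math.log(k_B * T / (h * k))
print(f"dG = {dG / 4184:.1f} kcal/mol")    # ~23 kcal/mol
```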
Dependence on temperature
The Arrhenius equation is an elementary treatment that gives the quantitative basis of the relationship between the activation energy and the rate at which a reaction proceeds. The rate constant as a function of thermodynamic temperature is then given by:
k(T) = A e^(−Ea/RT)
The reaction rate is given by:
r = A e^(−Ea/RT) [A]^m [B]^n
where Ea is the activation energy, R is the gas constant, and m and n are experimentally determined partial orders in [A] and [B], respectively. Since at temperature T the molecules have energies according to a Boltzmann distribution, one can expect the proportion of collisions with energy greater than Ea to vary with e^(−Ea/RT). The constant of proportionality A is the pre-exponential factor, or frequency factor (not to be confused here with the reactant A); it takes into consideration the frequency at which reactant molecules are colliding and the likelihood that a collision leads to a successful reaction. Here, A has the same dimensions as an (m + n)-order rate constant (see Units below).
Another popular model that is derived using more sophisticated statistical mechanical considerations is the Eyring equation from transition state theory:
k(T) = κ (kB·T/h) (c⊖)^(1−M) e^(−ΔG‡/RT)
where ΔG‡ is the free energy of activation, a parameter that incorporates both the enthalpy and entropy change needed to reach the transition state. The temperature dependence of ΔG‡ is used to compute these parameters, the enthalpy of activation ΔH‡ and the entropy of activation ΔS‡, based on the defining formula ΔG‡ = ΔH‡ − TΔS‡. In effect, the free energy of activation takes into account both the activation energy and the likelihood of successful collision, while the factor kB·T/h gives the frequency of molecular collision.
The factor (c⊖)^(1−M) ensures the dimensional correctness of the rate constant when the transition state in question is bimolecular or higher. Here, c⊖ is the standard concentration, generally chosen based on the unit of concentration used (usually c⊖ = 1 mol L^−1 = 1 M), and M is the molecularity of the transition state. Lastly, κ, usually set to unity, is known as the transmission coefficient, a parameter which essentially serves as a "fudge factor" for transition state theory.
The biggest difference between the two theories is that Arrhenius theory attempts to model the reaction (single- or multi-step) as a whole, while transition state theory models the individual elementary steps involved. Thus, they are not directly comparable, unless the reaction in question involves only a single elementary step.
Finally, in the past, collision theory, in which reactants are viewed as hard spheres with a particular cross-section, provided yet another common way to rationalize and model the temperature dependence of the rate constant, although this approach has gradually fallen into disuse. The equation for the rate constant is similar in functional form to both the Arrhenius and Eyring equations:
k(T) = P·Z·e^(−ΔE/RT)
where P is the steric (or probability) factor, Z is the collision frequency, and ΔE is the energy input required to overcome the activation barrier. Of note, Z ∝ T^(1/2), making the temperature dependence of k different from both the Arrhenius and Eyring models.
Comparison of models
All three theories model the temperature dependence of k using an equation of the form
k(T) = C·T^α·e^(−ΔE/RT)
for some constant C, where α = 0, 1/2, and 1 give Arrhenius theory, collision theory, and transition state theory, respectively, although the imprecise notion of ΔE, the energy needed to overcome the activation barrier, has a slightly different meaning in each theory. In practice, experimental data does not generally allow a determination to be made as to which is "correct" in terms of best fit. Hence, all three are conceptual frameworks that make numerous assumptions, both realistic and unrealistic, in their derivations. As a result, they are capable of providing different insights into a system.
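The practical indistinguishability is easy to see numerically: once C is calibrated to reproduce the same k at one reference temperature, the three forms differ by only about 10% across 280–340 K, while k itself changes by more than two orders of magnitude. A sketch (the 80 kJ/mol barrier and the calibration point are assumed illustration values):

```python
import math

R = 8.314462618  # molar gas constant, J/(mol·K)

def k_model(T, alpha, dE, C):
    """Generic form shared by all three models: k(T) = C * T^alpha * e^(-dE/RT)."""
    return C * T**alpha * math.exp(-dE / (R * T))

dE = 80e3                   # assumed activation barrier, J/mol
T_ref, k_ref = 300.0, 1.0   # calibrate every model to k = 1 at 300 K

for alpha, name in [(0.0, "Arrhenius"), (0.5, "collision"), (1.0, "TST")]:
    C = k_ref / (T_ref**alpha * math.exp(-dE / (R * T_ref)))
    row = [k_model(T, alpha, dE, C) for T in (280, 300, 320, 340)]
    print(f"{name:9s}", " ".join(f"{k:8.3f}" for k in row))
```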
Units
The units of the rate constant depend on the overall order of reaction.
If concentration is measured in units of mol·L−1 (sometimes abbreviated as M), then
For order (m + n), the rate constant has units of mol^(1−(m+n))·L^((m+n)−1)·s^−1 (or M^(1−(m+n))·s^−1)
For order zero, the rate constant has units of mol·L^−1·s^−1 (or M·s^−1)
For order one, the rate constant has units of s^−1
For order two, the rate constant has units of L·mol^−1·s^−1 (or M^−1·s^−1)
For order three, the rate constant has units of L^2·mol^−2·s^−1 (or M^−2·s^−1)
For order four, the rate constant has units of L^3·mol^−3·s^−1 (or M^−3·s^−1)
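The pattern above is mechanical, so a few lines of code can generate it for any order (the helper name is ours):

```python
def rate_constant_units(order):
    """Units of k for overall reaction order n, concentrations in mol/L:
    mol^(1-n) · L^(n-1) · s^-1, which collapses to s^-1 for n = 1."""
    if order == 1:
        return "s^-1"
    return f"mol^{1 - order}·L^{order - 1}·s^-1"

for n in range(5):
    print(n, rate_constant_units(n))
# 0 mol^1·L^-1·s^-1
# 1 s^-1
# 2 mol^-1·L^1·s^-1
# 3 mol^-2·L^2·s^-1
# 4 mol^-3·L^3·s^-1
```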
Plasma and gases
The calculation of rate constants for the processes of generation and relaxation of electronically and vibrationally excited particles is of significant importance. It is used, for example, in the computer simulation of processes in plasma chemistry or microelectronics. First-principles-based models should be used for such calculations. This can be done with the help of computer simulation software.
Rate constant calculations
Rate constants can be calculated for elementary reactions by molecular dynamics simulations.
One possible approach is to calculate the mean residence time of the molecule in the reactant state. Although this is feasible for small systems with short residence times, the approach is not widely applicable, as reactions are often rare events on the molecular scale.
One simple approach to overcome this problem is Divided Saddle Theory. Other methods, such as the Bennett–Chandler procedure and Milestoning, have also been developed for rate constant calculations.
Divided saddle theory
The theory is based on the assumption that the reaction can be described by a reaction coordinate, and that we can apply the Boltzmann distribution at least in the reactant state.
A new, especially reactive segment of the reactant, called the saddle domain, is introduced, and the rate constant is factored:
k = α · kSD
where α is the conversion factor between the reactant state and the saddle domain, while kSD is the rate constant from the saddle domain. The first can be simply calculated from the free energy surface; the latter is easily accessible from short molecular dynamics simulations.
See also
Reaction rate
Equilibrium constant
Molecularity
References
Chemical kinetics | Reaction rate constant | [
"Chemistry"
] | 2,126 | [
"Chemical reaction engineering",
"Chemical kinetics"
] |
1,576,787 | https://en.wikipedia.org/wiki/Allyl%20isothiocyanate | Allyl isothiocyanate (AITC) is a naturally occurring unsaturated isothiocyanate. The colorless oil is responsible for the pungent taste of cruciferous vegetables such as mustard, radish, horseradish, and wasabi. This pungency and the lachrymatory effect of AITC are mediated through the TRPA1 and TRPV1 ion channels. It is slightly soluble in water, but more soluble in most organic solvents.
Biosynthesis and biological functions
Allyl isothiocyanate can be obtained from the seeds of black mustard (Rhamphospermum nigrum) or brown Indian mustard (Brassica juncea). When these mustard seeds are broken, the enzyme myrosinase is released and acts on a glucosinolate known as sinigrin to give allyl isothiocyanate. This serves the plant as a defense against herbivores; since it is harmful to the plant itself, it is stored in the harmless form of the glucosinolate, separate from the myrosinase enzyme. When an animal chews the plant, the allyl isothiocyanate is released, repelling the animal. Human appreciation of the pungency is learned.
The compound has been shown to strongly repel fire ants (Solenopsis invicta). AITC vapor is also used as an antimicrobial and shelf life extender in food packaging.
Production and applications
Allyl isothiocyanate is produced commercially by the reaction of allyl chloride and potassium thiocyanate:
CH2=CHCH2Cl + KSCN → CH2=CHCH2NCS + KCl
The product obtained in this fashion is sometimes known as synthetic mustard oil.
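The route is a simple 1:1 substitution, so the theoretical mass yield follows directly from the molar masses. A quick mass-balance sketch (standard atomic weights; the feed quantity and the 90% yield are illustrative assumptions, not process data):

```python
# Stoichiometry of CH2=CHCH2Cl + KSCN -> CH2=CHCH2NCS + KCl (1:1)
ATOMIC = {"C": 12.011, "H": 1.008, "N": 14.007,
          "S": 32.06, "Cl": 35.45, "K": 39.098}

def molar_mass(composition):
    """Molar mass in g/mol from an {element: count} mapping."""
    return sum(ATOMIC[el] * n for el, n in composition.items())

allyl_chloride = molar_mass({"C": 3, "H": 5, "Cl": 1})    # ~76.5 g/mol
aitc = molar_mass({"C": 4, "H": 5, "N": 1, "S": 1})       # ~99.2 g/mol

feed_kg = 100.0                          # allyl chloride charged (illustrative)
theoretical = feed_kg / allyl_chloride * aitc   # 1:1 mole ratio
print(f"theoretical AITC: {theoretical:.1f} kg; "
      f"at 90% yield: {0.9 * theoretical:.1f} kg")
```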
Allyl thiocyanate isomerizes to the isothiocyanate:
CH2=CHCH2SCN → CH2=CHCH2NCS
Allyl isothiocyanate can also be liberated by dry distillation of the seeds. The product obtained in this fashion is known as volatile oil of mustard.
It is used principally as a flavoring agent in foods. Synthetic allyl isothiocyanate is used as an insecticide, an anti-mold agent, a bactericide, and a nematicide, and is used in certain cases for crop protection. It is also used in fire alarms for the deaf.
Hydrolysis of allyl isothiocyanate gives allylamine.
Safety
Allyl isothiocyanate has an LD50 of 151 mg/kg and is a lachrymator (similar to tear gas or mace).
Oncology
Based on in vitro experiments and animal models, allyl isothiocyanate exhibits many of the desirable attributes of a cancer chemopreventive agent.
See also
Mustard plaster, traditional home remedy
Piperine, the piquant chemical in black pepper
Capsaicin, the piquant chemical in chili peppers
Allicin, the piquant flavor chemical in raw garlic
References
Antibiotics
Insecticides
Isothiocyanates
Pungent flavors
Nematicides
Allyl compounds
Lachrymatory agents
Transient receptor potential channel modulators | Allyl isothiocyanate | [
"Chemistry",
"Biology"
] | 645 | [
"Biotechnology products",
"Chemical weapons",
"Functional groups",
"Antibiotics",
"Isothiocyanates",
"Lachrymatory agents",
"Biocides"
] |
1,576,921 | https://en.wikipedia.org/wiki/Theodor%20Fritsch | Theodor Fritsch (born Emil Theodor Fritsche; 28 October 1852 – 8 September 1933) was a German publisher and journalist. His antisemitic writings did much to influence popular German opinion against Jews in the late 19th and early 20th centuries. His writings also appeared under the pen names Thomas Frey, Fritz Thor, and Ferdinand Roderich-Stoltheim.
He is not to be confused with his son, also named Theodor Fritsch (1895–1946) and also a bookseller, who was a member of the paramilitary wing of the Nazi Party, the Sturmabteilung.
Life
Fritsch was born Emil Theodor Fritsche, the sixth of seven children to Johann Friedrich Fritsche, a farmer in the village of Wiesenena (present-day Wiedemar) in the Prussian province of Saxony, and his wife August Wilhelmine, née Ohme. Four of his siblings died in childhood. He attended vocational school (Realschule) in Delitzsch where he learned casting and machine building. He then undertook study at the Royal Trade Academy (Königliche Gewerbeakademie) in Berlin, graduating as a technician in 1875.
In the same year Fritsche found employment in a Berlin machine shop. He gained independence in 1879 through the founding of a technical bureau associated with a publishing firm. In 1880 he founded the Deutscher Müllerbund (Miller's League) which issued the publication Der Deutsche Müller (The German Miller). In 1905 he founded the "Saxon Small Business Association." He devoted himself to this organization and to the interests of crafts and small businesses (Mittelstand), as well as to the spread of antisemitic propaganda. When he changed his name to Fritsch is unclear.
Publishing
Fritsch created an early discussion forum, "Antisemitic Correspondence", in 1885 for antisemites of various political persuasions. In 1887 he sent several editions to Friedrich Nietzsche but was brusquely dismissed. Nietzsche sent Fritsch a letter in which he thanked him to be permitted "to cast a glance at the muddle of principles that lie at the heart of this strange movement", but requested not to be sent such writings again, for he was afraid that he might lose his patience. Fritsch offered the editorship to right-wing politician Max Liebermann von Sonnenberg in 1894, whereafter it became an organ for Sonnenberg's German Social Party under the name "German Social Articles." Fritsch's 1896 book The City of the Future became a blueprint of the German garden city movement, which was adopted by Völkisch circles.
In 1902 Fritsch founded a Leipzig publishing house, Hammer-Verlag, whose flagship publication was The Hammer: Pages for German Sense (1902–1940). The firm issued German translations of The Protocols of the Elders of Zion and The International Jew (collected writings of Henry Ford from The Dearborn Independent) as well as many of Fritsch's own works. An inflammatory article published in 1910 earned him a charge of defamation of religious societies and disturbing the public peace. Fritsch was sentenced to one week in prison, and received another ten-day term in 1911.
Political activities
In 1890, Fritsch became, along with Otto Böckel, a candidate of the Antisemitic People's Party, founded by Böckel and Oswald Zimmermann, for the German Reichstag. He was not elected. The party was renamed the German Reform Party in 1893, achieving sixteen seats. The party failed, however, to achieve significant public recognition. One of Fritsch's major goals was to unite all antisemitic political parties under a single banner; he wished for antisemitism to permeate the agenda of every German social and political organization. This effort proved largely to be a failure, as by 1890 there were over 190 various antisemitic parties in Germany. He also had a powerful rival for the leadership of the antisemites in Otto Böckel, with whom he had a strong personal rivalry.
In 1912 Fritsch founded the Reichshammerbund (Reich's Hammer League) as an antisemitic collective movement. He also established the secret Germanenorden in that year. Influenced by racist Ariosophic theories, it was one of the first political groups to adopt the swastika symbol. Members of these groups formed the Thule Society in 1918, which eventually sponsored the creation of the Nazi Party.
The Reichshammerbund was eventually folded into the Deutschvölkischer Schutz und Trutzbund (German Nationalist Protection and Defiance Federation), on whose advisory board Fritsch sat. He later became a member of the German Völkisch Freedom Party (DFVP). In the general election of May 1924, Fritsch was elected to serve as a member of the National Socialist Freedom Movement, a party formed in alliance with the DFVP by the Nazis as a legal means to election after the Nazi Party had been banned in the aftermath of the Munich Beer Hall Putsch. He only served until the next election in December, 1924.
In February 1927, Fritsch left the Völkisch Freedom Party in protest. He died shortly after the 1933 Nazi seizure of power at the age of 80 in Gautzsch (today part of Markkleeberg).
A memorial to Fritsch, described as "the first antisemitic memorial in Germany", was erected in Zehlendorf (Berlin) in 1935. The memorial was the idea of Zehlendorf's mayor, Walter Helfenstein (1890–1945), and the work of Arthur Wellmann (1885–1960). The memorial was melted down in 1943 to make armaments for the war.
Works
A believer in the absolute superiority of the Aryan race, Fritsch was upset by the changes brought on by rapid industrialization and urbanization, and called for a return to the traditional peasant values and customs of the distant past, which he believed exemplified the essence of the Volk.
In 1893, Fritsch published his most famous work, The Handbook of the Jewish Question, which leveled a number of conspiratorial charges at European Jews and called upon Germans to refrain from intermingling with them. Vastly popular, the book was read by millions and was in its 49th edition by 1944 (330,000 copies). The ideas espoused by the work greatly influenced Hitler and the Nazis during their rise to power after World War I. Fritsch also founded an anti-semitic journal – the Hammer – in 1902, and this became the basis of a movement, the Reichshammerbund, in 1912.
Fritsch was an opponent of Albert Einstein's theory of relativity. He published Einsteins Truglehre (Einstein's Fraudulent Teachings) in 1921, under the pseudonym F. Roderich-Stoltheim (an anagram of his full name).
Another work, The Riddle of the Jew's Success, was published in English in 1927 under the pseudonym F. Roderich-Stoltheim.
References
Nicholas Goodrick-Clarke, 1985: The Occult Roots of Nazism, pp. 123–126.
External links
Antisemiten-Katechismus by Theodore Fritsch at archive.org
1852 births
1933 deaths
People from Nordsachsen
People from the Province of Saxony
German Protestants
German Reform Party politicians
German Völkisch Freedom Party politicians
National Socialist Freedom Movement politicians
Members of the Reichstag 1924
German political scientists
Relativity critics
Antisemitism in Germany
People convicted of speech crimes
Prisoners and detainees of Germany | Theodor Fritsch | [
"Physics"
] | 1,561 | [
"Relativity critics",
"Theory of relativity"
] |
1,577,061 | https://en.wikipedia.org/wiki/Network%20planning%20and%20design | Network planning and design is an iterative process, encompassing topological design, network synthesis, and network realization, and is aimed at ensuring that a new telecommunications network or service meets the needs of the subscriber and operator.
The process can be tailored according to each new network or service.
A network planning methodology
A traditional network planning methodology in the context of business decisions involves five layers of planning, namely:
need assessment and resource assessment
short-term network planning
IT resource
long-term and medium-term network planning
operations and maintenance.
Each of these layers incorporates plans for different time horizons, i.e. the business planning layer determines the planning that the operator must perform to ensure that the network will perform as required for its intended life-span. The Operations and Maintenance layer, however, examines how the network will run on a day-to-day basis.
The network planning process begins with the acquisition of external information. This includes:
forecasts of how the new network/service will operate;
the economic information concerning costs, and
the technical details of the network’s capabilities.
Planning a new network/service involves implementing the new system across the first four layers of the OSI Reference Model. Choices must be made for the protocols and transmission technologies.
The network planning process involves three main steps:
Topological design: This stage involves determining where to place the components and how to connect them. The (topological) optimization methods that can be used in this stage come from an area of mathematics called graph theory. These methods involve determining the costs of transmission and the cost of switching, and thereby determining the optimum connection matrix and location of switches and concentrators.
Network-synthesis: This stage involves determining the size of the components used, subject to performance criteria such as the grade of service (GOS). The method used is known as "Nonlinear Optimisation", and involves determining the topology, required GoS, cost of transmission, etc., and using this information to calculate a routing plan, and the size of the components.
Network realization: This stage involves determining how to meet capacity requirements, and ensure reliability within the network. The method used is known as "Multicommodity Flow Optimisation", and involves determining all information relating to demand, costs, and reliability, and then using this information to calculate an actual physical circuit plan.
These steps are performed iteratively in parallel with one another.
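As a toy instance of the topological-design step, the classic starting point is a minimum spanning tree: given pairwise link costs, connect all sites at minimum total transmission cost (real tools then layer switching costs, concentrator placement, and reliability on top). A sketch with invented site names and costs:

```python
import heapq

def prim_mst(nodes, link_costs):
    """Minimum-cost connected topology via Prim's algorithm.
    link_costs maps frozenset({a, b}) -> cost of the (a, b) link."""
    visited, tree, frontier = {nodes[0]}, [], []

    def push_edges(u):
        for v in nodes:
            if v not in visited and frozenset((u, v)) in link_costs:
                heapq.heappush(frontier, (link_costs[frozenset((u, v))], u, v))

    push_edges(nodes[0])
    while frontier and len(visited) < len(nodes):
        cost, u, v = heapq.heappop(frontier)
        if v not in visited:
            visited.add(v)
            tree.append((u, v, cost))
            push_edges(v)
    return tree

nodes = ["HQ", "Site1", "Site2", "Site3"]
costs = {frozenset(p): c for p, c in [
    (("HQ", "Site1"), 4), (("HQ", "Site2"), 7), (("Site1", "Site2"), 3),
    (("Site1", "Site3"), 6), (("Site2", "Site3"), 5)]}
print(prim_mst(nodes, costs))
# [('HQ', 'Site1', 4), ('Site1', 'Site2', 3), ('Site2', 'Site3', 5)]
```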
The role of forecasting
During the process of Network Planning and Design, estimates are made of the expected traffic intensity and traffic load that the network must support. If a network of a similar nature already exists, traffic measurements of such a network can be used to calculate the exact traffic load. If there are no similar networks, then the network planner must use telecommunications forecasting methods to estimate the expected traffic intensity.
The forecasting process involves several steps:
Definition of a problem;
Data acquisition;
Choice of forecasting method;
Analysis/Forecasting;
Documentation and analysis of results.
Dimensioning
Dimensioning a new network determines the minimum capacity requirements that will still allow the Teletraffic Grade of Service (GoS) requirements to be met. To do this, dimensioning involves planning for peak-hour traffic, i.e. that hour during the day during which traffic intensity is at its peak.
The dimensioning process involves determining the network’s topology, routing plan, traffic matrix, and GoS requirements, and using this information to determine the maximum call handling capacity of the switches, and the maximum number of channels required between the switches. This process requires a complex model that simulates the behavior of the network equipment and routing protocols.
A dimensioning rule is that the planner must ensure that the traffic load never approaches 100 percent of capacity. To calculate the correct dimensioning to comply with this rule, the planner must take ongoing measurements of the network's traffic and continuously maintain and upgrade resources to meet the changing requirements. Another reason for overprovisioning is to make sure that traffic can be rerouted in case a failure occurs in the network.
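One classic peak-hour dimensioning calculation of this kind is the Erlang B formula, which links offered traffic, channel count, and blocking probability (the GoS). A minimal sketch; the 50-erlang load and 1% blocking target are assumed example values:

```python
def erlang_b(offered_erlangs, channels):
    """Blocking probability via the Erlang B recursion:
    B(A, 0) = 1;  B(A, m) = A*B(A, m-1) / (m + A*B(A, m-1))."""
    b = 1.0
    for m in range(1, channels + 1):
        b = offered_erlangs * b / (m + offered_erlangs * b)
    return b

def channels_needed(offered_erlangs, target_blocking):
    """Smallest channel count meeting the GoS target."""
    n = 0
    while erlang_b(offered_erlangs, n) > target_blocking:
        n += 1
    return n

peak = 50.0   # assumed peak-hour offered load, erlangs
print(channels_needed(peak, 0.01), "channels for at most 1% blocking")
```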
Because of its complexity, network dimensioning is typically done using specialized software tools. Whereas researchers typically develop custom software to study a particular problem, network operators typically make use of commercial network planning software.
Traffic engineering
Compared to network engineering, which adds resources such as links, routers, and switches into the network, traffic engineering targets changing traffic paths on the existing network to alleviate traffic congestion or accommodate more traffic demand.
This technology is critical when the cost of network expansion is prohibitively high and the network load is not optimally balanced. The first part provides financial motivation for traffic engineering while the second part grants the possibility of deploying this technology.
Survivability
Network survivability enables the network to maintain maximum network connectivity and quality of service under failure conditions. It has been one of the critical requirements in network planning and design. It involves design requirements on topology, protocol, bandwidth allocation, etc. A topology requirement can be maintaining a minimum two-connected network against any failure of a single link or node. Protocol requirements include using a dynamic routing protocol to reroute traffic against network dynamics during the transition of network dimensioning or equipment failures. Bandwidth allocation requirements pro-actively allocate extra bandwidth to avoid traffic loss under failure conditions. This topic has been actively studied in conferences, such as the International Workshop on Design of Reliable Communication Networks (DRCN).
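The two-connectivity requirement is directly checkable on a candidate topology. A sketch using the networkx graph library (the topology and node names are invented for illustration):

```python
import networkx as nx

# A small candidate backbone: a ring A-B-C-D with a chord B-D,
# plus a spur to E (hypothetical site names).
g = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"),
              ("D", "A"), ("B", "D"), ("D", "E")])

# Survives any single node failure iff biconnected (no articulation points);
# survives any single link failure iff there are no bridges.
print("biconnected:", nx.is_biconnected(g))                          # False: E hangs off D
print("single points of failure:", list(nx.articulation_points(g)))  # ['D']
print("bridge links:", list(nx.bridges(g)))                          # [('D', 'E')]
```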
Data-driven network design
More recently, with the increasing role of artificial intelligence technologies in engineering, the idea of using data to create data-driven models of existing networks has been proposed. By analyzing large amounts of network data, the less desirable behaviors that may occur in real-world networks can also be understood, worked around, and avoided in future designs.
Both the design and management of networked systems can be improved by a data-driven paradigm. Data-driven models can also be used at various phases of the service and network management life cycle, such as service instantiation, service provisioning, optimization, monitoring, and diagnostics.
See also
Core-and-pod
Network Partition for Optimization
Optimal network design - an optimization problem of constructing a network which minimizes the total travel cost.
References
Planning and design
Telecommunications engineering | Network planning and design | [
"Engineering"
] | 1,258 | [
"Network architecture",
"Electrical engineering",
"Telecommunications engineering",
"Computer networks engineering"
] |
1,577,192 | https://en.wikipedia.org/wiki/Tennis%20ball | A tennis ball is a small, hollow ball used in games of tennis and real tennis. Tennis balls are fluorescent yellow in professional competitions, but in recreational play other colors are also used. Tennis balls are covered in a fibrous felt, which modifies their aerodynamic properties, and each has a white curvilinear oval covering it.
Specifications
Modern tennis balls must conform to certain size, weight, deformation, and bounce criteria to be approved for regulation play. The International Tennis Federation (ITF) defines the official diameter as 6.54–6.86 cm (2.57–2.70 inches), and balls must have masses in the range 56.0–59.4 g (1.98–2.10 ounces). The nitrogen and oxygen mixture inside a pressurized tennis ball is generally at a higher pressure than sea-level ambient air. Yellow and white are the only colors approved by the ITF. Most balls produced are a fluorescent color known as "optic yellow", first introduced in 1972 following research demonstrating they were more visible on television. What color to call the ball is mildly controversial; one poll showed that a little less than half of people consider this color yellow, while a slight majority consider it green.
Tennis balls are filled with air and are surfaced by a uniform felt-covered rubber compound. Tennis ball felts comprise wool, nylon, and cotton in a mixture surrounding the rubber edge. The felt delays flow separation in the boundary layer which reduces aerodynamic drag and gives the ball better flight properties. Often, the balls will have a number in addition to the brand name. This helps distinguish one set of balls from another of the same brand on an adjacent court.
Tennis balls begin to lose their bounce as soon as the tennis ball can is opened. While packaged, the pressure in the sealed can pushes on the ball from the outside as much as the gas inside pushes outward, preserving the internal pressure; once the can is opened, the pressurized gas inside each ball gradually escapes, and frequent use speeds up the loss. Balls can be tested to determine their bounce. Modern regulation tennis balls are kept under pressure (approximately two atmospheres) until initially used; balls intended for use at high altitudes have a lower initial pressure, and inexpensive practice balls are made without internal pressurization. A ball is tested for bounce by dropping it from a height of 254 cm (100 inches) onto concrete; a bounce between 135 and 147 cm (53 and 58 inches) is acceptable if taking place at sea level and with relative humidity of 60%; high-altitude balls have different characteristics when tested at sea level.
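The rebound window also fixes the ball's effective coefficient of restitution, e = sqrt(h_rebound / h_drop), under ideal free-fall assumptions. A quick check of a measured rebound against the window (helper name is illustrative):

```python
import math

DROP_CM = 254.0                    # drop height onto concrete
LOW_CM, HIGH_CM = 135.0, 147.0     # acceptable rebound window at sea level

def check_bounce(rebound_cm):
    """Pass/fail against the rebound window, plus the implied
    coefficient of restitution e = sqrt(h_rebound / h_drop)."""
    e = math.sqrt(rebound_cm / DROP_CM)
    return LOW_CM <= rebound_cm <= HIGH_CM, round(e, 3)

print(check_bounce(141.0))   # (True, 0.745)
print(check_bounce(130.0))   # (False, 0.715): rebound too low to pass
```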
Slower balls
The ITF's "Play and Stay" campaign aims to increase tennis participation worldwide by improving how starter players are introduced to the game. The ITF recommends a progression that focuses on a range of slower balls and smaller court sizes to introduce the game to adults and children effectively. The slowest balls, marked with red or using half-red felt, are oversized and unpressurized or made from foam rubber. The next, marked with orange, are unpressurized normal-sized balls. The last, marked with green, are half-pressurized, normal-sized balls.
History
Lawn tennis, as the modern game was originally known, was developed in the early 1870s as a new version of the courtly game of real tennis. England banned the importation of real tennis balls, playing cards, dice, and other goods in the Importation (No. 2) Act 1463 (3 Edw. 4. c. 4). In 1480, Louis XI of France forbade the filling of tennis balls with chalk, sand, sawdust, or earth, and stated that they were to be made of good leather, well-stuffed with wool. Other early tennis balls were made by Scottish craftsmen from a wool-wrapped stomach of a sheep or goat and tied with rope. Those recovered from the hammer-beam roof of Westminster Hall during a period of restoration in the 1920s were found to have been manufactured from a combination of putty and human hair and were dated to the reign of Henry VIII. Other versions, using materials such as animal fur, rope made from animal intestines and muscles, and pine wood, were found in Scottish castles dating back to the 16th century. In the 18th century, strips of wool were wound tightly around a nucleus made by rolling several strips into a little ball. String was then tied in many directions around the ball, and a white cloth covering was sewn around the ball.
In the early 1870s, lawn tennis arose in Britain through the pioneering efforts of Walter Clopton Wingfield and Harry Gem, often using Victorian lawns laid out for croquet. Wingfield marketed lawn tennis sets which included rubber balls imported from Germany. After Charles Goodyear invented vulcanised rubber, the Germans had been most successful in developing air-filled vulcanised rubber balls. These were light and coloured grey or red with no covering. John Moyer Heathcote suggested and tried the experiment of covering the rubber ball with flannel, and by 1882 Wingfield was advertising his balls as clad in stout cloth made in Melton Mowbray. Tennis balls were initially entirely made of rubber, but they were later refined by using flannel and stitching it around the core, which used to be filled with rubber. The tennis ball quickly switched to having a hollow core, using gas to pressurize the inside. Originally, tennis ball manufacturing was done by cutting vulcanized rubber sheets into a shape similar to that of a three-leaf clover. Before the formation of the rubber into a sphere (which was executed via machinery), chemicals that reacted to produce a gas were added to produce pressure into the hollow inside once the sphere was assembled. The switch to the modern method of joining two hemispheres was done to improve uniformity of wall thickness.
Until 1972, tennis balls were white (or sometimes black). In 1972, the International Tennis Federation introduced yellow balls, as these were easier to see on television, and these quickly became generally popular. Wimbledon continued using white balls until 1986.
Packaging
Before 1925, tennis balls were packaged in wrapped paper and paperboard boxes. In 1925, Wilson-Western Sporting Goods Company introduced cardboard tubes. In 1926, the Pennsylvania Rubber Company released a hermetically sealed pressurized metal tube that held three balls with a churchkey to open the top. Beginning in the 1980s, plastic (from recycled PET) cans with a full-top pull-tab seal and plastic lid fit three or four balls per can. Pressureless balls often come in net bags or buckets since they need not be pressure-sealed.
Disposal
Each year approximately 325 million balls are produced, which contributes a substantial amount of waste in the form of rubber that is not easily biodegradable. Historically, tennis ball recycling has not existed. Balls from The Championships, Wimbledon are now recycled to provide field homes for the nationally threatened Eurasian harvest mouse.
In literature
The gift of tennis balls offered to Henry in Shakespeare's Henry V is portrayed as the final insult which re-ignites the Hundred Years' War between England and France.
John Webster also refers to tennis balls in The Duchess of Malfi.
References
External links
International Tennis Federation's history of the rules of the tennis ball. .
ITF Grand Slam Rules: Section I: The Ball (PDF). .
Balls
Tennis equipment
Spherical objects | Tennis ball | [
"Physics"
] | 1,454 | [
"Spherical objects",
"Physical objects",
"Matter"
] |
1,577,335 | https://en.wikipedia.org/wiki/Borehole%20mining | Borehole mining (BHM) is a remotely operated method of extraction (mining) of mineral resources through boreholes, based on in-situ conversion of ores into a mobile form (slurry) by means of high-pressure water jetting (hydraulicking). This process is carried out from a land surface, open pit floor, underground mine, or floating vessel through pre-drilled boreholes.
History
Based on the US patent database, the method's history can be traced back to the beginning of the 20th century (1906). The most advanced BHM tool design was patented in 1926. With a few improvements, this design concept remains the basis for modern BHM tools and technologies. In the US, the major R&D was conducted by the USBM in the 1970s and 1980s. Borehole mining of uranium in Kazakhstan remains the most advanced BHM project in the world.
The process
A borehole is drilled to a required depth;
A casing column is lowered down the hole. Since BHM takes place in an open hole, the casing shoe is located just above the upper border of the production interval (an ore body) leaving the rest open;
A BHM tool is lowered into the hole;
High-pressure water is pumped down and the tool is rotated and moved up and down.
Description of a BHM tool
The tool consists of at least two concentric pipes forming two hydraulic channels—one for pumping down a high-pressure working agent (water) and a second for delivering pregnant slurry back to the surface. A BHM tool typically consists of three major units: (1) a bottom head with eductor (waterjet pump) and hydromonitor sections; (2) an extension section consisting of a set of standard pipes connecting the bottom head to the upper head; and (3) an upper head including a swivel allowing the tool's suspension and rotation in a borehole as well as its connection to the working agent source (pump station) and the slurry collector. An airlift is often used to assist the eductor in pumping slurry from greater depths. A standard drill rig is normally used to operate a BHM tool.
The tool is lowered into a well until the hydromonitor reaches the required depth. Then high-pressure water is pumped down. Approximately one-half of it leaves the tool through the hydromonitor and is expelled outside the tool in the form of a powerful waterjet. The jet cuts and loosens ore, producing a slurry which is pumped back to the surface by the eductor. In a collecting pond or tank, the slurry is separated and clarified water is re-circulated. While extracting rocks and ores, caverns of different shapes can be created. Their shape depends on the manipulation of the BHM tool while mining, which consists of rotating the tool, sliding it up and down, and combinations of the two. Borehole mining is applied from vertical, horizontal, and deviated wells.
Advantages of BHM
The main advantages of the method include its low capital and operating cost, mobility, selectivity, ability to work in hazardous and dangerous conditions, low environmental impact, and several more. The method is used in mining such natural resources and industrial materials as uranium, iron ore, quartz sand, fine gravel, coal, poly-metallic ores, phosphate, gold, diamonds, manganese, rare earths, and amber. Borehole mining is also used in exploration; oil, gas, and water stimulation; underground storage construction; environmental applications; and a few more.
References
External links
Learn more on borehole mining
Mining techniques
Drilling technology
Hydraulic engineering | Borehole mining | [
"Physics",
"Engineering",
"Environmental_science"
] | 741 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
1,577,353 | https://en.wikipedia.org/wiki/Jaka%20%C5%BDeleznikar | Jaka Železnikar (born 1971) is a Slovenian artist known for his computational poetry and internet art. The base of his work is a nonlinear language-based expression combined with visual art. Since 1997 he has been part of the net art community, and since 2004 he has created several expressive add-ons for the Firefox browser.
Life
Železnikar was born in 1971 in Ljubljana, Slovenia.
Work
His works are mostly bilingual (Slovene and English).
Železnikar has created about 25 net art works. Notable works include Interactivalia, a 1997 interactive poetry work in Slovenian and English, and "Ascii Kosovel", a mix of nonlinear, interactive, and computational poetic narratives based on the work of Srečko Kosovel, the Slovenian avant-garde poet (1904–1926).
His 2002 work "Poems for Echelon" was a computer-based interactive work that invited users to send emails to the program itself. In doing so, the users participated in the production of poems that were designed to confuse ECHELON, a US intelligence-gathering system.
Since 2004 he has created several expressive add-ons for the Firefox web browser, including the on-line visual poem "Letters · 字母 2.0" and "Disorganizer", made in 2007.
Gallery
References
External links
Official site
1971 births
Living people
Slovenian poets
Slovenian male poets
Slovenian digital artists
Net.artists | Jaka Železnikar | [
"Technology"
] | 289 | [
"Multimedia",
"Net.artists"
] |
1,577,354 | https://en.wikipedia.org/wiki/Buell%20dryer | The Buell dryer, also known as the "turbo shelf" dryer, is an indirectly-heated industrial dryer once widely used in the Cornwall and Devon china clay mining industry. The Buell dryer was introduced to the china clay industry by English Clays Lovering Pochin & Co. Ltd for their china clay drying plants in Cornwall and Devon, as part of the mechanization and modernization of the industry, which up to that point had been using the same primitive processing methods for almost 100 years.
History
The industry's first attempt to mechanize its drying process, an oil-fired rotary dryer installed at Rockhill near Stenalees in 1939, had been halted before it could be commissioned by the outbreak of war, with the Board of Trade exercising its wartime powers to place restrictions on the industry, rationing in particular the use of oil and steel. To circumvent these restrictions, in 1944 a Buell dryer was purchased second hand from a fluorspar mine in Derbyshire, and was installed in an existing building at ECLP's Drinnick site in Nanpean, heated by exhaust steam from Drinnick power plant. As such, it became the first operating mechanical dryer in the Cornish china clay industry, despite not being the first to be constructed.
A 1948 Board of Trade working party report into the china clay industry concluded that restrictions on the industry should be relaxed to allow mechanization to begin. The report led to Parliament ordering an end to Board of Trade restrictions on the china clay industry, ushering in a period of rapid mechanization.
In the 25 years that followed, additional Buell dryers were constructed at Kernick, Drinnick, Rocks, Blackpool near Burngullow, Marsh Mills, Parkandillack, Par Harbour, and Goonvean & Rostowrack Ltd's Trelavour site. The Drinnick dryer site was also expanded to include several more steam-heated dryers.
Construction
The dryer itself is composed of a large upright cylindrical chamber, inside of which are 25 to 30 layers of trays or "hearths". Indirectly-heated air from an oil-fired (latterly natural gas-fired) furnace or steam heater is distributed throughout the dryer by a series of fans and ducts. At the center of the dryer is a rotating column, to which the trays are attached and positioned radially within the dryer. Material enters the top of the dryer and lands on one of the top trays. As the central column rotates, fixed arms push the material off the tray, dropping it down onto the one below it. Gradually the material works its way down through the dryer in this manner, and after 45 minutes, clay exits the bottom of the dryer onto conveyor belts.
The material to be dried usually enters the top of the dryer with a moisture content of around 18%, and exits at around 8–10%. Generally, these figures all depend on the dewatering processes employed before the material reaches the dryer. Commonly, the Buell dryer handled shredded filter press cakes from standard square-plate filter presses, although these were later replaced by a circular-plate filter press capable of operating at much higher pressure. After being shredded, these press cakes were brought by conveyor belt to a paddle mixer in which the cakes were back-mixed with dried clay. The back-mixed clay could then either be extruded into pellets and fed directly to the dryer or, depending on the grade of clay to be produced, might go through an additional stage of pug milling.
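Those moisture figures fix the dryer's evaporation duty once a feed rate is chosen. A simple wet-basis mass balance (the 20 t/h feed rate is an assumed illustration, not a figure from the plants described here):

```python
def evaporation_load(feed_tph, w_in=0.18, w_out=0.09):
    """Water evaporated (t/h) drying wet feed from moisture w_in to w_out
    (wet-basis fractions). Dry solids are conserved through the dryer:
    solids = feed*(1 - w_in); product = solids / (1 - w_out)."""
    solids = feed_tph * (1.0 - w_in)
    product = solids / (1.0 - w_out)
    return feed_tph - product

feed = 20.0   # assumed wet-clay feed, tonnes per hour
print(f"{evaporation_load(feed):.2f} t/h of water to evaporate")   # ~1.98 t/h
```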
External links
Versatile Portable Dryer
Industrial equipment
Dryers | Buell dryer | [
"Chemistry",
"Engineering"
] | 754 | [
"Dryers",
"Chemical equipment",
"nan"
] |
1,577,611 | https://en.wikipedia.org/wiki/San%20Jacinto%20Monument | The San Jacinto Monument is a 567.31-foot-high (172.92 m) column located on the Houston Ship Channel in unincorporated Harris County, Texas, about 16 miles due east of downtown Houston. The Art Deco monument is topped with a 220-ton star that commemorates the site of the Battle of San Jacinto, the decisive battle of the Texas Revolution. The monument, constructed between 1936 and 1939 and dedicated on April 21, 1939, is the world's tallest masonry column and is part of the San Jacinto Battleground State Historic Site. By comparison, the Washington Monument, the tallest stone monument in the world, is about 555 feet (169 m) tall. The column is an octagonal shaft topped with a Lone Star – the symbol of Texas. Visitors can take an elevator to the monument's observation deck for a view of Houston and the San Jacinto battlefield.
The San Jacinto Museum of History is located inside the base of the monument and focuses on the history of the Battle of San Jacinto and Texas culture and heritage. The San Jacinto Battlefield, of which the monument is a part, was designated a National Historic Landmark on December 19, 1960, and is therefore also automatically listed on the National Register of Historic Places. It was designated a Historic Civil Engineering Landmark in 1992.
History
In 1856, the Texas Veterans Association began lobbying the state legislature to create a memorial to the men who died during the Texas Revolution. The legislature commemorated the final battle of the revolution in the 1890s, when funds were appropriated to purchase the land where the battle took place. After a careful survey to determine the boundaries of the original battle site, land was purchased for a new state park east of Houston, in 1897. This became San Jacinto Battleground State Historic Site.
The Daughters of the Republic of Texas began pressuring the legislature to provide an official monument at the site of the Battle of San Jacinto. The chairman of the Texas Centennial Celebrations, Jesse H. Jones, provided an idea for a monument to memorialize all Texans who served during the Texas Revolution. Architect Alfred C. Finn provided the final design, in conjunction with engineer Robert J. Cummins. In March 1936, as part of the Texas Centennial Celebration, ground was broken for the San Jacinto Monument. Construction began on April 21, 1936, the centennial anniversary date of the Battle of San Jacinto. The cornerstone was set one year later on April 21, 1937, and two years later construction ended, also on the anniversary date, April 21, 1939. Jesse H. Jones was in attendance at the dedication ceremony in 1939, when he, Sam Houston's last surviving son Andrew Jackson Houston, and others officially dedicated the monument. The project was completed in exactly three years at a cost of $1.5 million. The funds were provided by both the Texas legislature and the United States Congress.
From its opening, the monument has been run by the nonprofit association, the San Jacinto Museum of History Association. In 1966, the monument was placed under the control of the Texas Parks and Wildlife Department. The Parks Department allows the history association to continue its oversight of the monument.
The monument was renovated in 1983. In 1990, the base of the monument was redone to contain the San Jacinto Museum of History and the Jesse H. Jones Theatre for Texas Studies. The exterior of the monument underwent a further renovation in 1995, and the entire structure was renovated from 2004 through 2006.
Description
The San Jacinto monument is an octagonal column. It was built by W.S. Bellows Construction and primarily constructed of reinforced concrete. Its exterior is faced with Texas limestone from a quarry near the Texas State Capitol. It stands about 567 feet (173 m) tall and is the tallest monument column in the world, taller than the next tallest, the 170-metre (558 ft) Juche Tower in North Korea.
The base of the monument contains a museum and a 160-seat theater. The base is decorated with eight engraved panels depicting the history of Texas. The bronze doors which allow entry into the museum show the six flags of Texas. The shaft is square in cross-section where it rises from the base and narrows as it approaches the observation deck. At the top of the monument is a 220-ton, 34-foot (10 m) high star, representing the Lone Star of Texas. A reflecting pool mirrors the entire shaft.
As of 2006, approximately 250,000 people visited the monument each year, including 40,000 children on school trips.
Inscription
An inscription on the monument tells the story of the birth of Texas:
Gallery
See also
San Jacinto Battleground State Historic Site
San Jacinto Day
Notes
References
External links
Texas Parks and Wildlife Department: Official San Jacinto Monument webpage
San Jacinto Museum of History
The Portal to Texas History: Images of the San Jacinto Monument
American Society of Civil Engineers, Historic Civil Engineering Landmarks: San Jacinto Monument
Monuments and memorials in Texas
Obelisks in the United States
Texas Revolution
Buildings and structures in Harris County, Texas
Buildings and structures completed in 1939
Historic Civil Engineering Landmarks
National Register of Historic Places in Houston
Tourist attractions in Harris County, Texas
Art Deco architecture in Texas
Art Deco sculptures and memorials
Works Progress Administration in Texas
Alfred C. Finn buildings
1939 establishments in Texas | San Jacinto Monument | [
"Engineering"
] | 1,048 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
1,577,896 | https://en.wikipedia.org/wiki/Symmetric%20graph | In the mathematical field of graph theory, a graph is symmetric (or arc-transitive) if, given any two pairs of adjacent vertices and of , there is an automorphism
such that
and
In other words, a graph is symmetric if its automorphism group acts transitively on ordered pairs of adjacent vertices (that is, upon edges considered as having a direction). Such a graph is sometimes also called 1-arc-transitive or flag-transitive.
By definition (ignoring u1 and u2), a symmetric graph without isolated vertices must also be vertex-transitive. Since the definition above maps one edge to another, a symmetric graph must also be edge-transitive. However, an edge-transitive graph need not be symmetric, since a–b might map to c–d, but not to d–c. Star graphs are a simple example of being edge-transitive without being vertex-transitive or symmetric. As a further example, semi-symmetric graphs are edge-transitive and regular, but not vertex-transitive.
Every connected symmetric graph must thus be both vertex-transitive and edge-transitive, and the converse is true for graphs of odd degree. However, for even degree, there exist connected graphs which are vertex-transitive and edge-transitive, but not symmetric. Such graphs are called half-transitive. The smallest connected half-transitive graph is Holt's graph, with degree 4 and 27 vertices. Confusingly, some authors use the term "symmetric graph" to mean a graph which is vertex-transitive and edge-transitive, rather than an arc-transitive graph. Such a definition would include half-transitive graphs, which are excluded under the definition above.
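For very small graphs the definition can be checked directly by brute force, enumerating every vertex permutation. The Python sketch below is illustrative only (the function name and example graphs are not from the article, and the approach is feasible only for a handful of vertices); it confirms that the 4-cycle is symmetric while the star K1,2 (the path on three vertices) is edge-transitive but not symmetric:

```python
from itertools import permutations

def is_arc_transitive(vertices, edges):
    """Check arc-transitivity by brute force; only feasible for tiny graphs."""
    edge_set = {frozenset(e) for e in edges}
    # Automorphisms: permutations that map every edge onto an edge.
    autos = [p for p in permutations(vertices)
             if all(frozenset((p[u], p[v])) in edge_set for u, v in edges)]
    # Arcs: edges considered with a direction.
    arcs = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    return all(any((p[u], p[v]) == (x, y) for p in autos)
               for u, v in arcs for x, y in arcs)

print(is_arc_transitive(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True: the 4-cycle
print(is_arc_transitive(range(3), [(0, 1), (1, 2)]))                  # False: the star K1,2
```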
A distance-transitive graph is one where instead of considering pairs of adjacent vertices (i.e. vertices a distance of 1 apart), the definition covers two pairs of vertices, each the same distance apart. Such graphs are automatically symmetric, by definition.
A t-arc is defined to be a sequence of t + 1 vertices, such that any two consecutive vertices in the sequence are adjacent, and with any repeated vertices being more than 2 steps apart. A t-transitive graph is a graph such that the automorphism group acts transitively on t-arcs, but not on (t + 1)-arcs. Since 1-arcs are simply edges, every symmetric graph of degree 3 or more must be t-transitive for some t, and the value of t can be used to further classify symmetric graphs. The cube is 2-transitive, for example.
Note that conventionally the term "symmetric graph" is not complementary to the term "asymmetric graph," as the latter refers to a graph that has no nontrivial symmetries at all.
Examples
Two basic families of symmetric graphs for any number of vertices are the cycle graphs (of degree 2) and the complete graphs. Further symmetric graphs are formed by the vertices and edges of the regular and quasiregular polyhedra: the cube, octahedron, icosahedron, dodecahedron, cuboctahedron, and icosidodecahedron. Extension of the cube to n dimensions gives the hypercube graphs (with 2^n vertices and degree n). Similarly, extension of the octahedron to n dimensions gives the graphs of the cross-polytopes; this family of graphs (with 2n vertices and degree 2n − 2) is sometimes referred to as the cocktail party graphs – they are complete graphs with a set of edges making a perfect matching removed. Additional families of symmetric graphs with an even number of vertices 2n are the evenly split complete bipartite graphs Kn,n and the crown graphs on 2n vertices. Many other symmetric graphs can be classified as circulant graphs (but not all).
The Rado graph forms an example of a symmetric graph with infinitely many vertices and infinite degree.
Cubic symmetric graphs
Combining the symmetry condition with the restriction that graphs be cubic (i.e. all vertices have degree 3) yields quite a strong condition, and such graphs are rare enough to be listed. They all have an even number of vertices. The Foster census and its extensions provide such lists. The Foster census was begun in the 1930s by Ronald M. Foster while he was employed by Bell Labs, and in 1988 (when Foster was 92) the then current Foster census (listing all cubic symmetric graphs up to 512 vertices) was published in book form. The first thirteen items in the list are cubic symmetric graphs with up to 30 vertices (ten of these are also distance-transitive; the exceptions are as indicated):
Other well known cubic symmetric graphs are the Dyck graph, the Foster graph and the Biggs–Smith graph. The ten distance-transitive graphs listed above, together with the Foster graph and the Biggs–Smith graph, are the only cubic distance-transitive graphs.
Properties
The vertex-connectivity of a symmetric graph is always equal to the degree d. In contrast, for vertex-transitive graphs in general, the vertex-connectivity is bounded below by 2(d + 1)/3.
A t-transitive graph of degree 3 or more has girth at least 2(t – 1). However, there are no finite t-transitive graphs of degree 3 or more for t ≥ 8. In the case of the degree being exactly 3 (cubic symmetric graphs), there are none for t ≥ 6.
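Both properties are easy to spot-check numerically. Assuming the networkx library is available, the following Python sketch verifies that the vertex-connectivity equals the degree for two well-known cubic symmetric graphs (the specific graphs are chosen here purely for illustration):

```python
import networkx as nx

# The Petersen graph and the 3-cube are cubic (d = 3) symmetric graphs,
# so their vertex-connectivity should equal their degree.
for G in (nx.petersen_graph(), nx.hypercube_graph(3)):
    d = min(deg for _, deg in G.degree())
    print(d, nx.node_connectivity(G))  # expect 3 3 for both graphs
```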
See also
Algebraic graph theory
Gallery of named graphs
Regular map
References
External links
Cubic symmetric graphs (The Foster Census). Data files for all cubic symmetric graphs up to 768 vertices, and some cubic graphs with up to 1000 vertices. Gordon Royle, updated February 2001, retrieved 2009-04-18.
Trivalent (cubic) symmetric graphs on up to 10000 vertices. Marston Conder, 2011.
Algebraic graph theory
Graph families
Regular graphs | Symmetric graph | [
"Mathematics"
] | 1,168 | [
"Mathematical relations",
"Graph theory",
"Algebra",
"Algebraic graph theory"
] |
1,578,199 | https://en.wikipedia.org/wiki/NGC%201049 | NGC 1049 is a globular cluster located in the Local Group galaxy of the Fornax Dwarf, visible in the constellation of Fornax. At a distance of 460,000 light years, it is visible in moderate sized telescopes, while the parent galaxy is nearly invisible. This globular cluster was discovered by John Herschel on October 19, 1835, while the parent galaxy was discovered in 1938 by Harlow Shapley.
References
External links
Globular clusters
1049
Fornax
Milky Way Subgroup | NGC 1049 | [
"Astronomy"
] | 105 | [
"Fornax",
"Constellations"
] |
1,578,212 | https://en.wikipedia.org/wiki/Hille%E2%80%93Yosida%20theorem | In functional analysis, the Hille–Yosida theorem characterizes the generators of strongly continuous one-parameter semigroups of linear operators on Banach spaces. It is sometimes stated for the special case of contraction semigroups, with the general case being called the Feller–Miyadera–Phillips theorem (after William Feller, Isao Miyadera, and Ralph Phillips). The contraction semigroup case is widely used in the theory of Markov processes. In other scenarios, the closely related Lumer–Phillips theorem is often more useful in determining whether a given operator generates a strongly continuous contraction semigroup. The theorem is named after the mathematicians Einar Hille and Kōsaku Yosida who independently discovered the result around 1948.
Formal definitions
If X is a Banach space, a one-parameter semigroup of operators on X is a family of operators indexed on the non-negative real numbers,
{T(t)}, t ∈ [0, ∞), such that T(0) = I and T(s + t) = T(s) T(t) for all s, t ≥ 0.
The semigroup is said to be strongly continuous, also called a (C0) semigroup, if and only if the mapping t ↦ T(t)x
is continuous for all x ∈ X, where [0, ∞) has the usual topology and X has the norm topology.
The infinitesimal generator of a one-parameter semigroup T is an operator A defined on a possibly proper subspace of X as follows:
The domain of A is the set of x ∈ X such that (T(h)x − x)/h
has a limit as h approaches 0 from the right.
The value of Ax is the value of the above limit. In other words, Ax is the right-derivative at 0 of the function t ↦ T(t)x.
The infinitesimal generator of a strongly continuous one-parameter semigroup is a closed linear operator defined on a dense linear subspace of X.
The Hille–Yosida theorem provides a necessary and sufficient condition for a closed linear operator A on a Banach space to be the infinitesimal generator of a strongly continuous one-parameter semigroup.
Statement of the theorem
Let A be a linear operator defined on a linear subspace D(A) of the Banach space X, ω a real number, and M > 0. Then A generates a strongly continuous semigroup T that satisfies ||T(t)|| ≤ M e^(ωt) if and only if
A is closed and D(A) is dense in X,
every real λ > ω belongs to the resolvent set of A and for such λ and for all positive integers n, ||(λI − A)^(−n)|| ≤ M/(λ − ω)^n.
Hille–Yosida theorem for contraction semigroups
In the general case the Hille–Yosida theorem is mainly of theoretical importance since the estimates on the powers of the resolvent operator that appear in the statement of the theorem can usually not be checked in concrete examples. In the special case of contraction semigroups (M = 1 and ω = 0 in the above theorem) only the case n = 1 has to be checked and the theorem also becomes of some practical importance. The explicit statement of the Hille–Yosida theorem for contraction semigroups is:
Let A be a linear operator defined on a linear subspace D(A) of the Banach space X. Then A generates a contraction semigroup if and only if
A is closed and D(A) is dense in X,
every real λ > 0 belongs to the resolvent set of A and for such λ, ||(λI − A)^(−1)|| ≤ 1/λ.
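In finite dimensions every square matrix generates a semigroup through the matrix exponential, which makes the criterion easy to illustrate concretely. The Python sketch below uses an arbitrarily chosen 2×2 Markov jump-process generator (an assumption made purely for illustration) and checks the resolvent bound and the contraction property in the ∞-norm:

```python
import numpy as np
from scipy.linalg import expm

# A two-state Markov generator: rows sum to zero, off-diagonals non-negative.
A = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
I = np.eye(2)

# Resolvent bound: ||(lam*I - A)^(-1)|| <= 1/lam for every lam > 0.
for lam in (0.5, 1.0, 10.0):
    r = np.linalg.norm(np.linalg.inv(lam * I - A), np.inf)
    print(lam, r <= 1.0 / lam + 1e-12)

# Contraction property: ||e^(tA)|| <= 1 for every t >= 0.
for t in (0.1, 1.0, 5.0):
    print(t, np.linalg.norm(expm(t * A), np.inf) <= 1.0 + 1e-12)
```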
See also
C0 semigroup
Lumer–Phillips theorem
Stone's theorem on one-parameter unitary groups
Notes
References
Semigroup theory
Theorems in functional analysis | Hille–Yosida theorem | [
"Mathematics"
] | 709 | [
"Theorems in mathematical analysis",
"Mathematical structures",
"Theorems in functional analysis",
"Fields of abstract algebra",
"Algebraic structures",
"Semigroup theory"
] |
1,578,455 | https://en.wikipedia.org/wiki/Jean-Baptiste%20Bory%20de%20Saint-Vincent | Jean-Baptiste Geneviève Marcellin Bory de Saint-Vincent was a French naturalist, officer and politician. He was born on 6 July 1778 in Agen (Lot-et-Garonne) and died on 22 December 1846 in Paris. Biologist and geographer, he was particularly interested in volcanology, systematics and botany.
Life
Youth
Jean-Baptiste Bory de Saint Vincent was born at Agen on 6 July 1778. His parents were Géraud Bory de Saint-Vincent and Madeleine de Journu; his father's family were petty nobility who played important roles at the bar and in the judiciary, during and after the French Revolution. Instilled with sentiments hostile to the revolution from childhood, he studied first at the college of Agen, then with his uncle Journu-Auber in Bordeaux in 1787. He may have attended courses in medicine and surgery from 1791 to 1793. During the Reign of Terror in 1793, his family was persecuted and took refuge in the Landes.
In 1794, as a precocious naturalist, aged 15, Bory was instrumental in freeing from prison the entomologist Pierre André Latreille, whose early work he had read, and in saving Latreille from deportation to the penal colony of Cayenne. Latreille later became one of the leading entomologists of his time; he and Bory remained lifelong friends. A student of geologist and mineralogist Déodat Gratet de Dolomieu at the Paris School of Mines, Bory sent his first scholarly publications to the Academy of Bordeaux the same year, and consequently came into contact with many established naturalists.
After the death of his father, he joined the French Revolutionary armies in 1799. Thanks to the recommendation of Jean-Gérard Lacuée, also from Agen, he was soon appointed second lieutenant. He served first in the Army of the West, then in the Army of the Rhine under the orders of General Jean Victor Marie Moreau. He was then assigned to Brittany and moved to Rennes; it was at this time that he acquired his Bonapartist sentiments.
First expeditions in the oceans of Africa
In 1799, Bory learned about the upcoming departure of a scientific expedition to Australia organized by the government and obtained, thanks to his uncle and to the famous naturalist Bernard-Germain de Lacépède, the position of chief botanist aboard one of the three participating corvettes. Thus, after having left the Army of the West at the end of August and receiving from the Ministry of War an indefinite leave, Bory left Paris on 30 September and embarked in Le Havre on 19 October 1799 aboard the corvette commanded by Captain Nicolas Baudin, Le Naturaliste.
After several stops in Madeira, the Canary islands, and Cape Verde and then rounding the Cape of Good Hope, towards the middle of the trip Bory suddenly left the ship of Captain Baudin, with whom he was in conflict, and explored alone (and with limited resources) several islands of the African seas. He visited Mauritius in March 1800 during a stopover. From there, he sailed to the neighboring island of Réunion, where in October 1801 he ascended the Piton de la Fournaise, the active volcano of the island, and wrote the first general scientific description of it. He gave the name of his former professor Dolomieu, of whose death he had just learned, to one of the craters he described as a mamelon. He gave his own name to the summit crater, the Bory crater. On the way back, he continued his geographical, physical and botanical explorations on the island of Saint Helena.
Bory was back in France by 11 July 1802 and learned that his mother had died during his absence. He published his Essai sur les Îles Fortunées (Essay on the archipelago of the Canary islands), which earned him his election first as correspondent of the National Museum of Natural History in August 1803, and later as correspondent first class of the Institut de France (division of Physical Sciences) in the spring of 1808. In 1804, he published his Voyage dans les quatre principales îles des mers d'Afrique.
Military campaigns
Following his return, he resumed service in the army and, promoted to captain, he was transferred to the 5th Dragoon Regiment of cavalry, in the 3rd Army Corps of Marshal Davout, of which he became assistant staff captain on 3 October 1804. He was then assigned to the Camp of Boulogne for the creation of Emperor Napoleon I's Grande Armée.
From 1805 to 1814, Bory followed the greater part of Napoleon's campaigns within the Grande Armée. In 1805, he took part in the campaign of Austria as captain of dragoons and was present at the Battle of Ulm (15-20 October 1805) and at the Battle of Austerlitz (2 December 1805). Captain Bory then spent two years in Prussia and Poland and fought at the Battle of Jena (14 October 1806) and at the Battle of Friedland (14 June 1807). He continued drawing military maps of Franconia and Swabia and during his visits to Bavaria, Vienna and Berlin, where he found his own works translated into German, he took the opportunity to meet several scientists including the botanists Nikolaus Joseph von Jacquin and Carl Ludwig Willdenow, who received him with open arms and presented him with valuable gifts. In October 1808, he served on the staff of Marshal Ney, which he soon left to be attached to Marshal Soult, Duke of Dalmatia, as aide-de-camp, in October 1809. Having been promoted to major, Bory was mainly involved in military reconnaissance thanks to his skills in graphic work. From 1809 to 1813, he took part in the French campaign of Spain and distinguished himself at the Siege of Badajoz in the spring of 1811, at the Battle of Quebara and at the Battle of Albuera (16 May 1811). Events having placed him at the head of the troops that formed the garrison of Agen, he found himself commanding soldiers from his hometown for about two weeks. In May 1811, he became squadron leader and was then appointed Knight of the Legion of Honor and attained the rank of lieutenant colonel by the end of the year.
Alongside Soult, Bory hastily left Spain to take part in the German campaign, fighting in the Battle of Lützen (2 May 1813) and in the Battle of Bautzen (20-21 May 1813). After these victories, he returned to his homeland for the campaign of France of 1814 and fought at the Battle of Orthez (27 February 1814). He also took part in the Battle of Toulouse (10 April 1814), and on the following day organized troops of partisans and scouts in his own region of Agen. After the first abdication of Napoleon I in April 1814 and his exile to the island of Elba, of which Bory learned at Agen on 13 April 1814, he went to Paris.
Marshal Soult, rallying to the new government and having been appointed Minister of War, summoned Bory to his staff and appointed him to the rank of colonel. He also offered Bory, on 10 October 1814, the service of the ministry's Dépôt de la Guerre (a depository of maps and archives), to which his topographic work entitled him. He remained there until his proscription on 25 July 1815. Bory worked on scientific and literary works as well, and took part in the writing of the satirical liberal, anti-monarchist and pro-Bonapartist newspaper, the Nain Jaune.
Political exile
On the return of Napoleon from exile, Bory was elected by the college of the department of Lot-et-Garonne, on 16 May 1815, to the office of representative of Agen at the Chamber of the Hundred Days and sat with the liberals. He proclaimed the constitution, gave a resounding speech before the tribune, and virulently opposed the Minister of Police, Joseph Fouché, Duke of Otranto.
Absent at Waterloo, his mandate as deputy confining him to the legislative body, he saw the abdication of Napoleon I and the return of king Louis XVIII. Placed by Fouché on the lists of proscription by the Ordonnance of 24 July 1815, which condemned 57 persons for having served Napoleon during the Hundred Days after having pledged allegiance to Louis XVIII, Bory first took refuge in the valley of Montmorency, from where, hidden, he published his Justification de la conduite et des opinions de M. Bory de Saint-Vincent. Then, the amnesty law of 12 January 1816 was proclaimed by the King, condemning Bory to exile, and he went to Liège under a false name. First invited by the King of Prussia (thanks to Bory's friendship with the naturalist Alexander von Humboldt) to stay in Berlin, then in Aachen, he was expelled after eighteen months. He refused to submit to the decision which assigned him to Königsberg or Prague for his residence, and when he was offered a commission as General in Bolívar's new Republic of Colombia by botanist and friend (and vice-president) Francisco Antonio Zea, he declined. He finally managed to reach Holland, disguised as a brandy merchant and with a false passport, then Brussels, where he met Emmanuel Joseph Sieyès and where he lived until 1820. With Auguste Drapiez and Jean-Baptiste Van Mons, he founded and became one of the scientific directors of the Annales générales des Sciences physiques, edited in Brussels by the printer Weissenbruch from 1819 to 1821. The articles, written by international scientific luminaries, were illustrated with lithographs printed first by Duval de Mercourt and then by Marcellin Jobard.
On 1 January 1820, Bory was finally allowed to return to France. Dismissed from the army and deprived of pay, he returned to Paris where he lived until 1825. He was obliged to devote himself entirely to editorial work (on his Annales of Brussels in particular) and he collaborated with various liberal newspapers, including the Courrier français, which reserved for him the drafting of the reports on the sessions of the Chamber of Deputies. He gave up later, when, devoting himself entirely to science, he found in the many books he was selling to booksellers an honorable means of livelihood. However, in 1823, he fought a duel with a pistol and was wounded in the foot, and, in 1825, he was thrown in prison at Sainte-Pélagie for debts, where he remained until 1827.
It was during this productive period in 1822, that Bory, along with most of the scientists of his time, including Arago, Brongniart, Drapiez, Geoffroy de Saint-Hilaire, von Humboldt, de Jussieu, de Lacépède, Latreille, etc., began the writing of one of his greatest works, the Dictionnaire classique d'histoire naturelle en 17 volumes (1822-1831).
Scientific Expedition to Morea (1829)
A war of Independence had been raging in Greece since 1821, but the Greek victories were short-lived and the Turkish-Egyptian troops had reconquered the Peloponnese in 1825. King Charles X, supported by a strong current of philhellenism, decided to intervene alongside the Greek insurgents. After the naval Battle of Navarino (20 October 1827), which saw the annihilation of the Turkish-Egyptian fleet by the Allied Franco-Russo-British fleet, a French expeditionary force of 15,000 men landed in the south-west of the Peloponnese in August 1828. The purpose of the Expédition de Morée was to liberate the area from the Turkish-Egyptian occupation forces and return it to the young independent Greek state; this would be accomplished in just one month.
Towards the end of the year 1828, the Viscount of Martignac, Interior minister of King Charles X and the real head of the government at that time (as well as being a childhood friend of Bory in Bordeaux), charged six academicians of the Institut de France (Georges Cuvier, Étienne Geoffroy Saint-Hilaire, Charles-Benoît Hase, Désiré Raoul-Rochette, Jean-Nicolas Huyot and Jean-Antoine Letronne) with appointing the chief officers and members of each section of a scientific committee to be attached to the Morea expedition, just as had been done previously with the Commission of Sciences and Arts during Napoleon's campaign in Egypt. Bory was thus appointed director of the commission. The minister and the academicians also determined the routes and objectives. Bory wrote, "Messrs. De Martignac and Siméon had asked me expressly not to restrict my observations to Flies and Herbs, but to extend them to places and to men later...".
Bory and his team of 19 scientists (including Edgar Quinet, Abel Blouet and Pierre Peytier), representing various scientific disciplines such as natural history and antiquities (archaeology, architecture and sculpture), disembarked from the frigate Cybèle at Navarino on 3 March 1829 and there joined General Nicolas Joseph Maison, who was commanding the French expeditionary force. Bory then met General Antoine Simon Durrieu, chief of staff of the expedition, who was also from the Landes region and with whom Bory had been connected for a decade. Bory stayed in Greece for 8 months, until November 1829, and explored the Peloponnese, Attica and the Cyclades. The scientific work of the commission was of major importance in increasing knowledge of the country. The topographic maps they produced, which were widely acknowledged, were of an unprecedentedly high quality, and the surveys, drawings, cross-sections, plans and proposals for the theoretical restoration of the monuments were a new attempt to systematically and exhaustively catalogue the ancient Greek vestiges. The Morea expedition and its scientific publications offered a near-complete description of the regions visited and formed a scientific, aesthetic and human inventory that remained for a long time one of the best achieved about Greece. Bory registered all the results of his research and published them later in his major opus of 1832.
Academic and political career
After his return from Greece, Bory pursued his scholarly career; in 1830, he presented his candidacy at the Institute de France for the vacant seat left by the death of Jean-Baptiste de Lamarck, obtaining the votes of Arago, Cuvier, Fourier and Thénard among others. He also participated in the founding of the Entomological Society of France, the oldest entomological society in the world, on 29 February 1832, alongside his old friend Pierre-André Latreille.
In 1830, while Bory was occupied writing his work on Morea (by ministry order), the July Ordinances promulgated by King Charles X to obtain elections more favorable to the Ultra-royalists, and which suspended freedom of the press, revived his political sentiments. He fought on the barricades of the Faubourg Saint-Germain and was first at the Hôtel de Ville. After these Three Glorious [Days] (the July Revolution) and the new appointment of Marshal Soult to the Ministry of War on 3 November 1830, Bory was finally, after 15 years, reinstated in the army, to his former rank of Colonel with the General Staff and to his post at the war depository, which he had held in 1815. He remained there throughout the course of the July Monarchy (1830-1848) until 1842, four years before his death. On 1 May 1831, Bory was appointed Officer of the Legion of Honor.
Around the same time, on 5 July 1831, Bory was elected deputy of the 3rd college of Lot-et-Garonne (Marmande) to replace his friend the Viscount of Martignac. In his profession of faith, he denounced the hereditary titles of the peerage, which he declared to be contradictory to the principle of equality before the law, spoke against the incompatibility of the legislature's mandate with a public function, and advocated for the revision of the municipal, electoral and national guard laws. The conservative tendencies of the majority forced him, after only two months in that position, to resign as deputy on 19 August 1831. He was replaced in October by Monsieur de Martignac.
In 1832, Bory published the report of his exploration in Greece in his work, the Relation du voyage de la commission scientifique de Morée dans le Péloponnèse, les Cyclades et l'Attique, for which he received many accolades, and which allowed him to be finally elected a member of the French Academy of Sciences on 17 November 1834.
Scientific Expedition of Algeria (1839)
Between 1835 and 1838, Bory sat on the General Staff commission and republished his Justifications of 1815 under the title of Mémoires in 1838. On 24 August 1839, a commission for the scientific exploration of Algeria (Commission d'exploration scientifique d'Algérie), on the model of those put in place in Egypt (1798) and in Morea (1829), was established for the newly conquered, but not yet pacified, Algeria. Bory de Saint-Vincent, who had been one of its promoters, became its president as a staff colonel and went there, accompanied by his collaborators, to conduct identification, survey and sampling work and scientific exploration. He arrived in Algiers in the first days of January 1840 and visited other cities of the coast. He left Algeria in the first quarter of 1842. Bory published numerous books on the country, such as Notice sur la commission exploratrice et scientifique d'Algérie (1838), Sur la flore de l'Algérie (1843), Sur l'anthropologie de l'Afrique française (1845) and the Exploration scientifique de l'Algérie pendant les années 1840, 1841, 1842. Sciences physiques (1846-1867).
Last years
Although sick, Bory was still thinking about making a trip to the islands of the Indian Ocean or to Algeria. He died, however, on 22 December 1846, at the age of sixty-eight, of a heart attack, in his apartment on the 5th floor, 6 rue de Bussy in Paris. He left behind him only debts and his herbarium, which was sold the following year. He was buried in the Père-Lachaise Cemetery (49th Division).
An indefatigable worker, Bory wrote on several branches of natural history, including the study of reptiles, fish, microscopic animals, plants, cryptogams, etc. He was the main editor of the Bibliothèque physico-économique, of the 17-volume Dictionnaire classique d'histoire naturelle and of the scientific part of the Expédition de Morée. He participated in composing for the Encyclopédie Méthodique the sections concerning the zoophytes and the worms, as well as the volumes of physical geography and the atlas that accompanied them. He also wrote well-composed geographical summaries, especially the one concerning Spain, and contributed many articles notable for the originality of their ideas to the Encyclopédie moderne.
Bory was a proponent of the theory of the transmutation of species alongside, among others, Jean-Baptiste de Lamarck. According to historian Adrian Desmond, Bory was a leading anti-Cuvierian materialist who blended the best of Lamarck's philosophy with Geoffroy's higher anatomy. His Dictionnaire classique d'histoire naturelle already contained information about Lamarck and the species debate, and is notable in that a copy of it was carried by Charles Darwin on the Beagle.
Bory was also a fervent defender of spontaneous generation (theme of the famous controversy between Louis Pasteur and Félix Archimède Pouchet) and an ardent polygenist. He thought that the different human "races", according to the sense of the time, were true species, each having its own origin and history. He was finally a notorious opponent of slavery; Victor Schœlcher quotes him among his scientific allies in favor of abolition.
Toponymy
In tribute to Bory's pioneering exploration of the volcano, one of the two summit craters of Piton de la Fournaise on the island of Réunion was named the Bory crater by his companion de Jouvancourt during their ascent of the volcano in 1801.
A primary school was named after him in Saint-Denis (La Réunion) (École primaire publique Bory de Saint Vincent).
A college was named after him in Saint-Philippe (La Réunion) (Collège Bory de Saint-Vincent).
Streets were named after Bory in Saint-Pierre (La Réunion) and in his hometown of Agen (Lot-et-Garonne).
Private life
In September 1802, at Rennes where he was garrisoned, Bory married Anne-Charlotte Delacroix of la Thébaudais, with whom he had two daughters: Clotilde, born on 7 February 1801, and Augustine, born on 25 May 1803, whom he called "his little Antigone" and with whom he remained very close all his life. His marriage, "contracted too young to be a happy one", did not last. His wife died in 1823, after their separation.
When he was proscribed by the Ordonnance of 24 July 1815 and was fleeing to Rouen, he met the actress Maria Gros. She followed him throughout his exile between 1815 and 1820; they began to cohabitate in 1817. On 17 May 1818, their first daughter, Cassilda, was born. A second daughter, Athanalgide, was born on 22 July 1823, after the separation of her parents.
Decorations and Honors
Chevalier de la Légion d'Honneur (Knight rank - May 1811)
Officier de la Légion d'Honneur (Officer rank - 1 May 1831)
Publications
A complete list of Jean-Baptiste Bory de Saint-Vincent's publications can be found at the end of the introduction by Philippe Lauzun (pp. 52–55) in Bory de Saint-Vincent, Correspondence, published and annotated by Philippe Lauzun, Maison d'édition et imprimerie moderne, 1908. (Read online on Archive.org)
1803 : Essais sur Les Isles Fortunées et l'Antique Atlantide ou Précis de l'Histoire générale de l'Archipel des Canaries, (on the website of the National Library of France)
1804 : Voyage dans les quatre principales îles des mers d'Afrique, Ténériffe, Maurice, Bourbon et Sainte-Hélène. – three volumes completed by an atlas.
1808 : Mémoires sur un genre nouveau de la cryptogamie aquatique, nommé Thorea, Mémoire sur le genre Lemanea de la famille des Conferves, Mémoire sur le genre Batrachosperma de la famille des Conferves, Mémoire sur le genre Draparnaldia de la famille des Conferves. Paris, Belin. (on Google books)
1808 : Mémoire sur les forêts souterraines de Wolfesck en haute Autriche, Berlin, Gesell. Nat. Freunde Mag. II, p. 295-302.
1809 : Mémoire sur le genre Lemanea de la famille des Conferves, Berlin Gesell. Nat. Freunde Mag. III, p. 274-281.
1815 : Chambre des représentants, Rapport fait à la Chambre des représentants par M. Bory de Saint-Vincent, au nom des députés à l'armée, séance du 1er juillet 1815. in-8°, Paris, imprimerie de la Chambre des représentants.
1815 : Justification de la conduite et des opinions de M. Bory de Saint-Vincent, membre de la chambre des représentants et proscrit par l'ordonnance du 24 juillet 1815, Paris, Chez les marchands de nouveautés (ou Paris, chez Eymery, août 1815), 110 p. (on Google books)
1816 : Lamuel ou le livre du Seigneur, dédié à M. de Chateaubriand, traduction d'un manuscrit hébreu exhumé de la bibliothèque ci-devant impériale. Histoire authentique de l'Empereur Appolyon et du roi Béhémot par le très Saint-Esprit, Liège et Paris, P. J. Collardin, Frères Michau, in-18°, 232 p.
1819-1821 (de juillet 1819 à juin 1821) : Annales générales des sciences physiques, par MM. Bory de Saint-Vincent, Pierre Auguste Joseph Drapiez et Jean-Baptiste Van Mons, , Bruxelles, Weissenbruch. (on Google books)
1821 : Voyage souterrain, ou, Description du plateau de Saint-Pierre de Maestricht et de ses vastes cryptes, Ponthieu libraire Palais-Royal, Galerie de bois no. 252, Paris.
1822-1831 : Dictionnaire classique d'histoire naturelle, par Messieurs Audoin, Bourdon, Brongniart, de Candolle, Daudebard de Férussac, Desmoulins, Drapiez, Edwards, Flourens, Geoffroy de Saint-Hilaire, Jussieu, Kunth, de Lafosse, Lamouroux, Latreille, Lucas fils, Presles-Duplessis, Prévost, Richard, Thiébaut de Bernard. Ouvrage dirigé par Bory de Saint-Vincent, Paris, Rey et Gravier, Baudouin frères, 1822-1831, 17 volumes in-8, 160 planches gravées et coloriées. (on the website of the National Library of France)
1823 : Guide du voyageur en Espagne, Librairie Louis Jouanet, Paris, 1823. (on the website of the National Library of France)
1823 : Discours préliminaire à Histoire de la Grèce : description des Îles ioniennes, éditions Dondey-Dupré Paris, 1823. (on Google books)
1823-1832 : Encyclopédie moderne, ou dictionnaire abrégé des hommes et des choses, des sciences, des lettres et des arts, avec l'indication des ouvrages ou les divers sujets sont développés et approfondis par Eustache-Marie Courtin, Paris, Lejeune.
1824 : Tableau encyclopédique et méthodique des trois règnes de la nature, contenant l'Helminthologie ou les vers infusoires, les vers intestinaux, les vers mollusques, etc. En quatre volumes : vol.1, 1791, ( à 189) par Bruguière chez Panckoucke ; vol.2, 1797, ( à 286) chez Agasse ; vol.3, an VI (1797), () par Lamarck chez Agasse ; vol.4, 1816, ( à 488) par Lamarck ; vingt-troisième partie, mollusques et polypes divers, par MM. Jean-Baptiste Lamarck, Jean-Guillaume Bruguiere, Jean-Baptiste Bory de Saint-Vincent, Isaac Lea, Dall William Healey, Otto Frederick Müller, Paris, chez Mme Veuve Agasse et Paris, Panckoucke, 1791-1824.
1825 : L'homme (homo), essai zoologique sur le genre humain, (extrait du DCHN), première édition, Paris, Le Normand Fils, 2 vol., vol.1 325p., vol.2 251p., In-8°. (on Archive.org)
1826-1830 : Voyage autour du monde, exécuté par ordre du Roi, sur la corvette de Sa Majesté, La Coquille, pendant les années 1822, 1823,1824 et 1825... Lesson, René-Primevère, Bory de Saint-Vincent, JBGM, Brongniart, Adolphe, Dumont d'Urville, Duperrey, Louis-Isidore, (Botanique par MM. Dumont d'Urville, Bory de Saint-Vincent et Ad. Brongniart), Paris, Arthus Bertrand. (on the website of the National Library of France)
1826 : Essai d'une classification des animaux microscopiques, imprimerie Mme Veuve Agasse, Paris, 1826. (on the website of the National Library of France)
1826 : Résumé géographique de la Péninsule Ibérique, éditions A. Dupont et Roret, Paris. (on the website of the National Library of France)
1827 : 2nd edition of Essai sur l'Homme.
1827-1831 : Bibliothèque physico-économique, instructive et amusante. Ou Journal des découvertes et perfectionnements de l'industrie nationale et étrangère, de l'économie rurale et domestique, de la physique, la chimie, l'histoire naturelle, la médecine domestique et vétérinaire, enfin des sciences et des arts qui se rattachent aux besoins de la vie, rédigée par Bory de Saint-Vincent et Julia-Fontenelle, Jean-Sébastien-Eugène, 1827-1831. Tome I (-X), Arthus Bertrand, Paris. 6 vol.
1830-1844 : Expédition d'Égypte. Histoire scientifique et militaire de l'Expédition française en Égypte J.B.G.M. Bory de Saint-Vincent - Paris , précédée d'une introduction présentant le tableau de l'Égypte ancienne et moderne d'Ali-Bey ; et suivie du récit des événements survenus en ce pays depuis le départ des Français et sous le règne de Mohamed-Ali, d'après les mémoires, matériaux, documents inédits fournis par divers membres de l'expédition, dont Chateaugiron, Desgenettes, Dulertre, Larrey … Rédaction réalisée par Étienne et Isidore Geoffroy Saint-Hilaire, Fortia d'Urban, Bory de Saint-Vincent, etc. 10 volumes in-8°, 2 volumes d'atlas, A.-J. Dénain, Paris.
1832-1838 : Expédition scientifique de Morée, Jean-Baptiste Bory de Saint-Vincent, Émile Le Puillon de Boblaye, Pierre Théodore Virlet d'Aoust, Étienne et Isidore Geoffroy de saint-hilaire, Gabriel Bibron, Gérard Paul Deshayes, Gaspard Auguste Brulle, Félix-Edouard Guerin-Meneville, Adolphe Brongniart, Louis-Anastase Chaubard, Commission scientifique de Morée ; 4 vol. in 4° et atlas, Paris, Strasbourg, F.G. Levrault. 1832 : Travaux de la section des Sciences Physiques, tome 1, sous la direction de Bory de Saint-Vincent. 1832 : Cryptogamie, avec atlas de 38 pl., section des sciences Physiques (281-337), tome 3 partie 2.
1838 : Note sur la commission exploratrice et scientifique d'Algérie présentée à son Excellence le ministre de la guerre, (16 octobre 1838.) 20 pp ; Imprimerie Cosson, Paris. (on the website of the National Library of France)
1845 : Sur l'anthropologie de l'Afrique française, lu à l'Académie royale des sciences dans sa séance du 30 juin 1845, Extrait du Magasin de zoologie, d'anatomie comparée et de paléontologie publié par M. Guérin-Méneville en octobre 1845, Paris, Imprimerie de Fain et Thunot, 19 p., pl.59 à 61.
1846-1867 : Exploration scientifique de l'Algérie (pendant les années 1840, 1841, 1842. Sciences physiques.) publiée par ordre du gouvernement. Sciences Naturelles, Botanique par MM. Bory de Saint-Vincent et Durieu de Maisoneuve, Paris, imprimerie impériale, Gide et Baudry, en 3 vol., in-fol., dont un atlas. (1846-1849) Vol. I, Flore d'Algérie. Cryptogamie, par Durieu de Maisoneuve, avec le concours de MM. Montagne, Bory de Saint-Vincent, L.-R., Tulasne, C. Tulasne, Leveille. Paris, imprimerie impériale, dans la collection Exploration scientifique de l'Algérie, publiée par ordre du Gouvernement, 600 p., 39 f. de pl. col. Vol. II Flore d'Algérie. Phanérogamie. Groupe des glumacées, par E. Cosson et Durieu de Maisonneuve. Vol. III Atlas.
Bibliography
Cited sources
Biography of Jean-Baptiste Bory de Saint-Vincent on the website of the French National Assembly: http://www2.assemblee-nationale.fr/sycomore/fiche/(num_dept)/16507
Germain Sarrut and B. Saint-Edme, Biographie des hommes du jour: industriels..., Volume 2, page 79, Henri Krabbe, Paris, 1836. (Read online)
Bory de Saint-Vincent, Correspondence, published and annotated by Philippe Lauzun, Maison d'édition et imprimerie moderne, 1908. (Read online)
Wladimir Brunet de Presle and Alexandre Blanchet, La Grèce depuis la conquête romaine jusqu'à nos jours, Firmin Didot, Paris, 1860. (Read online)
Georges Contogeorgis, Histoire de la Grèce, Hatier, coll. Nations d'Europe, Paris, 1992.
Serge Briffaud, L'Expédition scientifique de Morée et le paysage méditerranéen. in L'invention scientifique de la Méditerranée, p. 293.
Marie-Noëlle Bourguet, Bernard Lepetit, Daniel Nordman, Maroula Sinarellis, L'Invention scientifique de la Méditerranée. Égypte, Morée, Algérie., Éditions de l'EHESS, 1998. ()
Olga Polychronopoulou, Archéologues sur les pas d'Homère. La naissance de la protohistoire égéenne., Noêsis, Paris, 1999. ()
Yiannis Saïtas et al., L'œuvre de l'expédition scientifique de Morée 1829-1838, Edited by Yiannis Saïtas, Editions Melissa, 2011 (Part I) - 2017 (Part II).
Monique Dondin-Payre, La Commission d'exploration scientifique d'Algérie : une héritière méconnue de la Commission d'Égypte, Mémoires de l'académie des inscriptions et belles-lettres, tome XIV, 142 p., 11 fig., 1994.
Ed. Bonnet, Deux lettres de Bory de Saint-Vincent relatives aux travaux de la Commission d'Algérie, Bull. Société de Botanique de France, 1909, 4e, t. IX (56: 1-9.)
Hervé Ferrière, Bory de Saint-Vincent, militaire naturaliste entre Révolution et Restauration. PhD thesis submitted in 2001 to the doctoral school of Paris 1 University Sorbonne-Pantheon, director Pietro Corsi, 2006
Hervé Ferrière, 2009. Bory de Saint-Vincent: L'évolution d'un voyageur naturaliste. Syllepse.
James A. Second, Visions of Science: Books and Readers at the Dawn of the Victorian Age. University Of Chicago Press. p. 60., 2015.
Aldo Fascolo, The Theory of Evolution and Its Impact. Springer. p. 27, 2011.
Ann Thomson, Issues at stake in eighteenth-century racial classification , Cromohs, 8 (2003): 1-20
General works
Lucie Allorgue, La fabuleuse odyssée des plantes, Chez Lattès, Paris, 2003.
H. Baillon, Dictionnaire de botanique, Paris, Hachette, 1876, p. 456
P. Biers, L'Herbier tricolore de Bory de Saint-Vincent, Bull Muséum Histoire naturelle, n° 5
P. Biers, Bory de Saint-Vincent, chef directeur de l'Expédition scientifique de Morée, Bulletin Muséum Histoire Naturelle, n° 32, 1926,
P. Biers, L'herbier cryptogamique de Bory de Saint-Vincent au Muséum, Bulletin Muséum Histoire Naturelle No. 30, 1924,
P. Biers, Bory de Saint-Vincent à l'Île Bourbon, Revue de l'Agenais, 1927, t. 54, pp. 179–186 (Read online)
Adrien Blanchet, Le Voyage en Grèce de J.B. Bory de Saint-Vincent, bull. de l'association Guillaume Budé, Paris, 1829, p. 26-46
R. Bouvier, E. Maynial, Une aventure dans les mers australes, l'expédition du commandant Baudin, (1800-1803), Paris, 1947.
Christophe Brun, 2013, Découper la Terre, inventorier l'Homme : le planisphère de Bory de Saint-Vincent, 1827, Monde(s). Histoire, Espaces, Relations, May 2013, with colour insert, appendices on the journal's website (read online).
Juan C. Castañón y Francisco Quirós, La contribución de Bory de Saint-Vincent (1778-1846) al conocimiento geográfico de la Península Ibérica. Redescubrimiento de una obra cartográfica y orográfica olvidada. Ería. Revista cuatrimestral de Geografía, n° 64-65, 2004,
Pietro Corsi, Lamarck, genèse et enjeux du transformisme, 1770-1830, CNRS Éditions, Paris, 2001.
B. Dayrat, Les botanistes et la flore de France – trois siècles de découverte, Paris, Publication du Muséum National d'Histoire Naturelle, 2003.
Amédée Dechambre, Dictionnaire encyclopédique des sciences médicales, (première série), t. X, Ble-Bro, publié sous la direction de M. A. Dechambre, 1869.
Léon Dufour, Souvenirs d'un savant français à travers un siècle, (1780-1865.) Science et histoire, Paris, J. Rothschild, 1888, p. 43-45
Paul Guérin (dir.), Dictionnaire des dictionnaires, Paris, Librairie des imprimeries réunies, Motteroz, 1880.
Louis-Étienne Héricart de Thury, Notice sur le baron Bory de Saint-Vincent, Bruxelles, in-12. Note que Lacroix dit ne pas avoir retrouvée (Lacroix, p. 58.) Cette notice est parue en 1848 dans les Notices bio-bibliographiques de l'Académie des Sciences de Belgique, tome VIII, p. 832
Alfred Lacroix, Le naturaliste Bory de Saint-Vincent, Revue scientifique, 55e année n° 8, avril, 1917, Éloge du savant prononcé en octobre 1916 à l'Académie des Sciences.
Goulven Laurent, Paléontologie et évolution en France : 1800-1860. De Cuvier-Lamarck à Darwin, Paris, CTHS, 1987, p. 377-380.
P. Maryllis, Bory de Saint-Vincent, naturaliste et voyageur, 6 p., La Couronne agenaise, Villeneuve-sur-Lot, 1910
François Picavet, Les Idéologues, essai sur l'histoire des idées et des théories scientifiques, philosophiques, religieuses, etc. en France depuis 1789, édité en 1891.
André Role, Un destin hors série : la vie aventureuse d'un savant : Bory de Saint-Vincent 1778-1846, 256 pls. La Pensée universelle, Paris, 1973.
Thomas Rouillard, un herbier de Bory saint-Vincent à Angers Bulletin de la Société d'Études Scientifiques de l'Anjou, t.XVIII, 2004
C. Sauvageau, Bory de Saint-Vincent, d'après sa correspondance publiée par M. Lauzun, Journal de Botanique, 1908, 2e série, 1 : 198-222.
P. Tcherkezoff, Tahiti 1768, Jeunes filles en pleurs. Au vent des îles Éditions, Tahiti, Pirae, 2004.
Jean Tucoo-Chala, Le Voyage en Grèce d'un naturaliste gascon en 1829, Bull. de l'association Guillaume Budé, en deux parties : dans le bulletin 2 et 3 de l'année, bull.2, p. 190-200 et Bull.3 p. 300-320, Paris, 1976.
Pierre Vidal-Naquet, L'Atlantide, petite histoire d'un mythe platonicien, Paris, Les Belles Lettres, 2005.
Notes and references
Notes
References
1778 births
1846 deaths
People from Agen
French nobility
Members of the Chamber of Representatives (France)
Members of the 2nd Chamber of Deputies of the July Monarchy
Members of Parliament for Lot-et-Garonne
19th-century French botanists
19th-century French naturalists
Lamarckism
Proto-evolutionary biologists
Botanists with author abbreviations
Members of the French Academy of Sciences
French military personnel of the Napoleonic Wars
Officers of the Legion of Honour
Burials at Père Lachaise Cemetery
French entomologists | Jean-Baptiste Bory de Saint-Vincent | [
"Biology"
] | 9,034 | [
"Obsolete biology theories",
"Lamarckism",
"Non-Darwinian evolution",
"Biology theories",
"Proto-evolutionary biologists"
] |
1,578,810 | https://en.wikipedia.org/wiki/Photogram | A photogram is a photographic image made without a camera by placing objects directly onto the surface of a light-sensitive material such as photographic paper and then exposing it to light.
The usual result is a negative shadow image showing variations in tone that depend upon the transparency of the objects used. Areas of the paper that have received no light appear white; those exposed for a shorter time or through transparent or semi-transparent objects appear grey, while fully exposed areas are black in the final print.
The technique is sometimes called cameraless photography. It was used by Man Ray in his rayographs. Other artists who have experimented with the technique include László Moholy-Nagy, Christian Schad (who called them "Schadographs"), Imogen Cunningham and Pablo Picasso.
Variations of the technique have also been used for scientific purposes, in shadowgraph studies of flow in transparent media and in high-speed Schlieren photography, and in the medical X-ray.
The term photogram comes from the combining form photo- (φωτο-) of Ancient Greek φῶς (phôs, "light"), and the Ancient Greek suffix -gram, from γράμμα (grámma, "written character, letter, that which is drawn"), from γράφειν (gráphein, "to scratch, to scrape, to graze").
History
Prehistory
The phenomenon of the shadow has long aroused human curiosity and inspired artistic representation, as recorded by Pliny the Elder, and various forms of shadow play since the 1st millennium BCE. The photogram, in essence, is a means by which the fall of light and shade on a surface may be automatically captured and preserved. To do so required a substance that would react to light. From the 17th century, photochemical reactions were progressively observed or discovered in salts of silver, iron, uranium and chromium. In 1725, Johann Heinrich Schulze was the first to demonstrate a temporary photographic effect in silver salts, confirmed by Carl Wilhelm Scheele in 1777, who found that violet light caused the greatest reaction in silver chloride. Humphry Davy and Thomas Wedgwood reported that they had produced temporary images by placing stencils and light sources on photo-sensitized materials, but had no means of fixing (making permanent) the images.
Nineteenth century
The first photographic negatives made were photograms (though the first permanent photograph was made with a camera by Nicéphore Niépce). William Henry Fox Talbot called these photogenic drawings, which he made by placing leaves or pieces of lace onto sensitized paper and then leaving them outdoors on a sunny day to expose. This produced a dark background with a white silhouette of the placed object.
In 1843, Anna Atkins produced a book titled British Algae: Cyanotype Impressions in installments; the first to be illustrated with photographs. The images were all photograms of botanical specimens, mostly seaweeds, which she made using Sir John Herschel's cyanotype process, which yields blue images.
Modernism
Photograms, and the artists who worked within the medium, participated in and contributed to several modern art movements, such as Dada and Constructivism, and, in architecture, to the formalist dissections of the Bauhaus.
The relative ease of access (no camera is needed, nor, depending on the medium, a darkroom) and the hands-on, almost incidental nature of creating photograms enabled experiments in abstraction by Christian Schad as early as 1918, Man Ray in 1921, and Moholy-Nagy in 1922, through dematerialisation and distortion, the merging and interpenetration of forms, and the flattening of perspective.
Christian Schad's 'schadographs'
In 1918, Christian Schad's experiments with the photogram were inspired by Dada, creating photograms from random arrangements of discarded objects he had collected such as torn tickets, receipts and rags. Some argue that he was the first to make this an art form, preceding Man Ray and László Moholy-Nagy by at least a year or two, and one was published in March 1920 in the magazine Dadaphone by Tristan Tzara, who dubbed them 'Schadographs'.
Man Ray's 'rayographs'
Photograms were used in the 20th century by a number of photographers, particularly Man Ray, whose "rayographs" were so named by Dada leader Tzara. Ray described his (re-)discovery of the process in his 1963 autobiography.
In his photograms, Man Ray made combinations of objects—a comb, a spiral of cut paper, an architect's French curve—some recognisable, others transformed, typifying Dada's rejection of 'style', emphasising chance and abstraction. He published a selection of these rayographs as Champs délicieux in December 1922, with an introduction by Tzara. His 1923 film Le Retour à la Raison ('Return to Reason') adapts rayograph technique to moving images.
Other 20th century artists
In the 1930s, artists including Theodore Roszak, and Piet Zwart also made photograms. Luigi Veronesi combined the photographic image with oil on canvas in large-scale colour images by preparing a light-sensitive canvas on which he placed objects in the dark for exposure and then fixing. The shapes became the matrix for an abstract painting to which he applied colour and added drawn geometric lines to enhance the dynamics, exhibiting them at the Galerie L'Equipe in Paris in 1938–1939. Bronislaw Schlabs, Julien Coulommier, Andrzej Pawlowski and Beksinki were photogram artists in the 1940s and 1950s; Heinz Hajek-Halke and Kurt Wendlandt with their light graphics in the 1960s; Lina Kolarova, Rene Mächler, Dennis Oppenheim, and Andreas Mulas in the 1970s; and Tomy Ceballos, Kare Magnole, Andreas Müller-Pohle, and Floris M. Neusüss in the 1980s.
Contemporary
Established contemporary artists who are widely known for using photograms are Adam Fuss, Susan Derges, Christian Marclay, and Karen Amy Finkel Fishof, who has digitized and minted her photograms as NFTs. Younger artists worldwide continue to value the materiality of the technique in the digital age. Mauritian artist Audrey Albert uses cameraless techniques to connect material culture to contemporary identities of Chagos Islanders.
Procedure
The customary approach to making a photogram is to use a darkroom and enlarger and to proceed as one would in making a conventional print, but instead of using a negative, to arrange objects on top of a piece of photographic paper for exposure under the enlarger lamp which can be controlled with the timer switch and aperture controls. That will give a result similar to the image at left; since the enlarger emits light through a lens aperture, the shadows of even tall objects like the beaker standing upright on the paper will stay sharp; the more so at smaller apertures.
The print is then processed, washed, and dried.
At this stage the image will look similar to a negative, in which shadows are white. A contact-print onto a fresh sheet of photographic paper will reverse the tones if a more naturalistic result is desired, which may be facilitated by making the initial print on film.
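The tonal logic of exposure and contact printing can be sketched in a few lines of Python. The toy model below is an illustration only: the array size and opacity values are arbitrary, and the linear response it assumes is a simplification of the nonlinear characteristic curve of a real emulsion:

```python
import numpy as np

light = np.ones((200, 200))      # uniform illumination from the enlarger
light[60:140, 60:140] *= 0.0     # a fully opaque object blocks the light
light[90:110, :] *= 0.5          # a semi-transparent strip dims it

# Exposed paper darkens: 1.0 renders as white paper, 0.0 as black.
photogram = 1.0 - light          # opaque object -> white, strip -> grey
contact_print = 1.0 - photogram  # re-printing reverses the tones
```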
However, there are other arrangements for making photograms, and devising them is part of the creative process. Alice Lex-Nerlinger used the conventional darkroom approach in making photograms as a variation on her airbrushed stencil paintings, since lighting penetrating the translucent paper from which she cut her pictures would print a variegated texture she could not otherwise obtain.
Another component of this medium is the light source, or sources, used. A broad source of light will cast nuances of shadow: umbra, penumbra and antumbra, as shown in the accompanying diagram.
Photograms may be made outdoors providing the photographic emulsion is sufficiently slow to permit it. Direct sunlight is a point-source of light (like that of an enlarger), while cloudy conditions give soft-edged shadows around three-dimensional objects placed on the photosensitive surface. The cyanotype process ('blueprints') such as that used by Anna Atkins (see above), is slow and insensitive enough that fixing an impression on paper, fabric, timber or other supports can be done in subdued light indoors. Exposure outdoors may take many minutes depending on conditions, and its progress may be gauged by inspection as the coating darkens. 'Printing-out paper' or other daylight-printing material such as gum bichromate may also enable outdoor exposure. Christian Schad simply placed tram tickets and other ephemera under glass on printing-out paper on his window-sill for exposure.
Conventional monochrome or colour, or direct-positive photographic material may be exposed in the dark using a flash unit, as does Adam Fuss for his photograms that capture the movement of a crawling baby, or an eel in shallow water. Susan Derges captures water currents in the same way, while Harry Nankin has immersed large sheets of monochrome photographic paper at the edge of the sea and mounted a flash on a specially-constructed oversize tripod above it to capture the action of waves and seaweeds washing over the paper surface. In 1986, Floris Neusüss began his Nachtbilder ('nocturnal pictures'), exposed by lightning.
Other variations include using the light of a television screen or computer display, pressing the photosensitive paper to the surface. Multiple light sources or exposing with multiple flashes of light, or moving the light source during exposure, projecting shadows from a low-angle light, and using successive exposures while moving, removing or adding shadows, will produce multiple shadows of varying quality.
List of notable photographers using photograms
Markus Amm
Anna Atkins
Walead Beshty
Christopher Bucklow
Keith Carter
Kate Cordsen
Olive Cotton
Susan Derges
Michael Flomen
Adam Fuss
Heinz Hajek-Halke
Raoul Hausmann
John Herschel
Edmund Kesting
Len Lye
László Moholy-Nagy
Alice Lex-Nerlinger
Floris Michael Neusüss
Anne Noble
Andrzej Pawlowski
Pablo Picasso
Man Ray
Alexander Rodchenko
Theodore Roszak
Christian Schad
Greg Stimac
August Strindberg
Jean-Pierre Sudre
Kunié Sugiura
Henry Fox Talbot
Mikhail Tarkhanov
Elsa Thiemann
Luigi Veronesi
Kurt Wendlandt
Nancy Wilson-Pajic
See also
Luminogram – photogram using light only with no objects
Schlieren photography – light is focused with a lens or mirror and a knife edge is placed at the focal point to create graduated shadows of flow and waves in otherwise transparent media like air, water, or glass
Shadowgraph – like Schlieren photography, but without the knife-edge, reveals non-uniformities in transparent media
Chemigram – camera-less technique using photographic (and other) chemistry with light
Neues Sehen – László Moholy-Nagy's 'New Vision' photography movement
Cliché verre – semiphotographic printmaking technique using a negative created by drawing
Drawn-on-film animation – cliche-verre technique in which movie film emulsion is scratched and drawn frame-by-frame
Cyanotype – photographic printing process that produces a cyan-blue print
Kirlian photography – photographic techniques used to capture the phenomenon of electrical coronal discharges
References
Photographic techniques
History of photography
Light
Shadows | Photogram | [
"Physics"
] | 2,406 | [
"Physical phenomena",
"Shadows",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Optical phenomena",
"Waves",
"Light"
] |
1,578,822 | https://en.wikipedia.org/wiki/Hill%27s%20muscle%20model | In biomechanics, Hill's muscle model refers to the 3-element model consisting of a contractile element (CE) in series with a lightly-damped elastic spring element (SE) and in parallel with a lightly-damped elastic parallel element (PE). Within this model, the estimated force-velocity relation for the CE element is usually modeled by what is commonly called Hill's equation, which was based on careful experiments on tetanized muscle contraction in which various muscle loads and the associated velocities were measured. The model and equation were derived by the physiologist Archibald Vivian Hill, who had already won the Nobel Prize in Physiology or Medicine by 1938, when he introduced them; he continued to publish in this area through 1970. There are many forms of the basic "Hill-based" or "Hill-type" model, and hundreds of publications have used this model structure for experimental and simulation studies. Most major musculoskeletal simulation packages make use of this model.
AV Hill's force-velocity equation for tetanized muscle
This is a popular state equation applicable to skeletal muscle that has been stimulated to show tetanic contraction. It relates tension to velocity with regard to the internal thermodynamics. The equation is

$(F + a)(v + b) = b(F_0 + a)$

where
$F$ is the tension (or load) in the muscle
$v$ is the velocity of contraction
$F_0$ is the maximum isometric tension (or load) generated in the muscle
$a$ is the coefficient of shortening heat
$b = a \cdot v_0 / F_0$
$v_0$ is the maximum velocity, when $F = 0$
Although Hill's equation looks very much like the van der Waals equation, the former has units of energy dissipation, while the latter has units of energy. Hill's equation demonstrates that the relationship between F and v is hyperbolic: the higher the load applied to the muscle, the lower the contraction velocity, and the higher the contraction velocity, the lower the tension in the muscle. This hyperbolic form has been found to fit empirical measurements only during isotonic contractions near resting length.
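Rearranged for force, the relation gives $F = b(F_0 + a)/(v + b) - a$. The following minimal Python sketch (normalized units; the parameter values are illustrative assumptions, not measured constants) evaluates this curve and checks its two endpoint properties:

```python
# Hill's force-velocity relation, (F + a)(v + b) = b(F0 + a),
# solved for force: F(v) = b*(F0 + a)/(v + b) - a.

def hill_force(v, F0=1.0, a=0.25, b=0.25):
    """Tension at shortening velocity v (normalized, illustrative units).

    F0 -- maximum isometric tension
    a  -- coefficient of shortening heat
    b  -- equals a * v0 / F0, with v0 the maximum shortening velocity
    """
    return b * (F0 + a) / (v + b) - a

# Endpoint checks: F(0) = F0 (isometric), and F(v0) = 0 at v0 = b*F0/a.
v0 = 0.25 * 1.0 / 0.25
print(hill_force(0.0))  # -> 1.0
print(hill_force(v0))   # -> 0.0
```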
The muscle tension decreases as the shortening velocity increases. This feature has been attributed to two main causes. The major one appears to be the loss in tension as the cross bridges in the contractile element break and then reform in a shortened condition. The second cause appears to be the fluid viscosity in both the contractile element and the connective tissue. Whatever the cause of the loss of tension, it is a viscous friction and can therefore be modeled as a fluid damper.
Three-element model
The three-element Hill muscle model is a representation of the muscle's mechanical response. The model consists of a contractile element (CE) and two non-linear spring elements, one in series (SE) and another in parallel (PE). The active force of the contractile element comes from the force generated by the actin and myosin cross-bridges at the sarcomere level. It is fully extensible when inactive but capable of shortening when activated. The connective tissues (fascia, epimysium, perimysium and endomysium) that surround the contractile element influence the muscle's force-length curve. The parallel element represents the passive force of these connective tissues and has a soft-tissue mechanical behavior. It is responsible for the muscle's passive behavior when it is stretched, even when the contractile element is not activated. The series element represents the tendon and the intrinsic elasticity of the myofilaments; it also has a soft-tissue response and provides an energy-storing mechanism.
The net force-length characteristic of a muscle is a combination of the force-length characteristics of both active and passive elements. The forces in the contractile element, in the series element and in the parallel element, $F^{CE}$, $F^{SE}$ and $F^{PE}$, respectively, satisfy

$F = F^{PE} + F^{SE} \quad (1)$
$F^{SE} = F^{CE} \quad (2)$

On the other hand, the muscle length $L^{M}$ and the lengths $L^{CE}$, $L^{SE}$ and $L^{PE}$ of those elements satisfy

$L^{M} = L^{PE} = L^{CE} + L^{SE} \quad (3)$
During isometric contractions the series elastic component is under tension and therefore is stretched a finite amount. Because the overall length of the muscle is kept constant, the stretching of the series element can only occur if there is an equal shortening of the contractile element itself.
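To make this coupling concrete, the sketch below solves the isometric balance $F^{CE}(L^{CE}) = F^{SE}(L^{M} - L^{CE})$ for a fixed total length by bisection. The element curves are deliberately simple stand-ins chosen for illustration; they are not the model's empirical functions:

```python
# Isometric equilibrium of the three-element model: with total length L
# fixed, the contractile element shortens until F_CE(L_CE) = F_SE(L - L_CE).
# Both element curves below are illustrative toys, not empirical fits.

def f_se(stretch):
    """Toy series (tendon-like) force: stiffens once stretched past slack."""
    return 20.0 * (stretch - 1.0) ** 2 if stretch > 1.0 else 0.0

def f_ce(l_ce, activation=1.0):
    """Toy active force-length curve, peaking at optimal length 1.0."""
    return activation * max(0.0, 1.0 - 4.0 * (l_ce - 1.0) ** 2)

def solve_isometric(L=2.05, lo=0.8, hi=1.0, iters=60):
    """Bisect on L_CE over a bracket where the force residual changes sign."""
    def residual(l_ce):
        return f_ce(l_ce) - f_se(L - l_ce)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

l_ce = solve_isometric()
print(l_ce, f_ce(l_ce), f_se(2.05 - l_ce))  # equal forces at equilibrium
```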
The forces in the parallel, series and contractile elements are defined by

$F^{PE} = F_0^M\, f^{PE}(\varepsilon^{PE}), \qquad F^{SE} = F_0^M\, f^{SE}(\varepsilon^{SE}), \qquad F^{CE} = F_0^M\, a(t)\, f^{L}(\varepsilon^{CE})\, f^{V}(\dot{\varepsilon}^{CE}) \quad (4)$

where $\varepsilon^{PE}$, $\varepsilon^{SE}$ and $\varepsilon^{CE}$ are strain measures for the different elements, defined in terms of the deformed muscle length $L^{M}$ and the deformed length $L^{M}_{CE}$ due to motion of the contractile element, both from equation (3); $L_0^M$ is the rest length of the muscle, and the overall stretch can be split into contractile and series contributions. The force term $F_0^M$ is the peak isometric muscle force, and $f^{PE}$, $f^{SE}$, $f^{L}$ and $f^{V}$ are empirical functions whose parameters are experimentally determined constants. The function $a(t)$ in equation (4) represents the muscle activation. It is defined by the ordinary differential equation

$\frac{da(t)}{dt} = \frac{1}{\tau_{rise}}\bigl(1 - a(t)\bigr)\,u(t) + \frac{1}{\tau_{fall}}\bigl(a_{min} - a(t)\bigr)\bigl(1 - u(t)\bigr)$

where $\tau_{rise}$ and $\tau_{fall}$ are time constants related to rise and decay for muscle activation and $a_{min}$ is a minimum bound, all determined from experiments; $u(t)$ is the neural excitation that leads to muscle contraction.
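A minimal simulation of these activation dynamics, assuming the first-order form above with illustrative time constants and a simple forward-Euler integrator:

```python
import numpy as np

# Forward-Euler integration of the activation ODE:
#   da/dt = (1 - a) u / tau_rise + (a_min - a) (1 - u) / tau_fall
# The time constants and a_min are illustrative, not experimentally fitted.

def simulate_activation(u, dt=0.001, tau_rise=0.01, tau_fall=0.04, a_min=0.01):
    a = a_min
    history = []
    for u_t in u:
        da = (1.0 - a) * u_t / tau_rise + (a_min - a) * (1.0 - u_t) / tau_fall
        a += dt * da
        history.append(a)
    return np.array(history)

# A 0.2 s excitation burst followed by 0.4 s of rest: activation rises on
# the fast tau_rise time scale and decays on the slower tau_fall scale.
u = np.concatenate([np.ones(200), np.zeros(400)])
a = simulate_activation(u)
print(a.max(), a[-1])
```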
Viscoelasticity
Muscles exhibit viscoelasticity, so a viscous damper may be included in the model when the dynamics of the second-order critically damped twitch are considered. One common model for muscular viscosity is an exponential-form damper, whose force is added to the model's global equation and which is characterized by two experimentally determined constants.
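Since the exact damper expression is not recoverable from the text above, the following sketch assumes a plausible exponential dependence on contractile-element velocity; both the functional form and the constants k and alpha are hypothetical illustrations only:

```python
import math

# Hypothetical exponential-form damper: force grows exponentially with
# speed and always opposes the contractile-element velocity v_ce.
# Both constants (k, alpha) are assumed, not taken from the source.

def damper_force(v_ce, k=0.05, alpha=2.0):
    magnitude = k * (math.exp(alpha * abs(v_ce)) - 1.0)
    return -math.copysign(magnitude, v_ce) if v_ce != 0.0 else 0.0

print(damper_force(0.0))   # 0.0: no viscous force at rest
print(damper_force(0.5))   # negative: opposes shortening
print(damper_force(-0.5))  # positive: opposes lengthening
```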
See also
Muscle contraction
References
Biomechanics
Equations
Exercise physiology | Hill's muscle model | [
"Physics",
"Mathematics"
] | 1,073 | [
"Biomechanics",
"Mathematical objects",
"Mechanics",
"Equations"
] |