Dataset columns: id (int64, 580 to 79M); url (string, lengths 31 to 175); text (string, lengths 9 to 245k); source (string, lengths 1 to 109); categories (string, 160 classes); token_count (int64, 3 to 51.8k).
21,669,817
https://en.wikipedia.org/wiki/Solvency%20ratio
A solvency ratio measures the extent to which assets cover commitments for future payments, the liabilities. The solvency ratio of an insurance company is the size of its capital relative to all risks it has taken. It is most often defined as the ratio of the insurer's net assets to its net premium written. The solvency ratio is a measure of the risk an insurer faces of claims that it cannot absorb. The amount of premium written is a better measure than the total amount insured because the level of premiums is linked to the likelihood of claims. Different countries use different methodologies to calculate the solvency ratio and have different requirements. For example, in India insurers are required to maintain a minimum ratio of 1.5. For pension plans, the solvency ratio is the ratio of pension plan assets to liabilities (the pensions to be paid). Another measure of the pension plan's ability to pay all pensions in perpetuity is the going concern ratio, which measures the cost of pensions if the pension plan continues to operate. For the solvency ratio, the pension liabilities are measured using stringent rules, including the assumption that the plan will be closed immediately and so must purchase annuities to transfer responsibility for the pensions to another party. This is more expensive, so the solvency ratio is usually lower than the going concern ratio. In finance, the solvency ratio measures a company's cash flow compared to its liabilities: Solvency ratio = (net income + depreciation) / liabilities See also Current ratio Solvency Solvency II Directive 2009 (the EU requirement on the ratio) References Actuarial science
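A minimal Python sketch of the finance-sense formula quoted above; the function name and figures are illustrative, not from the article:

    def solvency_ratio(net_income: float, depreciation: float, liabilities: float) -> float:
        """Finance-sense solvency ratio: (net income + depreciation) / liabilities."""
        return (net_income + depreciation) / liabilities

    # Hypothetical company figures, all in the same currency units:
    ratio = solvency_ratio(net_income=1_200_000, depreciation=300_000, liabilities=6_000_000)
    print(f"Solvency ratio: {ratio:.2f}")  # 0.25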
Solvency ratio
Mathematics
340
63,541
https://en.wikipedia.org/wiki/Glutamic%20acid
Glutamic acid (symbol Glu or E; the anionic form is known as glutamate) is an α-amino acid that is used by almost all living beings in the biosynthesis of proteins. It is a non-essential nutrient for humans, meaning that the human body can synthesize enough for its use. It is also the most abundant excitatory neurotransmitter in the vertebrate nervous system. It serves as the precursor for the synthesis of the inhibitory gamma-aminobutyric acid (GABA) in GABAergic neurons. Its molecular formula is C5H9NO4. Glutamic acid exists in two optically isomeric forms; the dextrorotatory L-form is usually obtained by hydrolysis of gluten or from the waste waters of beet-sugar manufacture or by fermentation. Its molecular structure could be idealized as HOOC−CH(NH2)−(CH2)2−COOH, with two carboxyl groups −COOH and one amino group −NH2. However, in the solid state and mildly acidic water solutions, the molecule assumes an electrically neutral zwitterion structure −OOC−CH(NH3+)−(CH2)2−COOH. It is encoded by the codons GAA or GAG. The acid can lose one proton from its second carboxyl group to form the conjugate base, the singly-negative anion glutamate −OOC−CH(NH3+)−(CH2)2−COO−. This form of the compound is prevalent in neutral solutions. The glutamate neurotransmitter plays the principal role in neural activation. This anion creates the savory umami flavor of foods and is found in glutamate flavorings such as MSG. In Europe, it is classified as food additive E620. In highly alkaline solutions the doubly negative anion −OOC−CH(NH2)−(CH2)2−COO− prevails. The radical corresponding to glutamate is called glutamyl. The one-letter symbol E for glutamate was assigned as the letter following D for aspartate, as glutamate is larger by one methylene –CH2– group. Chemistry Ionization When glutamic acid is dissolved in water, the amino group (−NH2) may gain a proton (H+), and/or the carboxyl groups may lose protons, depending on the acidity of the medium. In sufficiently acidic environments, both carboxyl groups are protonated and the molecule becomes a cation with a single positive charge, HOOC−CH(NH3+)−(CH2)2−COOH. At pH values between about 2.5 and 4.1, the carboxylic acid closer to the amine generally loses a proton, and the acid becomes the neutral zwitterion −OOC−CH(NH3+)−(CH2)2−COOH. This is also the form of the compound in the crystalline solid state. The change in protonation state is gradual; the two forms are in equal concentrations at pH 2.10. At even higher pH, the other carboxylic acid group loses its proton and the acid exists almost entirely as the glutamate anion −OOC−CH(NH3+)−(CH2)2−COO−, with a single negative charge overall. The change in protonation state occurs at pH 4.07. This form with both carboxylates lacking protons is dominant in the physiological pH range (7.35–7.45). At even higher pH, the amino group loses the extra proton, and the prevalent species is the doubly-negative anion −OOC−CH(NH2)−(CH2)2−COO−. The change in protonation state occurs at pH 9.47. Optical isomerism Glutamic acid is chiral; two mirror-image enantiomers exist: D(−) and L(+). The L form is more widely occurring in nature, but the D form occurs in some special contexts, such as the bacterial capsule and cell walls of certain bacteria (which produce it from the L form with the enzyme glutamate racemase) and the liver of mammals. History Although they occur naturally in many foods, the flavor contributions made by glutamic acid and other amino acids were only scientifically identified early in the 20th century. 
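The equal-concentration points quoted above (pH 2.10, 4.07, and 9.47) act as stepwise pKa values, so the distribution of the four protonation states at any pH can be sketched with the Henderson-Hasselbalch relation. A minimal Python example, treating the three transitions as independent dissociations (an idealization):

    # Fractions of the four protonation states of glutamic acid at a given pH,
    # computed from the three pKa values quoted in the text (2.10, 4.07, 9.47).
    PKAS = (2.10, 4.07, 9.47)

    def species_fractions(ph: float, pkas=PKAS):
        # Relative abundance of each successively deprotonated species:
        # cation (+1), zwitterion (0), glutamate (-1), dianion (-2).
        rel, total = [1.0], 1.0
        for pka in pkas:
            rel.append(rel[-1] * 10 ** (ph - pka))  # Henderson-Hasselbalch step
            total += rel[-1]
        return [r / total for r in rel]

    for label, frac in zip(("cation", "zwitterion", "glutamate(-1)", "dianion"),
                           species_fractions(7.4)):
        print(f"{label:>14}: {frac:.4f}")  # at physiological pH the -1 anion dominates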
The substance was discovered and identified in the year 1866 by the German chemist Karl Heinrich Ritthausen, who treated wheat gluten (for which it was named) with sulfuric acid. In 1908, Japanese researcher Kikunae Ikeda of the Tokyo Imperial University identified brown crystals left behind after the evaporation of a large amount of kombu broth as glutamic acid. These crystals, when tasted, reproduced the novel flavor he detected in many foods, most especially in seaweed. Professor Ikeda termed this flavor umami. He then patented a method of mass-producing a crystalline salt of glutamic acid, monosodium glutamate. Synthesis Biosynthesis Industrial synthesis Glutamic acid is produced on the largest scale of any amino acid, with an estimated annual production of about 1.5 million tons in 2006. Chemical synthesis was supplanted by the aerobic fermentation of sugars and ammonia in the 1950s, with the organism Corynebacterium glutamicum (also known as Brevibacterium flavum) being the most widely used for production. Isolation and purification can be achieved by concentration and crystallization; it is also widely available as its hydrochloride salt. Function and uses Metabolism Glutamate is a key compound in cellular metabolism. In humans, dietary proteins are broken down by digestion into amino acids, which serve as metabolic fuel or for other functional roles in the body. A key process in amino acid degradation is transamination, in which the amino group of an amino acid is transferred to an α-ketoacid, typically catalysed by a transaminase. The reaction can be generalised as follows: R1-amino acid + R2-α-ketoacid ⇌ R1-α-ketoacid + R2-amino acid A very common α-keto acid is α-ketoglutarate, an intermediate in the citric acid cycle. Transamination of α-ketoglutarate gives glutamate. The resulting α-ketoacid product is often a useful one as well, which can contribute as fuel or as a substrate for further metabolic processes. Examples are as follows: Alanine + α-ketoglutarate ⇌ pyruvate + glutamate Aspartate + α-ketoglutarate ⇌ oxaloacetate + glutamate Both pyruvate and oxaloacetate are key components of cellular metabolism, contributing as substrates or intermediates in fundamental processes such as glycolysis, gluconeogenesis, and the citric acid cycle. Glutamate also plays an important role in the body's disposal of excess or waste nitrogen. Glutamate undergoes deamination, an oxidative reaction catalysed by glutamate dehydrogenase, as follows: glutamate + H2O + NADP+ → α-ketoglutarate + NADPH + NH3 + H+ Ammonia (as ammonium) is then excreted predominantly as urea, synthesised in the liver. Transamination can thus be linked to deamination, effectively allowing nitrogen from the amine groups of amino acids to be removed, via glutamate as an intermediate, and finally excreted from the body in the form of urea. Glutamate is also a neurotransmitter (see below), which makes it one of the most abundant molecules in the brain. Malignant brain tumors known as glioma or glioblastoma exploit this phenomenon by using glutamate as an energy source, especially when these tumors become more dependent on glutamate due to mutations in the gene IDH1. Neurotransmitter Glutamate is the most abundant excitatory neurotransmitter in the vertebrate nervous system. At chemical synapses, glutamate is stored in vesicles. Nerve impulses trigger the release of glutamate from the presynaptic cell. Glutamate acts on ionotropic and metabotropic (G-protein coupled) receptors. 
In the opposing postsynaptic cell, glutamate receptors, such as the NMDA receptor or the AMPA receptor, bind glutamate and are activated. Because of its role in synaptic plasticity, glutamate is involved in cognitive functions such as learning and memory in the brain. The form of plasticity known as long-term potentiation takes place at glutamatergic synapses in the hippocampus, neocortex, and other parts of the brain. Glutamate works not only as a point-to-point transmitter, but also through spill-over synaptic crosstalk between synapses in which summation of glutamate released from a neighboring synapse creates extrasynaptic signaling/volume transmission. In addition, glutamate plays important roles in the regulation of growth cones and synaptogenesis during brain development as originally described by Mark Mattson. Brain nonsynaptic glutamatergic signaling circuits Extracellular glutamate in Drosophila brains has been found to regulate postsynaptic glutamate receptor clustering, via a process involving receptor desensitization. A gene expressed in glial cells actively transports glutamate into the extracellular space, while in the nucleus accumbens, stimulating group II metabotropic glutamate receptors was found to reduce extracellular glutamate levels. This raises the possibility that this extracellular glutamate plays an "endocrine-like" role as part of a larger homeostatic system. GABA precursor Glutamate also serves as the precursor for the synthesis of the inhibitory gamma-aminobutyric acid (GABA) in GABAergic neurons. This reaction is catalyzed by glutamate decarboxylase (GAD). GABAergic neurons are identified (for research purposes) by revealing GAD activity (with autoradiography and immunohistochemistry methods); GAD is most abundant in the cerebellum and pancreas. Stiff person syndrome is a neurologic disorder caused by anti-GAD antibodies, leading to a decrease in GABA synthesis and, therefore, impaired motor function such as muscle stiffness and spasm. Since the pancreas has abundant GAD, a direct immunological destruction occurs in the pancreas, and the patients will develop diabetes mellitus. Flavor enhancer Glutamic acid, being a constituent of protein, is present in foods that contain protein, but it can only be tasted when it is present in an unbound form. Significant amounts of free glutamic acid are present in a wide variety of foods, including cheeses and soy sauce, and glutamic acid is responsible for umami, one of the five basic tastes of the human sense of taste. Glutamic acid often is used as a food additive and flavor enhancer in the form of its sodium salt, known as monosodium glutamate (MSG). Nutrient All meats, poultry, fish, eggs, dairy products, and kombu are excellent sources of glutamic acid. Some protein-rich plant foods also serve as sources. 30% to 35% of gluten (much of the protein in wheat) is glutamic acid. Ninety-five percent of the dietary glutamate is metabolized by intestinal cells in a first pass. Plant growth Auxigro is a plant growth preparation that contains 30% glutamic acid. NMR spectroscopy In recent years, there has been much research into the use of residual dipolar coupling (RDC) in nuclear magnetic resonance spectroscopy (NMR). A glutamic acid derivative, poly-γ-benzyl-L-glutamate (PBLG), is often used as an alignment medium to control the scale of the dipolar interactions observed. 
Role of glutamate in aging Pharmacology The drug phencyclidine (more commonly known as PCP or 'Angel Dust') antagonizes glutamic acid non-competitively at the NMDA receptor. For the same reasons, dextromethorphan and ketamine also have strong dissociative and hallucinogenic effects. Acute infusion of the drug eglumetad (also known as eglumegad or LY354740), an agonist of the metabotropic glutamate receptors 2 and 3, resulted in a marked diminution of yohimbine-induced stress response in bonnet macaques (Macaca radiata); chronic oral administration of eglumetad in those animals led to markedly reduced baseline cortisol levels (approximately 50 percent) in comparison to untreated control subjects. Eglumetad has also been demonstrated to act on the metabotropic glutamate receptor 3 (GRM3) of human adrenocortical cells, downregulating aldosterone synthase, CYP11B1, and the production of adrenal steroids (i.e. aldosterone and cortisol). Glutamate does not easily pass the blood-brain barrier, but is instead transported by a high-affinity transport system. It can also be converted into glutamine. Glutamate toxicity can be reduced by antioxidants. The psychoactive principle of cannabis, tetrahydrocannabinol (THC), the non-psychoactive principle cannabidiol (CBD), and other cannabinoids have been found to block glutamate neurotoxicity with similar potency, acting as potent antioxidants. See also References Further reading External links Glutamic acid MS Spectrum Amino acids Proteinogenic amino acids Glucogenic amino acids Excitatory amino acids Flavor enhancers Umami enhancers Glutamates Glutamic acids Excitatory amino acid receptor agonists Glycine receptor agonists Peripherally selective drugs Chelating agents Glutamate (neurotransmitter) E-number additives
Glutamic acid
Chemistry
3,087
68,032,808
https://en.wikipedia.org/wiki/Balneolaceae
Balneolaceae is a family of bacteria. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). See also List of bacterial orders List of bacteria genera References Bacteria families
Balneolaceae
Biology
64
10,372,077
https://en.wikipedia.org/wiki/Grammatical%20Framework%20%28programming%20language%29
Grammatical Framework (GF) is a programming language for writing grammars of natural languages. GF is capable of parsing and generating texts in several languages simultaneously while working from a language-independent representation of meaning. Grammars written in GF can be compiled into a platform-independent format and then used from different programming languages including C, Java, C#, Python and Haskell. A companion to GF is the GF Resource Grammar Library, a reusable library for dealing with the morphology and syntax of a growing number of natural languages. Both GF itself and the GF Resource Grammar Library are open-source. Typologically, GF is a functional programming language. Mathematically, it is a type-theoretic formal system (a logical framework, to be precise) based on Martin-Löf's intuitionistic type theory, with additional judgments tailored specifically to the domain of linguistics. Language features GF offers: a static type system, to detect potential programming errors; functional programming, for powerful abstractions; support for writing libraries, to be used in other grammars; and tools for information extraction, to convert linguistic resources into GF. Tutorial Goal: write a multilingual grammar for expressing statements about John and Mary loving each other. Abstract and concrete modules In GF, grammars are divided into two module types: an abstract module, containing the judgement forms cat and fun. cat, or category declarations, lists the categories, i.e. all the possible types of trees there can be. fun, or function declarations, states functions and their types; these must be implemented by concrete modules (see below). one or more concrete modules, containing the judgement forms lincat and lin. lincat, or linearization type definitions, says what type of objects linearization produces for each category listed in cat. lin, or linearization rules, implements the functions declared in fun. They say how trees are linearized. Consider the following: Abstract syntax
  abstract Zero = {
    cat S ; NP ; VP ; V2 ;
    fun Pred : NP -> VP -> S ;
      Compl : V2 -> NP -> VP ;
      John, Mary : NP ;
      Love : V2 ;
  }
Concrete syntax: English
  concrete ZeroEng of Zero = {
    lincat S, NP, VP, V2 = Str ;
    lin Pred np vp = np ++ vp ;
      Compl v2 np = v2 ++ np ;
      John = "John" ;
      Mary = "Mary" ;
      Love = "loves" ;
  }
Notice Str (a token list, or "string") as the only linearization type. Making a grammar multilingual A single abstract syntax may be applied to many concrete syntaxes, in our case one for each new natural language we wish to add. The same system of trees can be given: different words different word orders different linearization types Concrete syntax: French
  concrete ZeroFre of Zero = {
    lincat S, NP, VP, V2 = Str ;
    lin Pred np vp = np ++ vp ;
      Compl v2 np = v2 ++ np ;
      John = "Jean" ;
      Mary = "Marie" ;
      Love = "aime" ;
  }
Translation and multilingual generation We can now use our grammar to translate phrases between French and English. The following commands can be executed in the GF interactive shell. Import many grammars with the same abstract syntax:
  > import ZeroEng.gf ZeroFre.gf
  Languages: ZeroEng ZeroFre
Translation: pipe parsing to linearization:
  > parse -lang=Eng "John loves Mary" | linearize -lang=Fre
  Jean aime Marie
Multilingual generation: linearize into all languages:
  > generate_random | linearize -treebank
  Zero: Pred Mary (Compl Love Mary)
  ZeroEng: Mary loves Mary
  ZeroFre: Marie aime Marie
Parameters, tables Latin has cases: nominative for subject, accusative for object. 
Ioannes Mariam amat "John-Nom loves Mary-Acc" Maria Ioannem amat "Mary-Nom loves John-Acc" We use a parameter type for case (just 2 of Latin's 6 cases). The linearization type of NP is a table type: from Case to Str. The linearization of each NP is an inflection table. When using an NP, we select (!) the appropriate case from the table. Concrete syntax: Latin
  concrete ZeroLat of Zero = {
    lincat S, VP, V2 = Str ;
      NP = Case => Str ;
    lin Pred np vp = np ! Nom ++ vp ;
      Compl v2 np = np ! Acc ++ v2 ;
      John = table {Nom => "Ioannes" ; Acc => "Ioannem"} ;
      Mary = table {Nom => "Maria" ; Acc => "Mariam"} ;
      Love = "amat" ;
    param Case = Nom | Acc ;
  }
Discontinuous constituents, records In Dutch, the verb heeft lief is a discontinuous constituent. The linearization type of V2 is a record type with two fields. The linearization of Love is a record. The values of fields are picked by projection (.). Concrete syntax: Dutch
  concrete ZeroDut of Zero = {
    lincat S, NP, VP = Str ;
      V2 = {v : Str ; p : Str} ;
    lin Pred np vp = np ++ vp ;
      Compl v2 np = v2.v ++ np ++ v2.p ;
      John = "Jan" ;
      Mary = "Marie" ;
      Love = {v = "heeft" ; p = "lief"} ;
  }
Variable and inherent features, agreement, Unicode support For Hebrew, NP has gender as its inherent feature: a field in the record. VP has gender as its variable feature: an argument of a table. In predication, the VP receives the gender of the NP. Concrete syntax: Hebrew
  concrete ZeroHeb of Zero = {
    flags coding=utf8 ;
    lincat S = Str ;
      NP = {s : Str ; g : Gender} ;
      VP, V2 = Gender => Str ;
    lin Pred np vp = np.s ++ vp ! np.g ;
      Compl v2 np = table {g => v2 ! g ++ "את" ++ np.s} ;
      John = {s = "ג׳ון" ; g = Masc} ;
      Mary = {s = "מרי" ; g = Fem} ;
      Love = table {Masc => "אוהב" ; Fem => "אוהבת"} ;
    param Gender = Masc | Fem ;
  }
Visualizing parse trees GF has inbuilt functions which can be used for visualizing parse trees and word alignments. The following commands will generate parse trees for the given phrases and open the produced PNG image using the image viewer given by the -view flag (here eog):
  > parse -lang=Eng "John loves Mary" | visualize_parse -view="eog"
  > parse -lang=Dut "Jan heeft Marie lief" | visualize_parse -view="eog"
Generating word alignment In languages L1 and L2: link every word with its smallest spanning subtree. Delete the intervening tree, combining links directly from L1 to L2. In general, this gives phrase alignment. Links can be crossing, phrases can be discontinuous. The align_words command follows a similar syntax:
  > parse -lang=Fre "Marie aime Jean" | align_words -lang=Fre,Dut,Lat -view="eog"
Resource Grammar Library In natural language applications, libraries are a way to cope with thousands of details involved in syntax, lexicon, and inflection. The GF Resource Grammar Library is the standard library for Grammatical Framework. It covers the morphology and basic syntax for an increasing number of languages, currently including Afrikaans, Amharic (partial), Arabic (partial), Basque (partial), Bulgarian, Catalan, Chinese, Czech (partial), Danish, Dutch, English, Estonian, Finnish, French, German, Greek ancient (partial), Greek modern, Hebrew (fragments), Hindi, Hungarian (partial), Interlingua, Italian, Japanese, Korean (partial), Latin (partial), Latvian, Maltese, Mongolian, Nepali, Norwegian bokmål, Norwegian nynorsk, Persian, Polish, Punjabi, Romanian, Russian, Sindhi, Slovak (partial), Slovene (partial), Somali (partial), Spanish, Swahili (fragments), Swedish, Thai, Turkish (fragments), and Urdu. In addition, 14 languages have WordNet lexicons and large-scale parsing extensions. 
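As noted above, compiled grammars can be used from general-purpose languages. A minimal sketch using the Python bindings of the PGF runtime, assuming the tutorial grammar has been compiled to Zero.pgf (e.g. with gf -make ZeroEng.gf ZeroFre.gf); exact API details can vary between runtime versions:

    import pgf  # Python bindings for the GF PGF runtime

    gr = pgf.readPGF("Zero.pgf")   # load the compiled multilingual grammar
    eng = gr.languages["ZeroEng"]
    fre = gr.languages["ZeroFre"]

    # Parse English into a language-independent abstract syntax tree...
    prob, tree = next(eng.parse("John loves Mary"))
    print(tree)                    # Pred John (Compl Love Mary)

    # ...and linearize the same tree into French.
    print(fre.linearize(tree))     # Jean aime Marie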
A full API documentation of the library can be found at the RGL Synopsis page. The RGL status document lists the languages currently available in the GF Resource Grammar Library, including their maturity. Uses of GF GF was first created in 1998 at Xerox Research Centre Europe, Grenoble, in the project Multilingual Document Authoring. At Xerox, it was used for prototypes including a restaurant phrase book, a database query system, a formalization of alarm system instructions with translations to 5 languages, and an authoring system for medical drug descriptions. Later projects using GF and involving third parties include: REMU: Reliable Multilingual Digital Communication, a project funded by the Swedish Research Council from 2013 to 2017. MOLTO: multilingual online translation, an EU project that ran from 2010 to 2013. SALDO: Swedish morphological dictionary based on rules developed for GF and Functional Morphology WebAlt: multilingual generation of mathematical exercises (commercial project) TALK: multilingual and multimodal spoken dialogue systems Academically, GF has been used in many PhD theses and has resulted in numerous scientific publications (see the GF publication list for some of them). Commercially, GF has been used by a number of companies, in domains such as e-commerce, health care and translating formal specifications to natural language. Community Developer mailing list There is an active group for developers and users of GF alike, located at https://groups.google.com/group/gf-dev Summer schools 2020 – GF as a resource for Computational Law (Singapore) The seventh GF summer school, postponed due to COVID-19, is to be held in Singapore. Co-organised with the Singapore Management University's Centre for Computational Law, the summer school will have a special focus on computational law. 2018 – Sixth GF Summer School (Stellenbosch, South Africa) The sixth GF summer school was the first one held outside Europe. The major themes of the summer school were African language resources and the growing usage of GF in commercial applications. 2017 – GF in a Full Stack of Language Technology (Riga, Latvia) The fifth GF summer school was held in Riga, Latvia. This summer school had a number of participants from startups, presenting industrial use cases of GF. 2016 – Summer School in Rule-Based Machine Translation (Alicante, Spain) GF was one of the four platforms featured at the Summer School in Rule-Based Machine Translation, along with Apertium, Matxin and TectoMT. 2015 – Fourth GF Summer School (Gozo, Malta) The fourth GF summer school was held on Gozo island in Malta. Like the previous edition in 2013, this summer school featured collaborations with other resources, such as Apertium and FrameNet. 2013 – Scaling Up Grammatical Resources (Lake Chiemsee, Germany) The third GF summer school was held on Frauenchiemsee island in Bavaria, Germany, with the special theme "Scaling up Grammar Resources". This summer school focused on extending the existing resource grammars with the ultimate goal of dealing with any text in the supported languages. Lexicon extension is an obvious part of this work, but new grammatical constructions were also of interest. There was a special interest in porting resources from other open-source approaches, such as WordNets and Apertium, and reciprocally making GF resources easily reusable in other approaches. 
2011 – Frontiers of Multilingual Technologies (Barcelona, Spain) The second GF summer school, subtitled Frontiers of Multilingual Technologies, was held in 2011 in Barcelona, Spain. It was sponsored by CLT, the Centre for Language Technology of the University of Gothenburg, and by UPC, Universitat Politècnica de Catalunya. The school addressed new languages and also promoted ongoing work in those languages already under construction. Missing EU languages were especially encouraged. The school began with a 2-day GF tutorial, serving those interested in getting an introduction to GF or an overview of ongoing work. All results of the summer school are available as open-source software released under the LGPL license. 2009 – GF Summer School (Gothenburg, Sweden) The first GF summer school was held in 2009 in Gothenburg, Sweden. It was a collaborative effort to create grammars of new languages in Grammatical Framework, GF. These grammars were added to the Resource Grammar Library, which previously had 12 languages. Around 10 new languages were already under construction, and the school aimed to address 23 new languages. All results of the summer school were made available as open-source software released under the LGPL license. The summer school was organized by the Language Technology Group at the Department of Computer Science and Engineering. The group is a part of the Centre of Language Technology, a focus research area of the University of Gothenburg. The code created by the school participants is accessible in the GF darcs repository. References External links Grammatical Framework homepage Computational linguistics Grammar frameworks Functional languages Natural language processing software Translation software Machine translation software
Grammatical Framework (programming language)
Technology
2,868
22,449,676
https://en.wikipedia.org/wiki/Tulosesus%20bisporiger
Tulosesus bisporiger is a species of mushroom-producing fungus in the family Psathyrellaceae. Taxonomy It was first described by the English mycologist Peter Darbishire Orton in 1976 and placed in the genus Coprinus. In 2001, a phylogenetic study resulted in a major reorganization and reshuffling of that genus, and this species was transferred to Coprinellus. The species was known as Coprinellus bisporiger until 2020, when the German mycologists Dieter Wächter and Andreas Melzer reclassified many species in the family Psathyrellaceae based on phylogenetic analysis. References Fungi described in 1976 bisporiger Fungus species
Tulosesus bisporiger
Biology
134
78,435,872
https://en.wikipedia.org/wiki/Clemeprol
Clemeprol is a serotonin–norepinephrine reuptake inhibitor (SNRI) antidepressant and anticholinergic agent. It is an enantiomeric mixture of R and S isomers. Both isomers show similar pharmacological activity. Synthesis A synthetic pathway for clemeprol has been disclosed: The Corey-Chaykovsky epoxidation between 3-chlorobenzophenone [1016-78-0] (1) and dimethylsulfoxonium methylide (Corey's reagent) [5367-24-8] gives 2-(3-chlorophenyl)-2-phenyloxirane [71827-53-7] (2). Further reaction with boron trifluoride etherate [109-63-7] gives m-chlorophenyl-phenylacetaldehyde, PC12549135 (3). A second Corey-Chaykovsky epoxidation gives 2-[(3-chlorophenyl)-phenylmethyl]oxirane, PC12549073 (4). Opening of the epoxide with dimethylamine completes the synthesis of clemeprol (5). See also BRL15572 [734517-40-9] References Serotonin–norepinephrine reuptake inhibitors Amines Phenyl compounds Dimethylamino compounds 4-Chlorophenyl compounds
Clemeprol
Chemistry
326
1,601,611
https://en.wikipedia.org/wiki/Tempering%20%28metallurgy%29
Tempering is a process of heat treating, which is used to increase the toughness of iron-based alloys. Tempering is usually performed after hardening, to reduce some of the excess hardness, and is done by heating the metal to some temperature below the critical point for a certain period of time, then allowing it to cool in still air. The exact temperature determines the amount of hardness removed, and depends on both the specific composition of the alloy and on the desired properties in the finished product. For instance, very hard tools are often tempered at low temperatures, while springs are tempered at much higher temperatures. Introduction Tempering is a heat treatment technique applied to ferrous alloys, such as steel or cast iron, to achieve greater toughness by decreasing the hardness of the alloy. The reduction in hardness is usually accompanied by an increase in ductility, thereby decreasing the brittleness of the metal. Tempering is usually performed after quenching, which is rapid cooling of the metal to put it in its hardest state. Tempering is accomplished by controlled heating of the quenched workpiece to a temperature below its "lower critical temperature". This is also called the lower transformation temperature or lower arrest (A1) temperature: the temperature at which the crystalline phases of the alloy, called ferrite and cementite, begin combining to form a single-phase solid solution referred to as austenite. Heating above this temperature is avoided, so as not to destroy the very-hard, quenched microstructure, called martensite. Precise control of time and temperature during the tempering process is crucial to achieve the desired balance of physical properties. Low tempering temperatures may only relieve the internal stresses, decreasing brittleness while maintaining a majority of the hardness. Higher tempering temperatures tend to produce a greater reduction in the hardness, sacrificing some yield strength and tensile strength for an increase in elasticity and plasticity. However, in some low alloy steels, containing other elements like chromium and molybdenum, tempering at low temperatures may produce an increase in hardness, while at higher temperatures the hardness will decrease. Many steels with high concentrations of these alloying elements behave like precipitation hardening alloys, which produces the opposite effects under the conditions found in quenching and tempering, and are referred to as maraging steels. In carbon steels, tempering alters the size and distribution of carbides in the martensite, forming a microstructure called "tempered martensite". Tempering is also performed on normalized steels and cast irons, to increase ductility, machinability, and impact strength. Steel is usually tempered evenly, called "through tempering," producing a nearly uniform hardness, but it is sometimes heated unevenly, referred to as "differential tempering," producing a variation in hardness. History Tempering is an ancient heat-treating technique. The oldest known example of tempered martensite is a pick axe which was found in Galilee, dating from around 1200 to 1100 BC. The process was used throughout the ancient world, from Asia to Europe and Africa. Many different quenching methods and cooling baths were tried in ancient times, ranging from urine or blood to molten metals like mercury or lead, but the process of tempering has remained relatively unchanged over the ages. 
Tempering was often confused with quenching and, often, the term was used to describe both techniques. In 1889, Sir William Chandler Roberts-Austen wrote, "There is still so much confusion between the words "temper," "tempering," and "hardening," in the writings of even eminent authorities, that it is well to keep these old definitions carefully in mind. I shall employ the word tempering in the same sense as softening." Terminology In metallurgy, one may encounter many terms that have very specific meanings within the field, but may seem rather vague when viewed from the outside. Terms such as "hardness," "impact resistance," "toughness," and "strength" can carry many different connotations, making it sometimes difficult to discern the specific meaning. Some of the terms encountered, and their specific definitions are: Strength – Resistance to permanent deformation and tearing. Strength, in metallurgy, is still a rather vague term, so is usually divided into yield strength (strength beyond which deformation becomes permanent), tensile strength (the ultimate tearing strength), shear strength (resistance to transverse, or cutting forces), and compressive strength (resistance to elastic shortening under a load). Toughness – Resistance to fracture, as measured by the Charpy test. Toughness often increases as strength decreases, because a material that bends is less likely to break. Hardness – A surface's resistance to scratching, abrasion, or indentation. In conventional metal alloys, there is a linear relation between indentation hardness and tensile strength, which eases the measurement of the latter. Brittleness – Brittleness describes a material's tendency to break before bending or deforming either elastically or plastically. Brittleness increases with decreased toughness, but is greatly affected by internal stresses as well. Plasticity – The ability to mold, bend or deform in a manner that does not spontaneously return to its original shape. This is proportional to the ductility or malleability of the substance. Elasticity – Also called flexibility, this is the ability to deform, bend, compress, or stretch and return to the original shape once the external stress is removed. Elasticity is inversely related to the Young's modulus of the material. Impact resistance – Usually synonymous with high-strength toughness, it is the ability to resist shock-loading with minimal deformation. Wear resistance – Usually synonymous with hardness, this is resistance to erosion, ablation, spalling, or galling. Structural integrity – The ability to withstand a maximum-rated load while resisting fracture, resisting fatigue, and producing a minimal amount of flexing or deflection, to provide a maximum service life. Carbon steel Very few metals react to heat treatment in the same manner, or to the same extent, that carbon steel does, and carbon-steel heat-treating behavior can vary radically depending on alloying elements. Steel can be softened to a very malleable state through annealing, or it can be hardened to a state as hard and brittle as glass by quenching. However, in its hardened state, steel is usually far too brittle, lacking the fracture toughness to be useful for most applications. Tempering is a method used to decrease the hardness, thereby increasing the ductility of the quenched steel, to impart some springiness and malleability to the metal. This allows the metal to bend before breaking. 
Depending on how much temper is imparted to the steel, it may bend elastically (the steel returns to its original shape once the load is removed), or it may bend plastically (the steel does not return to its original shape, resulting in permanent deformation), before fracturing. Tempering is used to precisely balance the mechanical properties of the metal, such as shear strength, yield strength, hardness, ductility, and tensile strength, to achieve any number of a combination of properties, making the steel useful for a wide variety of applications. Tools such as hammers and wrenches require good resistance to abrasion, impact resistance, and resistance to deformation. Springs do not require as much wear resistance, but must deform elastically without breaking. Automotive parts tend to be a little less strong, but need to deform plastically before breaking. Except in rare cases where maximum hardness or wear resistance is needed, such as the untempered steel used for files, quenched steel is almost always tempered to some degree. However, steel is sometimes annealed through a process called normalizing, leaving the steel only partially softened. Tempering is sometimes used on normalized steels to further soften it, increasing the malleability and machinability for easier metalworking. Tempering may also be used on welded steel, to relieve some of the stresses and excess hardness created in the heat affected zone around the weld. Quenched steel Tempering is most often performed on steel that has been heated above its upper critical (A3) temperature and then quickly cooled, in a process called quenching, using methods such as immersing the hot steel in water, oil, or forced air. The quenched steel, being placed in or very near its hardest possible state, is then tempered to incrementally decrease the hardness to a point more suitable for the desired application. The hardness of the quenched steel depends on both cooling speed and on the composition of the alloy. Steel with a high carbon content will reach a much harder state than steel with a low carbon content. Likewise, tempering high-carbon steel to a certain temperature will produce steel that is considerably harder than low-carbon steel that is tempered at the same temperature. The amount of time held at the tempering temperature also has an effect. Tempering at a slightly elevated temperature for a shorter time may produce the same effect as tempering at a lower temperature for a longer time, as quantified in the sketch following this passage. Tempering times vary, depending on the carbon content, size, and desired application of the steel, but typically range from a few minutes to a few hours. Tempering quenched steel at very low temperatures will usually not have much effect other than a slight relief of some of the internal stresses and a decrease in brittleness. Tempering at somewhat higher temperatures will produce a slight reduction in hardness but will primarily relieve much of the internal stresses. In some steels with low alloy content, tempering within a certain intermediate range causes a decrease in ductility and an increase in brittleness; this is referred to as the "tempered martensite embrittlement" (TME) range. Except in the case of blacksmithing, this range is usually avoided. Steels requiring more strength than toughness, such as tools, are usually not tempered above this range. Instead, a variation in hardness is usually produced by varying only the tempering time. When increased toughness is desired at the expense of strength, higher tempering temperatures are used. 
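The time-temperature equivalence mentioned above is commonly quantified with the Hollomon-Jaffe tempering parameter. That model is standard in heat treatment but is not named in the article, so the following Python sketch is illustrative only (the material constant C is an assumption, roughly 20 for many steels):

    import math

    def hollomon_jaffe(temp_c: float, hours: float, c: float = 20.0) -> float:
        """Hollomon-Jaffe tempering parameter: HP = T * (C + log10(t)) / 1000,
        with T in kelvin, t in hours, and C a material constant (~20 for many steels)."""
        return (temp_c + 273.15) * (c + math.log10(hours)) / 1000

    # Two schedules with nearly equal parameters give a similar tempered hardness:
    print(f"{hollomon_jaffe(600, 1):.2f}")   # 1 h at 600 C  -> 17.46
    print(f"{hollomon_jaffe(558, 10):.2f}")  # 10 h at 558 C -> 17.45 (nearly equivalent)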
Tempering at even higher temperatures will produce excellent toughness, but at a serious reduction in strength and hardness. At still higher temperatures, the steel may experience another stage of embrittlement, called "temper embrittlement" (TE), which occurs if the steel is held within the temper embrittlement range for too long. When heated above this range, the steel is usually not held for any length of time, and is quickly cooled to avoid temper embrittlement. Normalized steel Steel that has been heated above its upper critical temperature and then cooled in standing air is called normalized steel. Normalized steel consists of pearlite, martensite, and sometimes bainite grains, mixed together within the microstructure. This produces steel that is much stronger than full-annealed steel, and much tougher than tempered quenched steel. However, added toughness is sometimes needed, even at a reduction in strength. Tempering provides a way to carefully decrease the hardness of the steel, thereby increasing the toughness to a more desirable point. Cast steel is often normalized rather than annealed, to decrease the amount of distortion that can occur. Tempering can further decrease the hardness, increasing the ductility to a point more like annealed steel. Tempering is often used on carbon steels, producing much the same results. The process, called "normalize and temper", is used frequently on steels such as 1045 carbon steel, or most other steels containing 0.35 to 0.55% carbon. These steels are usually tempered after normalizing, to increase the toughness and relieve internal stresses. This can make the metal more suitable for its intended use and easier to machine. Welded steel Steel that has been arc welded, gas welded, or welded in any other manner besides forge welded, is affected in a localized area by the heat from the welding process. This localized area, called the heat-affected zone (HAZ), consists of steel that varies considerably in hardness, from normalized steel to steel nearly as hard as quenched steel near the edge of this heat-affected zone. Thermal contraction from the uneven heating, solidification, and cooling creates internal stresses in the metal, both within and surrounding the weld. Tempering is sometimes used in place of stress relieving (even heating and cooling of the entire object to just below the A1 temperature) to both reduce the internal stresses and to decrease the brittleness around the weld. Localized tempering is often used on welds when the construction is too large, intricate, or otherwise too inconvenient to heat the entire object evenly. Tempering temperatures for this purpose are generally moderate, well below the lower critical temperature. Quench and self-temper Modern reinforcing bar of 500 MPa strength can be made from expensive microalloyed steel or by a quench and self-temper (QST) process. After the bar exits the final rolling pass, where the final shape of the bar is applied, the bar is then sprayed with water, which quenches the outer surface of the bar. The bar speed and the amount of water are carefully controlled in order to leave the core of the bar unquenched. The hot core then tempers the already quenched outer part, leaving a bar with high strength but with a certain degree of ductility too. Blacksmithing Tempering was originally a process used and developed by blacksmiths (forgers of iron). The process was most likely developed by the Hittites of Anatolia (modern-day Turkey), in the twelfth or eleventh century BC. 
Without knowledge of metallurgy, tempering was originally devised through a trial-and-error method. Because few methods of precisely measuring temperature existed until modern times, the temperature was usually judged by watching the tempering colors of the metal. Tempering often consisted of heating above a charcoal or coal forge, or by fire, so holding the work at exactly the right temperature for the correct amount of time was usually not possible. Tempering was usually performed by slowly, evenly overheating the metal, as judged by the color, and then immediately cooling, either in open air or by immersing it in water. This produced much the same effect as heating at the proper temperature for the right amount of time, and avoided embrittlement by tempering within a short time period. However, although tempering-color guides exist, this method of tempering usually requires a good amount of practice to perfect, because the final outcome depends on many factors, including the composition of the steel, the speed at which it was heated, the type of heat source (oxidizing or carburizing), the cooling rate, oil films or impurities on the surface, and many other circumstances which vary from smith to smith or even from job to job. The thickness of the steel also plays a role. With thicker items, it becomes easier to heat only the surface to the right temperature, before the heat can penetrate through. However, very thick items may not be able to harden all the way through during quenching. Tempering colors If steel has been freshly ground, sanded, or polished, it will form an oxide layer on its surface when heated. As the temperature of the steel is increased, the thickness of the iron oxide will also increase. Although iron oxide is not normally transparent, such thin layers do allow light to pass through, reflecting off both the upper and lower surfaces of the layer. This causes a phenomenon called thin-film interference, which produces colors on the surface. As the thickness of this layer increases with temperature, it causes the colors to change from a very light yellow, to brown, to purple, and then to blue. These colors appear at very precise temperatures and provide the blacksmith with a very accurate gauge for measuring the temperature. The various colors, their corresponding temperatures, and some of their uses are:
  Faint-yellow – 176 °C (349 °F) – gravers, razors, scrapers
  Light-straw – 205 °C (401 °F) – rock drills, reamers, metal-cutting saws
  Dark-straw – 226 °C (439 °F) – scribers, planer blades
  Brown – 260 °C (500 °F) – taps, dies, drill bits, hammers, cold chisels
  Purple – 282 °C (540 °F) – surgical tools, punches, stone carving tools
  Dark blue – 310 °C (590 °F) – screwdrivers, wrenches
  Light blue – 337 °C (639 °F) – springs, wood-cutting saws
  Grey-blue – 371 °C (700 °F) and higher – structural steel
For carbon steel, beyond the grey-blue color the iron oxide loses its transparency, and the temperature can no longer be judged in this way, although other alloys like stainless steel may produce a much broader range including golds, teals, and magentas. The layer will also increase in thickness as time passes, which is another reason overheating and immediate cooling are used. Steel in a tempering oven, held at 205 °C (401 °F) for a long time, will begin to turn brown, purple, or blue, even though the temperature did not exceed that needed to produce a light-straw color. Oxidizing or carburizing heat sources may also affect the final result. The iron oxide layer, unlike rust, also protects the steel from corrosion through passivation. 
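Since the color-temperature correspondence above is effectively a lookup table, it is easy to encode. A small Python sketch using the temperatures listed (values are approximate and, as the text notes, steel- and surface-dependent):

    # Tempering colors and approximate temperatures (deg C) for plain carbon steel,
    # as listed above; real colors depend on alloy, surface finish, and heating time.
    TEMPER_COLORS = [
        ("faint-yellow", 176), ("light-straw", 205), ("dark-straw", 226),
        ("brown", 260), ("purple", 282), ("dark blue", 310),
        ("light blue", 337), ("grey-blue", 371),
    ]

    def color_at(temp_c: float) -> str:
        """Return the tempering color expected at a given temperature."""
        name = "none (too cool to color)"
        for color, threshold in TEMPER_COLORS:
            if temp_c >= threshold:
                name = color
        return name

    print(color_at(290))  # purple (at or above 282, below 310)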
Differential tempering Differential tempering is a method of providing different amounts of temper to different parts of the steel. The method is often used in bladesmithing, for making knives and swords, to provide a very hard edge while softening the spine or center of the blade. This increases the toughness while maintaining a very hard, sharp, impact-resistant edge, helping to prevent breakage. This technique was more often found in Europe, as opposed to the differential hardening techniques more common in Asia, such as in Japanese swordsmithing. Differential tempering consists of applying heat to only a portion of the blade, usually the spine, or the center of double-edged blades. For single-edged blades, the heat, often in the form of a flame or a red-hot bar, is applied to the spine of the blade only. The blade is then carefully watched as the tempering colors form and slowly creep toward the edge. The heat is then removed before the light-straw color reaches the edge. The colors will continue to move toward the edge for a short time after the heat is removed, so the smith typically removes the heat a little early, so that the pale yellow just reaches the edge, and travels no farther. A similar method is used for double-edged blades, but the heat source is applied to the center of the blade, allowing the colors to creep out toward each edge. Interrupted quenching Interrupted quenching methods are often referred to as tempering, although the processes are very different from traditional tempering. These methods consist of quenching to a specific temperature that is above the martensite start (Ms) temperature, and then holding at that temperature for extended amounts of time. Depending on the temperature and the amount of time, this allows either pure bainite to form, or holds off forming the martensite until much of the internal stresses relax. These methods are known as austempering and martempering. Austempering Austempering is a technique used to form pure bainite, a transitional microstructure found between pearlite and martensite. In normalizing, both upper and lower bainite are usually found mixed with pearlite. To avoid the formation of pearlite or martensite, the steel is quenched in a bath of molten metals or salts. This quickly cools the steel past the point where pearlite can form and into the bainite-forming range. The steel is then held at the bainite-forming temperature, beyond the point where the temperature reaches an equilibrium, until the bainite fully forms. The steel is then removed from the bath and allowed to air-cool, without the formation of either pearlite or martensite. Depending on the holding temperature, austempering can produce either upper or lower bainite. Upper bainite is a laminate structure formed at temperatures typically above 350 °C and is a much tougher microstructure. Lower bainite is a needle-like structure, produced at temperatures below 350 °C, and is stronger but much more brittle. In either case, austempering produces greater strength and toughness for a given hardness, which is determined mostly by composition rather than cooling speed, and reduced internal stresses which could lead to breakage. This produces steel with superior impact resistance. Modern punches and chisels are often austempered. Because austempering does not produce martensite, the steel does not require further tempering. 
Martempering Martempering is similar to austempering, in that the steel is quenched in a bath of molten metal or salts to quickly cool it past the pearlite-forming range. However, in martempering, the goal is to create martensite rather than bainite. The steel is quenched to a much lower temperature than is used for austempering; to just above the martensite start temperature (a common way to estimate this temperature is sketched after this passage). The metal is then held at this temperature until the temperature of the steel reaches an equilibrium. The steel is then removed from the bath before any bainite can form, and then is allowed to air-cool, turning it into martensite. The interruption in cooling allows much of the internal stresses to relax before the martensite forms, decreasing the brittleness of the steel. However, the martempered steel will usually need to undergo further tempering to adjust the hardness and toughness, except in rare cases where maximum hardness is needed but the accompanying brittleness is not. Modern files are often martempered. Physical processes Tempering involves a three-step process in which unstable martensite decomposes into ferrite and unstable carbides, and finally into stable cementite, forming various stages of a microstructure called tempered martensite. The martensite typically consists of laths (strips) or plates, sometimes appearing acicular (needle-like) or lenticular (lens-shaped). Depending on the carbon content, it also contains a certain amount of "retained austenite." Retained austenite consists of crystals which are unable to transform into martensite, even after quenching below the martensite finish (Mf) temperature. An increase in alloying agents or carbon content causes an increase in retained austenite. Austenite has much higher stacking-fault energy than martensite or pearlite, lowering the wear resistance and increasing the chances of galling, although some or most of the retained austenite can be transformed into martensite by cold and cryogenic treatments prior to tempering. The martensite forms during a diffusionless transformation, in which the transformation occurs due to shear stresses created in the crystal lattices rather than by chemical changes that occur during precipitation. The shear stresses create many defects, or "dislocations," between the crystals, providing less-stressful areas for the carbon atoms to relocate. Upon heating, the carbon atoms first migrate to these defects and then begin forming unstable carbides. This reduces the amount of total martensite by changing some of it to ferrite. Further heating reduces the martensite even more, transforming the unstable carbides into stable cementite. The first stage of tempering occurs at the lowest tempering temperatures. In the first stage, carbon precipitates into ε-carbon (Fe2.4C). In the second stage, occurring at somewhat higher temperatures, the retained austenite transforms into a form of lower bainite containing ε-carbon rather than cementite (archaically referred to as "troostite"). The third stage occurs at higher temperatures still. In the third stage, ε-carbon precipitates into cementite, and the carbon content in the martensite decreases. If tempered at even higher temperatures, or for longer amounts of time, the martensite may become fully ferritic and the cementite may become coarser or more spherical. In spheroidized steel, the cementite network breaks apart and recedes into rods or spherical-shaped globules, and the steel becomes softer than annealed steel; nearly as soft as pure iron, making it very easy to form or machine. 
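One widely used estimate of the martensite start temperature mentioned above is the Andrews (1965) linear relation based on composition; it is a standard empirical fit, not taken from this article, so the sketch below is illustrative only and the example composition is hypothetical:

    def martensite_start_andrews(c, mn=0.0, ni=0.0, cr=0.0, mo=0.0):
        """Estimate Ms (deg C) via Andrews' 1965 linear relation; inputs in wt%.
        An empirical fit, valid only for low-alloy steels (roughly C < 0.6 wt%)."""
        return 539 - 423 * c - 30.4 * mn - 17.7 * ni - 12.1 * cr - 7.5 * mo

    # Hypothetical 0.45% C, 0.7% Mn steel (roughly 1045-like composition):
    ms = martensite_start_andrews(c=0.45, mn=0.7)
    print(f"Estimated Ms: {ms:.0f} deg C")  # ~327 deg C; the quench bath sits just above this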
Embrittlement Embrittlement occurs during tempering when, through a specific temperature range, the steel experiences an increase in hardness and a reduction in ductility, as opposed to the normal decrease in hardness that occurs on either side of this range. The first type is called tempered martensite embrittlement (TME) or one-step embrittlement. The second is referred to as temper embrittlement (TE) or two-step embrittlement. One-step embrittlement usually occurs in carbon steel at temperatures around 260 °C (500 °F), and was historically referred to as "500 degree [Fahrenheit] embrittlement." This embrittlement occurs due to the precipitation of Widmanstatten needles or plates, made of cementite, in the interlath boundaries of the martensite. Impurities such as phosphorus, or alloying agents like manganese, may increase the embrittlement, or alter the temperature at which it occurs. This type of embrittlement is permanent, and can only be relieved by heating above the upper critical temperature and then quenching again. However, these microstructures usually require an hour or more to form, so are usually not a problem in the blacksmith method of tempering. Two-step embrittlement typically occurs by aging the metal within a critical temperature range, or by slowly cooling it through that range; impurities like phosphorus and sulfur increase the effect dramatically. This generally occurs because the impurities are able to migrate to the grain boundaries, creating weak spots in the structure. The embrittlement can often be avoided by quickly cooling the metal after tempering. Two-step embrittlement, however, is reversible. The embrittlement can be eliminated by heating the steel above the critical range and then quickly cooling. 
However, during tempering, elements like chromium, vanadium, and molybdenum precipitate with the carbon. If the steel contains fairly low concentrations of these elements, the softening of the steel can be retarded until much higher temperatures are reached, when compared to those needed for tempering carbon steel. This allows the steel to maintain its hardness in high-temperature or high-friction applications. However, this also requires very high temperatures during tempering, to achieve a reduction in hardness. If the steel contains large amounts of these elements, tempering may produce an increase in hardness until a specific temperature is reached, at which point the hardness will begin to decrease. For instance, molybdenum steels typically reach their highest hardness at a lower tempering temperature than vanadium steels, which must be tempered to a higher temperature to harden fully. When very large amounts of solutes are added, alloy steels may behave like precipitation-hardening alloys, which do not soften at all during tempering. Cast iron Cast iron comes in many types, depending on the carbon content. However, they are usually divided into grey and white cast iron, depending on the form that the carbides take. In grey cast iron, the carbon is mainly in the form of graphite, but in white cast iron, the carbon is usually in the form of cementite. Grey cast iron consists mainly of the microstructure called pearlite, mixed with graphite and sometimes ferrite. Grey cast iron is usually used as-cast, with its properties being determined by its composition. White cast iron is composed mostly of a microstructure called ledeburite mixed with pearlite. Ledeburite is very hard, making cast iron very brittle. If the white cast iron has a hypoeutectic composition, it is usually tempered to produce malleable or ductile cast iron. Two methods of tempering are used, called "white tempering" and "black tempering." The purpose of both tempering methods is to cause the cementite within the ledeburite to decompose, increasing the ductility. White tempering Malleable (porous) cast iron is manufactured by white tempering. White tempering is used to burn off excess carbon, by heating it for extended amounts of time in an oxidizing environment. The cast iron will usually be held at temperatures as high as 1,000 °C (1,830 °F) for as long as 60 hours. The heating is followed by a slow cooling rate of around 10 °C (18 °F) per hour. The entire process may last 160 hours or more. This causes the cementite to decompose from the ledeburite, and then the carbon burns out through the surface of the metal, increasing the malleability of the cast iron. Black tempering Ductile (non-porous) cast iron (often called "black iron") is produced by black tempering. Unlike white tempering, black tempering is done in an inert gas environment, so that the decomposing carbon does not burn off. Instead, the decomposing carbon turns into a type of graphite called "temper graphite" or "flaky graphite," increasing the malleability of the metal. Tempering is usually performed at temperatures as high as 950 °C (1,740 °F) for up to 20 hours. The tempering is followed by slow cooling through the lower critical temperature, over a period that may last from 50 to over 100 hours. Precipitation hardening alloys Precipitation-hardening alloys first came into use during the early 1900s. Most heat-treatable alloys fall into the category of precipitation-hardening alloys, including alloys of aluminum, magnesium, titanium, and nickel. Several high-alloy steels are also precipitation-hardening alloys. 
These alloys become softer than normal when quenched and then harden over time. For this reason, precipitation hardening is often referred to as "aging." Although most precipitation-hardening alloys will harden at room temperature, some will only harden at elevated temperatures and, in others, the process can be sped up by aging at elevated temperatures. Aging at temperatures higher than room temperature is called "artificial aging". Although the method is similar to tempering, the term "tempering" is usually not used to describe artificial aging, because the physical processes (i.e., the precipitation of intermetallic phases from a supersaturated alloy), the desired results (i.e., strengthening rather than softening), and the amount of time held at a given temperature are very different from tempering as used in carbon steel. See also Annealing (metallurgy) Austempering Precipitation strengthening Tempered glass References Further reading Manufacturing Processes Reference Guide by Robert H. Todd, Dell K. Allen, and Leo Alting, p. 410 External links A thorough discussion of tempering processes Webpage showing heating glow and tempering colors Metal heat treatments
Tempering (metallurgy)
Chemistry
6,776
19,022
https://en.wikipedia.org/wiki/Measurement
Measurement is the quantification of attributes of an object or event, which can be used to compare with other objects or events. In other words, measurement is a process of determining how large or small a physical quantity is as compared to a basic reference quantity of the same kind. The scope and application of measurement are dependent on the context and discipline. In the natural sciences and engineering, measurements do not apply to nominal properties of objects or events, which is consistent with the guidelines of the International vocabulary of metrology published by the International Bureau of Weights and Measures. However, in other fields such as statistics as well as the social and behavioural sciences, measurements can have multiple levels, which would include nominal, ordinal, interval and ratio scales. Measurement is a cornerstone of trade, science, technology and quantitative research in many disciplines. Historically, many measurement systems existed for the varied fields of human existence to facilitate comparisons in these fields. Often these were achieved by local agreements between trading partners or collaborators. Since the 18th century, developments progressed towards unifying, widely accepted standards that resulted in the modern International System of Units (SI). This system reduces all physical measurements to a mathematical combination of seven base units. The science of measurement is pursued in the field of metrology. Measurement is defined as the process of comparison of an unknown quantity with a known or standard quantity. Methodology The measurement of a property may be categorized by the following criteria: type, magnitude, unit, and uncertainty. They enable unambiguous comparisons between measurements. The level of measurement is a taxonomy for the methodological character of a comparison. For example, two states of a property may be compared by ratio, difference, or ordinal preference. The type is commonly not explicitly expressed, but implicit in the definition of a measurement procedure. The magnitude is the numerical value of the characterization, usually obtained with a suitably chosen measuring instrument. A unit assigns a mathematical weighting factor to the magnitude that is derived as a ratio to the property of an artifact used as standard or a natural physical quantity. An uncertainty represents the random and systematic errors of the measurement procedure; it indicates a confidence level in the measurement. Errors are evaluated by methodically repeating measurements and considering the accuracy and precision of the measuring instrument. Standardization of measurement units Measurements most commonly use the International System of Units (SI) as a comparison framework. The system defines seven fundamental units: kilogram, metre, candela, second, ampere, kelvin, and mole. All of these units are defined without reference to a particular physical object which serves as a standard. Artifact-free definitions fix measurements at an exact value related to a physical constant or other invariable phenomena in nature, in contrast to standard artifacts which are subject to deterioration or destruction. Instead, the measurement unit can only ever change through increased accuracy in determining the value of the constant it is tied to. The first proposal to tie an SI base unit to an experimental standard independent of fiat was by Charles Sanders Peirce (1839–1914), who proposed to define the metre in terms of the wavelength of a spectral line. 
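The error-evaluation step described under Methodology above (methodically repeated measurements) can be made concrete with a minimal sketch. The readings below are hypothetical, and reporting the mean together with the standard error of the mean is one standard way of stating a result with its uncertainty; this is an illustration, not a procedure taken from this article:

```python
# Minimal sketch (hypothetical readings): characterize repeated
# measurements by their mean and the standard error of the mean.
import statistics

readings = [9.78, 9.82, 9.80, 9.79, 9.83]  # hypothetical repeated readings

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)             # sample standard deviation
standard_error = stdev / len(readings) ** 0.5  # uncertainty of the mean

print(f"result: {mean:.3f} +/- {standard_error:.3f}")
```

Repeating the measurement more times shrinks the standard error roughly as one over the square root of the number of readings, which is why methodical repetition narrows the stated uncertainty.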
This directly influenced the Michelson–Morley experiment; Michelson and Morley cited Peirce and improved on his method. Standards With the exception of a few fundamental quantum constants, units of measurement are derived from historical agreements. Nothing inherent in nature dictates that an inch has to be a certain length, nor that a mile is a better measure of distance than a kilometre. Over the course of human history, however, first for convenience and then for necessity, standards of measurement evolved so that communities would have certain common benchmarks. Laws regulating measurement were originally developed to prevent fraud in commerce. Units of measurement are generally defined on a scientific basis, overseen by governmental or independent agencies, and established in international treaties, pre-eminent of which is the General Conference on Weights and Measures (CGPM), established in 1875 by the Metre Convention, overseeing the International System of Units (SI). For example, the metre was redefined in 1983 by the CGPM in terms of the speed of light, the kilogram was redefined in 2019 in terms of the Planck constant, and the international yard was defined in 1960 by the governments of the United States, United Kingdom, Australia and South Africa as being exactly 0.9144 metres. In the United States, the National Institute of Standards and Technology (NIST), a division of the United States Department of Commerce, regulates commercial measurements. In the United Kingdom, the role is performed by the National Physical Laboratory (NPL), in Australia by the National Measurement Institute, in South Africa by the Council for Scientific and Industrial Research, and in India by the National Physical Laboratory of India. Units and systems A unit is a known or standard quantity in terms of which other physical quantities are measured. Imperial and US customary systems Before SI units were widely adopted around the world, the British systems of English units and later imperial units were used in Britain, the Commonwealth and the United States. The system came to be known as U.S. customary units in the United States and is still in use there and in a few Caribbean countries. These various systems of measurement have at times been called foot-pound-second systems, after the Imperial units for length, weight and time, even though the tons, hundredweights, gallons, and nautical miles, for example, are different for the U.S. units. Many Imperial units remain in use in Britain, which has officially switched to the SI system, with a few exceptions such as road signs, which are still in miles. Draught beer and cider must be sold by the imperial pint, and milk in returnable bottles can be sold by the imperial pint. Many people measure their height in feet and inches and their weight in stone and pounds, to give just a few examples. Imperial units are used in many other places; for example, in many Commonwealth countries that are considered metricated, land area is measured in acres and floor space in square feet, particularly for commercial transactions (rather than government statistics). Similarly, gasoline is sold by the gallon in many countries that are considered metricated. Metric system The metric system is a decimal system of measurement based on its units for length, the metre, and for mass, the kilogram. It exists in several variations, with different choices of base units, though these do not affect its day-to-day use. 
Since the 1960s, the International System of Units (SI) has been the internationally recognised metric system. Metric units of mass, length, and electricity are widely used around the world for both everyday and scientific purposes. International System of Units The International System of Units (abbreviated as SI from the French-language name Système International d'Unités) is the modern revision of the metric system. It is the world's most widely used system of units, both in everyday commerce and in science. The SI was developed in 1960 from the metre–kilogram–second (MKS) system, rather than the centimetre–gram–second (CGS) system, which, in turn, had many variants. The SI units for the seven base physical quantities are the second (time), the metre (length), the kilogram (mass), the ampere (electric current), the kelvin (thermodynamic temperature), the mole (amount of substance), and the candela (luminous intensity). In the SI, base units are the simple measurements for time, length, mass, temperature, amount of substance, electric current and light intensity. Derived units are constructed from the base units; for example, the watt, i.e. the unit for power, is defined from the base units as m²·kg·s⁻³. Other physical properties may be measured in compound units, such as material density, measured in kg/m³. Converting prefixes The SI allows easy multiplication when switching among units having the same base but different prefixes. To convert from metres to centimetres it is only necessary to multiply the number of metres by 100, since there are 100 centimetres in a metre. Inversely, to switch from centimetres to metres one multiplies the number of centimetres by 0.01, or divides the number of centimetres by 100. Length A ruler or rule is a tool used in, for example, geometry, technical drawing, engineering, and carpentry, to measure lengths or distances or to draw straight lines. Strictly speaking, the ruler is the instrument used to rule straight lines and the calibrated instrument used for determining length is called a measure; however, common usage calls both instruments rulers, and the special name straightedge is used for an unmarked rule. The use of the word measure, in the sense of a measuring instrument, only survives in the phrase tape measure, an instrument that can be used to measure but cannot be used to draw straight lines. Folding and retracting designs keep these instruments portable: a two-metre carpenter's rule can be folded down to a length of only 20 centimetres, to easily fit in a pocket, and a five-metre-long tape measure easily retracts to fit within a small housing. Time Time is an abstract measurement of elemental changes over a non-spatial continuum. It is denoted by numbers and/or named periods such as hours, days, weeks, months and years. It is an apparently irreversible series of occurrences within this non-spatial continuum. It is also used to denote an interval between two relative points on this continuum. Mass Mass refers to the intrinsic property of all material objects to resist changes in their momentum. Weight, on the other hand, refers to the downward force produced when a mass is in a gravitational field. In free fall, objects lack weight but retain their mass. The Imperial units of mass include the ounce, pound, and ton. The metric units gram and kilogram are units of mass. One device for measuring weight or mass is called a weighing scale or, often, simply a scale. A spring scale measures force but not mass; a balance compares weight; both require a gravitational field to operate. 
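As a minimal sketch of the prefix arithmetic described under "Converting prefixes" above, the following illustrative function (the name convert and the small prefix table are ad hoc, not from any standard library) multiplies by the ratio of two powers of ten:

```python
# Illustrative SI prefix conversion: switching prefixes on the same base
# unit is multiplication by a power of ten. The table lists standard
# prefix factors; the function itself is an ad hoc example.
SI_PREFIX = {"k": 1e3, "h": 1e2, "da": 1e1, "": 1.0,
             "d": 1e-1, "c": 1e-2, "m": 1e-3}

def convert(value: float, from_prefix: str, to_prefix: str) -> float:
    """Convert a quantity between two prefixed forms of the same unit."""
    return value * SI_PREFIX[from_prefix] / SI_PREFIX[to_prefix]

print(convert(2.5, "", "c"))    # 2.5 m  -> 250.0 cm
print(convert(250.0, "c", ""))  # 250 cm -> 2.5 m
```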
Some of the most accurate instruments for measuring weight or mass are based on load cells with a digital read-out, but they require a gravitational field to function and would not work in free fall. Economics The measures used in economics are physical measures, nominal price value measures and real price measures. These measures differ from one another by the variables they measure and by the variables excluded from measurements. Survey research In the field of survey research, measures are taken from individual attitudes, values, and behavior using questionnaires as a measurement instrument. Like all other measurements, measurement in survey research is vulnerable to measurement error, i.e. the departure between the true value of the measurement and the value provided using the measurement instrument. In substantive survey research, measurement error can lead to biased conclusions and wrongly estimated effects. In order to get accurate results, when measurement errors appear, the results need to be corrected for measurement errors. Exactness designation The following rules generally apply for displaying the exactness of measurements: All non-zero digits and any zeros appearing between them are significant for the exactness of any number. For example, the number 12000 has two significant digits, and has implied limits of 11500 and 12500. Additional zeros may be added after a decimal separator to denote a greater exactness, increasing the number of decimals. For example, 1 has implied limits of 0.5 and 1.5, whereas 1.0 has implied limits of 0.95 and 1.05. Difficulties Since accurate measurement is essential in many fields, and since all measurements are necessarily approximations, a great deal of effort must be made to make measurements as accurate as possible. For example, consider the problem of measuring the time it takes an object to fall a distance of one metre (about 39 in). Using physics, it can be shown that, in the gravitational field of the Earth, it should take any object about 0.45 seconds to fall one metre. However, the following are just some of the sources of error that arise: The computation used 9.8 m/s² for the acceleration of gravity, a value that is not exact but precise only to two significant digits. The Earth's gravitational field varies slightly depending on height above sea level and other factors. The computation of 0.45 seconds involved extracting a square root, a mathematical operation that required rounding off to some number of significant digits, in this case two significant digits. Additionally, other sources of experimental error include carelessness, uncertainty in determining the exact time at which the object is released and the exact time it hits the ground, error in the measurements of the height and of the time, air resistance, and the posture of human participants. Scientific experiments must be carried out with great care to eliminate as much error as possible, and to keep error estimates realistic. Definitions and theories Classical definition In the classical definition, which is standard throughout the physical sciences, measurement is the determination or estimation of ratios of quantities. Quantity and measurement are mutually defined: quantitative attributes are those possible to measure, at least in principle. The classical concept of quantity can be traced back to John Wallis and Isaac Newton, and was foreshadowed in Euclid's Elements. 
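The falling-object example in the "Difficulties" section can be checked directly; this short sketch simply reproduces the article's own numbers (h = 1 m, g = 9.8 m/s², two significant digits):

```python
# Worked version of the falling-body example: t = sqrt(2h / g).
import math

g = 9.8  # m/s^2, acceleration of gravity, precise to two significant digits
h = 1.0  # m, drop height

t = math.sqrt(2 * h / g)
print(f"{t:.2f} s")  # prints 0.45 s, matching the text
```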
Representational theory In the representational theory, measurement is defined as "the correlation of numbers with entities that are not numbers". The most technically elaborated form of representational theory is also known as additive conjoint measurement. In this form of representational theory, numbers are assigned based on correspondences or similarities between the structure of number systems and the structure of qualitative systems. A property is quantitative if such structural similarities can be established. In weaker forms of representational theory, such as that implicit within the work of Stanley Smith Stevens, numbers need only be assigned according to a rule. The concept of measurement is often misunderstood as merely the assignment of a value, but it is possible to assign a value in a way that is not a measurement in terms of the requirements of additive conjoint measurement. One may assign a value to a person's height, but unless it can be established that there is a correlation between measurements of height and empirical relations, it is not a measurement according to additive conjoint measurement theory. Likewise, computing and assigning arbitrary values, like the "book value" of an asset in accounting, is not a measurement because it does not satisfy the necessary criteria. Three types of representational theory Empirical relation In science, an empirical relationship is a relationship or correlation based solely on observation rather than theory. An empirical relationship requires only confirmatory data, irrespective of theoretical basis. The rule of mapping The real world is the domain of the mapping, and the mathematical world is the range. When we map an attribute to a mathematical system, we have many choices for the mapping and for the range. The representation condition of measurement Information theory All data are inexact and statistical in nature. Thus the definition of measurement is: "A set of observations that reduce uncertainty where the result is expressed as a quantity." This definition is implied in what scientists actually do when they measure something and report both the mean and statistics of the measurements. In practical terms, one begins with an initial guess as to the expected value of a quantity, and then, using various methods and instruments, reduces the uncertainty in the value. In this view, unlike the positivist representational theory, all measurements are uncertain, so instead of assigning one value, a range of values is assigned to a measurement. This also implies that there is not a clear or neat distinction between estimation and measurement. Quantum mechanics In quantum mechanics, a measurement is an action that determines a particular property (such as position, momentum, or energy) of a quantum system. Quantum measurements are always statistical samples from a probability distribution; the distribution for many quantum phenomena is discrete, not continuous. Quantum measurements alter quantum states, and yet repeated measurements on a quantum state are reproducible. The measurement appears to act as a filter, changing the quantum state into one with the single measured quantum value. The unambiguous meaning of the quantum measurement is an unresolved fundamental problem in quantum mechanics; the most common interpretation is that when a measurement is performed, the wavefunction of the quantum system "collapses" to a single, definite value. Biology In biology, there is generally no well-established theory of measurement. 
However, the importance of the theoretical context is emphasized. Moreover, the theoretical context stemming from the theory of evolution leads to the articulation of the theory of measurement with historicity as a fundamental notion. Among the most developed fields of measurement in biology are the measurement of genetic diversity and species diversity. See also Conversion of units Electrical measurements History of measurement ISO 10012, Measurement management systems Levels of measurement List of humorous units of measurement List of unusual units of measurement Measurement in quantum mechanics Measurement uncertainty NCSL International Observable quantity Orders of magnitude Quantification (science) Standard (metrology) Timeline of temperature and pressure measurement technology Timeline of time measurement technology Weights and measures References External links Schlaudt, Oliver 2020: "measurement". In: Kirchhoff, Thomas (ed.): Online Encyclopedia Philosophy of Nature. Heidelberg: Universitätsbibliothek Heidelberg, measurement. Tal, Eran 2020: "Measurement in Science". In: Zalta, Edward N. (ed.): The Stanford Encyclopedia of Philosophy (Fall 2020 ed.), Measurement in Science. A Dictionary of Units of Measurement 'Metrology – in short' 3rd ed., July 2008 Accuracy and precision Metrology
Measurement
Physics,Mathematics
3,524
25,038,967
https://en.wikipedia.org/wiki/Gray%27s%20paradox
Gray's Paradox is a paradox posed in 1936 by British zoologist Sir James Gray. The paradox was to figure out how dolphins can obtain such high speeds and accelerations with what appears to be a small muscle mass. Gray made an estimate of the power a dolphin could exert based on its physiology, and concluded that the power was insufficient to overcome the drag forces in water. He hypothesized that dolphins' skin must have special anti-drag properties. In 2008, researchers from Rensselaer Polytechnic Institute, West Chester University and the University of California, Santa Cruz used digital particle image velocimetry to show that Gray's assumptions oversimplified the relationship between muscle power and drag force. Timothy Wei, professor and acting dean of Rensselaer's School of Engineering, videotaped two bottlenose dolphins, Primo and Puka, as they swam through a section of water populated with hundreds of thousands of tiny air bubbles. Computer software and force measurement tools developed for aerospace were then used to study the particle-image velocimetry, which was captured at 1,000 frames per second (fps). This allowed the team to measure the force exerted by a dolphin. Results showed the dolphin to exert approximately 200 lb of force every time it thrust its tail – 10 times more than Gray hypothesized – and between 300 and 400 lb at peak. Wei also used this technique to film dolphins as they were doing tail-stands, a trick where the dolphins “walk” on water by holding most of their bodies vertical above the water while supporting themselves with short, powerful thrusts of their tails. In 2009, researchers from the National Chung Hsing University in Taiwan introduced new concepts of “kidnapped airfoils” and “circulating horsepower” to explain the swimming capabilities of the swordfish. Swordfish swim at even higher speeds and accelerations than dolphins. The researchers claim their analysis also "solves the perplexity of dolphin’s Gray paradox". Gray's flawed assumption Prior research efforts to refute Gray's paradox only looked at the drag-reducing aspect of dolphin skin, but never questioned Gray's basic assumption "that drag cannot be greater than muscle work", which led to the paradox in the first place. In 2014, a team of theoretical mechanical engineers from Northwestern University showed the underlying hypothesis of Gray's paradox to be wrong. They showed mathematically that the drag power on undulatory swimmers (such as dolphins) can indeed be greater than the muscle power generated to propel the swimmer forward, without being paradoxical. They introduced the concept of an "energy cascade" to show that during steady swimming all of the generated muscle power is dissipated in the wake of the swimmer (through viscous dissipation). A swimmer uses muscle power to undulate its body, which causes it to experience both drag and thrust simultaneously. Muscle power generated should therefore be equated to the power needed to deform the body, rather than to the drag power; the drag power should instead be equated to the thrust power, because during steady swimming drag and thrust are equal in magnitude but opposite in direction. Their findings can be summarized in a simple power balance: P_muscle = P_deformation, in which, during steady swimming, P_drag = P_thrust. It is important to acknowledge that a swimmer does not have to overcome drag through its muscle work alone; it is also assisted by the thrust force in this task. 
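Gray's original reasoning was an order-of-magnitude comparison of quasi-steady drag power against available muscle power. The sketch below makes that comparison concrete; every number in it (density, drag coefficient, wetted area, speed, muscle mass, specific muscle power) is an illustrative assumption, not a value from Gray's paper or from the studies above:

```python
# Hedged, illustrative Gray-style estimate: compare quasi-steady drag
# power 0.5*rho*Cd*A*v^3 against muscle power from an assumed muscle mass.
rho = 1025.0  # kg/m^3, seawater density
Cd = 0.004    # assumed drag coefficient (referred to wetted area)
A = 1.5       # m^2, assumed wetted surface area
v = 10.0      # m/s, assumed sprint speed

drag_power = 0.5 * rho * Cd * A * v**3  # watts needed to "tow" the body

muscle_mass = 30.0     # kg, assumed propulsive muscle
specific_power = 40.0  # W/kg, assumed sustainable muscle output
muscle_power = muscle_mass * specific_power  # watts available

print(f"drag ~ {drag_power:.0f} W, muscle ~ {muscle_power:.0f} W")
```

With assumptions like these the towing estimate exceeds the muscle estimate, which is the apparent contradiction Gray faced; the 2014 analysis above explains why equating muscle power directly to drag power is the wrong bookkeeping.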
Their research also shows that the drag on a swimming body is a matter of definition, and many definitions of drag on the swimming body are prevalent in the literature. Some of these definitions can give a value higher than the muscle power. However, this does not lead to any paradox, because a higher drag also means a higher thrust in the power balance equation, and this does not violate any energy-balance principles. References Notes Fish, Frank (2005) A porpoise for power. The Journal of Experimental Biology, Classics, 208: 977–978. External links Gray's Paradox on Science Daily Dolphins Zoology Biomechanics Paradoxes
Gray's paradox
Physics,Biology
820
11,460,686
https://en.wikipedia.org/wiki/Rhytidhysteron%20rufulum
Rhytidhysteron rufulum is a saprobic ascomycete able to infect humans. References External links Index Fungorum Fungal citrus diseases Enigmatic Dothideomycetes taxa Fungi described in 1820 Taxa named by Kurt Polycarp Joachim Sprengel Fungus species
Rhytidhysteron rufulum
Biology
61
35,899,529
https://en.wikipedia.org/wiki/Glucuronoxylan
Glucuronoxylans are the primary components of hemicellulose as found in hardwood trees, for example birch. They are hemicellulosic plant cell wall polysaccharides, containing glucuronic acid and xylose as their main constituents. They are linear polymers of β-D-xylopyranosyl units linked by (1→4) glycosidic bonds, with many of the xylose units carrying glucuronate residues linked at position 2, position 3, or both; these residues are often methylated at position 4. Most glucuronoxylans have single 4-O-methyl-α-D-glucopyranosyl uronate residues (MeGlcA) attached at position 2. This structural type is usually named 4-O-methyl-D-glucurono-D-xylan (MGX). Angiosperm (hardwood) glucuronoxylans also have a high degree of substitution (70–80%) by acetyl groups, at position 2 and/or 3 of the β-D-xylopyranosyl units, conferring on the xylan its partial solubility in water. References Polysaccharides
Glucuronoxylan
Chemistry,Biology
274
24,328,126
https://en.wikipedia.org/wiki/CIEMAT
The Centre for Energy, Environmental and Technological Research (CIEMAT), until 1986 known as the Junta de Energía Nuclear (JEN), is a Spanish public research institution. History The Centre for Energy, Environmental and Technological Research (CIEMAT) is a Spanish public research institution which specializes in energy and the environment. It is attached to the General Secretariat for Research of the Ministry of Science and Innovation. In September 1948, Francisco Franco, by means of a decree of a classified nature, created the Board of Atomic Investigations, or Junta de Investigaciones Atómicas (JIA), constituted on 8 October 1948 and formed by José María Otero de Navascués (director-general and president until 1974), Manuel Lora-Tamayo, Armando Durán Miranda and José Ramón Sobredo i Rioboo. In 1951, after its secret phase ended, it was renamed the Board of Nuclear Energy, or Junta de Energía Nuclear (JEN), under the presidency of General Juan Vigón and with Otero de Navascués as head of the main directorate (he would later serve as its president again). It has since carried out research and technological development projects, serving as a reference to technically represent Spain in international forums and to advise public administrations on matters within its areas of research. In 1956, Guillermo Velarde joined the Division of Theoretical Physics of the Junta, and was later named Director of Technology, a post that included the Divisions of Electronics, Theory and Calculation of Reactors, Nuclear Fusion, Engineering and Reactors in Operation. References External links Research institutes in the Community of Madrid Government agencies of Spain Governmental nuclear organizations Nuclear power in Spain Nuclear research institutes Nuclear technology organizations of Spain Complutense University of Madrid
CIEMAT
Engineering
343
23,218,433
https://en.wikipedia.org/wiki/Rhodotus
Rhodotus is a genus in the fungus family Physalacriaceae. There are two species in the genus, the better known of which is Rhodotus palmatus, called the netted rhodotus, the rosy veincap, or the wrinkled peach. This uncommon species has a circumboreal distribution, and has been collected in eastern North America, northern Africa, Europe, and Asia; declining populations in Europe have led to its appearance in over half of the European fungal Red Lists of threatened species. Typically found growing on the stumps and logs of rotting hardwoods, mature specimens may usually be identified by the pinkish color and the distinctive ridged and veined surface of their rubbery caps; variations in the color and quantity of light received during development lead to variations in the size, shape, and cap color of fruit bodies. The unique characteristics of R. palmatus have made it difficult for taxonomists to agree on how it should be classified, resulting in an elaborate taxonomical history and an extensive synonymy. First named Agaricus palmatus by Bulliard in 1785, it was reclassified into several different genera before becoming Rhodotus in 1926. The familial placement of the genus Rhodotus within the order Agaricales has also been subject to dispute, and the taxon has been transferred variously to the families Amanitaceae, Entolomataceae, and Tricholomataceae. More recently, molecular phylogenetics analysis has helped determine that Rhodotus is most closely related to genera in the Physalacriaceae. History and etymology The type species of the genus Rhodotus was originally described as Agaricus palmatus in 1785 by French botanist Jean Bulliard; mycologist Elias Magnus Fries later included it under the same name in his Systema Mycologicum. It was transferred to the then newly described genus Rhodotus in a 1926 publication by French mycologist René Maire. The specific epithet is derived from the Latin palmatus, meaning "shaped like a hand"—possibly a reference to the resemblance of the cap surface to the lines in the palm of a hand. Common names for R. palmatus include the netted rhodotus, the rosy veincap, and the wrinkled peach. Synonymy French botanist Claude Gillet called the species Pleurotus subpalmatus in 1876. A 1986 paper reported that the species Pleurotus pubescens, first described by American mycologist Charles Horton Peck in 1891, was the same as Rhodotus palmatus, making their names synonymous. According to the same publication, another synonym is Lentinula reticeps, described by William Alphonso Murrill in 1915, who thought it to be synonymous with Agaricus reticeps (described by Montagne in 1856), Agaricus reticulatus (Johnson, 1880), Agaricus alveolatus (Cragin, 1885), Pluteus alveolatus (Saccardo, 1887), and Panus meruliiceps (Peck, 1905). Taxonomy The placement of the genus Rhodotus in the order Agaricales is uncertain, and various authors have offered solutions to the taxonomic conundrum. In 1951, Agaricales specialist Rolf Singer placed Rhodotus in the Amanitaceae because of similarities between the tribes Amaniteae and Rhodoteae, such as spore color and ornamentation (modifications of the spore wall that result in surface irregularities), the structure of the hyphae and trama, and chlamydospore production during culture growth. In 1953, French mycologists Robert Kühner and Henri Romagnesi placed Rhodotus in the family Tricholomataceae—a traditional "wastebasket taxon"—on the basis of spore color. 
In 1969, Besson argued for the placement of Rhodotus with the Entolomataceae after studying the ultrastructure of the spores. By 1986, Singer had revised the placement of Rhodotus in his latest edition of The Agaricales in Modern Taxonomy, noting that "It has formerly been inserted in the family Amanitaceae but is obviously closer to tribus Pseudohiatuleae of the Tricholomataceae." Tribe Pseudohiatuleae included such genera as Flammulina, Pseudohiatula, Cyptotrama, and Callistodermatium. In 1988, a proposal was made to split the Tricholomataceae into several new families, including a family, Rhodotaceae, to contain the problematic genus. The use of molecular phylogenetics has helped to clarify the proper taxonomic placement of Rhodotus. Studies of the ribosomal DNA sequences from a wide variety of agaric fungi have corroborated Kühner and Romagnesi's placement of Rhodotus in the Tricholomataceae as then understood. A large-scale phylogenetic analysis published in 2005 showed Rhodotus to be in the "core euagarics clade", a name given to a grouping of gilled mushrooms corresponding largely to the suborder Agaricineae as defined by Singer (1986), but also including taxa that were traditionally classified in the Aphyllophorales (e.g., Clavaria, Typhula, Fistulina, Schizophyllum, etc.) and several orders of Gasteromycetes (e.g., Hymenogastrales, Lycoperdales, Nidulariales). These results corroborated a previous study which showed Rhodotus to be part of a clade containing species such as Cyptotrama asprata, Marasmius trullisatus, Flammulina velutipes, Xerula furfuracea, Gloiocephala menieri, and Armillaria tabescens. The genera containing these latter species have been reassigned to the family Physalacriaceae; as of 2009, both Index Fungorum and MycoBank also list Rhodotus as belonging to the Physalacriaceae. Follow-up molecular genetics surveys of Physalacriaceae fungi in China identified Rhodotus asperior as the second member of the genus. Characteristics The fruit body of Rhodotus has a cap and stem, without a ring or volva. The cap initially assumes a convex shape before flattening somewhat with age. The edges of the cap are rolled inwards, and the cap surface typically has a conspicuous network of lightly colored ridges or veins that outline deep and narrow grooves or pits—a condition technically termed sulcate or reticulate. Between the ridges, the surface color is somewhat variable; depending on the lighting conditions experienced by the mushroom during its development, it may range from salmon-orange to pink to red. The texture of the cap surface is gelatinous, and the internal flesh is firm but rubbery, and pinkish in color. The gills have an adnate attachment to the stem, that is, they are broadly attached to the stem along all or most of the gill width. The gills are thick, packed close to each other, with veins and color similar to, but paler than, the cap. Some of the gills do not extend the full distance from the edge of the cap to the stem. These short gills, called lamellulae, form two to four groups of roughly equal length. The stem is relatively tall and thick, usually slightly larger near the base, and may be attached to the underside of the cap in a central or lateral manner. Like the cap color, stem size is also affected by the type of light received during fruit body maturation. In nature, Rhodotus palmatus is sometimes seen "bleeding" a red- or orange-colored liquid. 
A similar phenomenon has also been observed when it is grown in laboratory culture on a petri dish: the orange-colored drops that appear on the mat formed by the fungal mycelia precede the initial appearance of fruit bodies. The mature fruit body will turn green when exposed to a 10% aqueous solution of iron(II) sulfate (FeSO4), a common mushroom identification test known as the iron salts test. Microscopic features In deposit, the spore color of Rhodotus palmatus has been described most commonly as pink, but also as cream colored. Viewed microscopically, the spores of Rhodotus have a roughly spherical shape, with dimensions of 6–7.2 by 5.6–6.5 μm; the spore surface is marked with numerous wart-like projections (defined as verrucose), typically 0.5–0.7 μm long. The spores are non-amyloid—unable to take up iodine stain in the chemical test with Melzer's reagent. The spore-bearing cells, the basidia, are club-shaped and 4-spored, with dimensions of 33.6–43.2 by 5.6–8 μm. Although this species lacks cells called pleurocystidia (large sterile cells found on the gill face in some mushrooms), it contains abundant cheilocystidia (large sterile cells found on the gill edge) that are 27.2–48 by 4.8–8 μm in size. Clamp connections are present in the hyphae. The outer cellular layer of the cap cuticle is made of bladder-shaped, thick-walled hyphae, each individually supported by a small stalk that extends down into a "gelatinized zone". Chlamydospores are asexual reproductive units made by some fungi that allow them to exist solely as mycelium, a process which helps them survive over periods unsuitable for growth; Rhodotus was shown experimentally to be capable of producing these structures in 1906. The chlamydospores of Rhodotus are thick-walled cells that develop from single hyphal compartments. Edibility Depending on the source consulted, the edibility of Rhodotus palmatus is typically listed as unknown or inedible. The species has no distinguishable odor, and a "bitter" taste, although one early description referred to the taste as "sweet". Antimicrobial activity As part of a Spanish research study to evaluate the antimicrobial activity of mushrooms, Rhodotus palmatus was one of 204 species screened against a panel of human clinical pathogens and laboratory control strains. Using a standard laboratory method to determine antimicrobial susceptibility, the mushroom was shown to have moderate antibacterial activity against Bacillus subtilis, and weak antifungal activity against both Saccharomyces cerevisiae and Aspergillus fumigatus. Habitat and distribution Rhodotus palmatus is saprobic, meaning it obtains nutrients from decomposing organic matter. It grows scattered or clustered in small groups on rotting hardwoods, such as basswood, maple, and especially elm; in Europe it is known to grow on horse chestnut. The mushroom prefers low-lying logs in areas that are periodically flooded and that receive little sunlight, such as areas shaded by forest canopy. A pioneer species in the fungal colonization of dead wood, it prefers to grow on relatively undecayed substrates. It is often found growing on dark-stained wood, especially the dried-out upper parts of trunks that have lost their bark. R. palmatus tends to fruit in cooler and moister weather, from spring to autumn in the United States, or autumn to winter in Britain and Europe. Described as having a circumboreal distribution, R. 
palmatus has been reported from Canada, Iran, Hungary, Italy, Poland, Slovakia, Denmark, Sweden, Norway, Germany, the area formerly known as the USSR, Korea, Japan, and New Zealand. In the United States it has been found in Indiana, and elsewhere in eastern North America. Although often described as "rare", a 1997 study suggests that it may be relatively common in Illinois. It has been suggested that an increase in the number of dead elms, a byproduct of Dutch elm disease, has contributed to its resurgence. Light requirements Light at the red end of the visible spectrum has been observed to be required for the development of R. palmatus fruit bodies, contrary to the typical requirement for blue light seen with many other mushroom species. Fruiting occurs in the presence of green, yellow or red light with wavelengths above 500 nm, but only when blue light (under 500 nm) is absent. Consequently, phenotypic variations observed in the field—such as size, shape, and cap color—may be influenced by differing conditions of light color and intensity. For example, specimens grown in the laboratory under green light had fruit bodies with short, straight stems and pale orange, large caps with well-developed ridges and pits, an appearance similar to specimens found in the field growing under a canopy of green leaves. Laboratory-grown specimens under amber light had bright orange, small caps with less pronounced reticulations; similarly, field specimens found in the fall, after the leaves had fallen, were more orange to orange-pink in color. Conservation status In the 1980s in Europe, increases in the levels of air pollution, as well as changing land-use practices, coincided with reports of declines in the populations of certain mushrooms. Consequently, a number of fungal conservation initiatives were started to better understand fungal biodiversity; as of October 2007, 31 European countries had produced fungal Red Lists of threatened species. Rhodotus palmatus is a candidate species in over half of the European fungal Red Lists, and is listed as critically endangered, endangered, or near threatened (or the equivalent) in 12 countries. In the Baltic countries of Estonia, Latvia, and Lithuania, it is considered by the Environmental Protection Ministries (a branch of government charged with implementing the Convention on Biological Diversity) to be regionally extinct, reported as "extinct or probably extinct". It was one of 35 fungal species to gain legal protection in Hungary in 2005, making picking it a fineable offense. Notes Cited text External links Rhodotus at Index Fungorum Picture of spores Fungi of Africa Fungi of Asia Fungi of Europe Fungi of New Zealand Fungi of North America Fungi of Oceania Inedible fungi Agaricales genera Physalacriaceae Fungi described in 1785 Taxa named by René Maire Fungus species
Rhodotus
Biology
3,010
6,095,467
https://en.wikipedia.org/wiki/Erland%20Samuel%20Bring
Erland Samuel Bring (19 August 1736 – 20 May 1798) was a Swedish mathematician. Bring studied at Lund University between 1750 and 1757. In 1762 he obtained a position as a reader in history and was promoted to professor in 1779. At Lund he wrote eight volumes of mathematical work in the fields of algebra, geometry, analysis and astronomy, including Meletemata quaedam mathematica circa transformationem aequationum algebraicarum (1786). This work describes Bring's contribution to the algebraic solution of equations. Bring had developed an important transformation to simplify a general quintic equation to the form x^5 + px + q = 0 (see Bring radical). In 1832–35 the same transformation was independently derived by George Jerrard. However, whereas Jerrard knew from the earlier work of Paolo Ruffini and Niels Henrik Abel that a general quintic equation cannot be solved in radicals, this fact was not known to Bring, putting him at a disadvantage. Bring's curve is named after him. References 1736 births 1798 deaths 18th-century Swedish mathematicians Algebraists
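A schematic of the reduction, in standard modern notation (the Bring–Jerrard normal form; the quartic Tschirnhaus substitution shown is the textbook device, not Bring's own notation):

```latex
% General quintic and its Bring-Jerrard normal form. The coefficients
% \alpha, \beta, \gamma, \delta of the quartic Tschirnhaus substitution
% are chosen, using only auxiliary equations of low degree, so that the
% transformed equation has no terms of degree 4, 3 or 2.
\[
  x^5 + a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0 = 0
\]
\[
  y = x^4 + \alpha x^3 + \beta x^2 + \gamma x + \delta
  \quad\Longrightarrow\quad
  y^5 + p\,y + q = 0
\]
```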
Erland Samuel Bring
Mathematics
213
10,825,303
https://en.wikipedia.org/wiki/Leslie%20Kay%20%28engineer%29
Leslie Kay (14 January 1922 – 2 June 2020) was a British–New Zealand electrical engineer, particularly known for the development of ultrasonic devices to assist the blind. Early life and family Kay was born in Chester-le-Street, County Durham, England, on 14 January 1922, the son of a colliery manager. He left school at the age of 14, accepted an electrical apprenticeship at the local colliery managed by his father, and took night classes in electrical engineering. In 1940, Kay joined the Royal Air Force, training as a pilot, but was later posted as an aircraft engineer because of his engineering background. His role included modifying aircraft, recalibrating their instruments, and test-flying the planes to ensure their airworthiness. In 1944, Kay married Nora Waters, and the couple went on to have three children. Career England After World War II, Kay studied for a Bachelor of Engineering degree at the Newcastle campus of Durham University, graduating in 1948. He later joined the Admiralty as a civilian scientist based at the Isle of Portland. He was involved in the development of underwater transmitting sonar for the identification of submarines, mines and torpedoes, undertaking research both on land and at sea. He also took part in naval operations, and was in a submarine off Port Said during the Suez Crisis. After discovering that details of the technology that he had helped to develop had been passed to the Soviet Union, Kay chose to move to an academic post at the University of Birmingham, where he established the Department of Electrical Engineering. At Birmingham, Kay initially continued researching underwater ultrasonic technology, but was inspired to investigate air sonar to assist blind people to navigate after watching blind children learning to swim. This led to his study of the way that bats navigate, and to the development of devices for blind people. Kay was awarded a PhD by Birmingham on the basis of eight papers on echolocation by humans and animals that he either authored or co-authored. New Zealand In 1965, Kay and his family migrated to New Zealand, where he took up a post at the University of Canterbury in Christchurch. He was appointed to a personal chair in 1982. Kay served as a member of the University Grants Committee, head of the Department of Electrical and Electronic Engineering, and dean of the School of Engineering at Canterbury. At Canterbury, Kay continued his work improving devices for the blind, as well as applying ultrasonic technology to applications in medicine, robotics, diving and fishing. He developed an international reputation for his work, particularly for the sonic torch, allowing blind people to avoid obstacles, sonic spectacles, and the Trisensor Aid, allowing blind children to be trained in spatial awareness. Kay became a naturalised New Zealander in 1979. When he retired from the University of Canterbury in 1986, Kay was conferred the title of professor emeritus. He continued his research independently, establishing Bay Advanced Technologies, a research business in Russell, to further refine devices for the blind. In 1999, he received the Saatchi and Saatchi Prize for innovation, recognising his lifetime's contribution to the field. Honours and awards In 1971, Kay was elected a Fellow of the Royal Society of New Zealand, the first engineer to be so honoured. 
He was also awarded fellowships of the New Zealand Institution of Engineers (now Engineering New Zealand), the Institution of Electrical Engineers and the Institution of Electronic and Radio Engineers (both now part of the Institution of Engineering and Technology). In the 1988 New Year Honours, Kay was appointed an Officer of the Order of the British Empire, for services to electrical and electronic engineering. Later life and death Kay's wife, Nora, died in 2004, and he retired from active research in 2006. In retirement, Kay lived with his daughter in Southland, where he died on 2 June 2020. References 1922 births 2020 deaths People from Chester-le-Street Royal Air Force personnel of World War II English electrical engineers Civil servants in the Admiralty People of the Cold War Alumni of the University of Birmingham Academics of the University of Birmingham British emigrants to New Zealand New Zealand electrical engineers Naturalised citizens of New Zealand Academic staff of the University of Canterbury Fellows of the Royal Society of New Zealand Fellows of the Institution of Engineering and Technology New Zealand Officers of the Order of the British Empire Alumni of King's College, Newcastle
Leslie Kay (engineer)
Engineering
866
14,390,885
https://en.wikipedia.org/wiki/Effective%20molarity
In chemistry, the effective molarity (denoted EM) is defined as the ratio between the first-order rate constant of an intramolecular reaction and the second-order rate constant of the corresponding intermolecular reaction (kinetic effective molarity) or the ratio between the equilibrium constant of an intramolecular reaction and the equilibrium constant of the corresponding intermolecular reaction (thermodynamic effective molarity). EM has the dimension of concentration. High EM values always indicate greater ease of intramolecular processes over the corresponding intermolecular ones. Effective molarities can be used to get a deeper understanding of the effects of intramolecularity on reaction courses. See also Cyclic compound Intramolecular reaction Macrocycle Polymerization References Physical organic chemistry
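Restating the definition symbolically (a direct transcription of the sentence above, with bracketed dimensions added for clarity):

```latex
% Kinetic effective molarity: first-order intramolecular rate constant
% over the second-order intermolecular rate constant.
\[
  \mathrm{EM} = \frac{k_{\mathrm{intra}}\;[\mathrm{s}^{-1}]}
                     {k_{\mathrm{inter}}\;[\mathrm{M}^{-1}\,\mathrm{s}^{-1}]}
  \qquad\Longrightarrow\qquad
  [\mathrm{EM}] = \mathrm{M}
\]
% The thermodynamic analogue replaces rate constants with equilibrium
% constants: EM = K_intra / K_inter, again with units of concentration.
```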
Effective molarity
Chemistry
164
947,875
https://en.wikipedia.org/wiki/Johns%20Manville
Johns Manville is an American company based in Denver, Colorado, that manufactures insulation, roofing materials and engineered products. For much of the 20th century, the then-titled Johns-Manville Corporation was the global leader in the manufacture of asbestos-containing products, including asbestos pipe insulation, asbestos shingles, asbestos roofing materials and asbestos cement pipe. The stock of Johns-Manville Corporation was included in the Dow Jones Industrial Average from January 29, 1930 to August 27, 1982, when it was replaced by American Express. In 1981, Johns-Manville Corporation was renamed simply Manville. In 1982, facing unprecedented liability for asbestos injury claims, the company voluntarily filed for bankruptcy under Chapter 11 of the U.S. Bankruptcy Code. Berkshire Hathaway bought the company in 2001; then-chairman and CEO Jerry Henry retired in 2004. At that point, Steve Hochhauser became chairman, president and CEO. Todd Raba succeeded him in the summer of 2007; he came from MidAmerican Energy Holdings, another Berkshire Hathaway company. In November 2012, Mary Rhinehart was named president and CEO, and she added the title of chairman in 2014. In September 2020, Bob Wamboldt became CEO and president, while Rhinehart remained as chairman. Today, Johns Manville is a manufacturer and marketer of products for building insulation, mechanical insulation, commercial roofing and roof insulation, as well as fibers and non-woven materials for commercial, industrial and residential applications. The company serves markets that include aerospace, automotive and transportation, air handling, appliance, HVAC, pipe and equipment, filtration, waterproofing, building, flooring, interiors and wind energy. Johns Manville has annual sales of over $4.5 billion. The company employs 8,000 people and operates 46 manufacturing facilities in North America and Europe. History Early years The present-day Johns Manville company traces its origins to two early manufacturers of construction materials. At the age of 21, Henry Ward Johns had already patented roofing and insulation products. In 1858, he founded the H.W. Johns Manufacturing Company in New York City. In 1885, the Manville Covering Company was established in Wisconsin by Charles B. Manville, whose grandson was the much-married socialite Tommy Manville. In 1901, the H.W. Johns Manufacturing Company and the Manville Covering Company merged to form the H.W. Johns-Manville Company. In 1926, the firm was renamed the Johns-Manville Corporation. During the 1930s, industrialist Lewis H. Brown was president of the company. In 1949, the Canadian branch of the company was involved in the Asbestos Strike at its mines in Asbestos, Quebec. In 1958, Johns-Manville bought Glass Fibers, Inc., based in Toledo, Ohio, from Randolph Barnard. This purchase propelled the company's insulation division. At that time, Dominick Labino was working for Glass Fibers; Barnard and Labino both joined Johns-Manville. Glass Fibers had several plants in Waterville and Defiance, which are still in operation under Johns Manville. Beginning just after World War II, sculptor Beverly Bender spent thirty-two years working in the art department of Johns-Manville, creating animal sculpture in her free time. Asbestos litigation and bankruptcy Starting as early as 1929, Johns-Manville employees began claiming disability from lung diseases. The claims settled out of court, with a secrecy order. 
In 1943, Samac Laboratory in New York confirmed the link between asbestos and cancer, but Johns-Manville suppressed the report. From approximately 1930 to 1950, attorney Vandiver Brown handled the company's involvement in such lawsuits. Files and testimony alleged that "[Johns-Manville] maintained a policy into the 1970s of not telling its employes that their physical examinations showed signs of asbestosis". During the 1960s, 1970s and 1980s, the company faced thousands of individual and class action lawsuits based on asbestos-related injuries such as asbestosis, lung cancer and malignant mesothelioma. Many new settlements included offering $600 for asbestosis, while the FAIR Act called for $12,000 for this condition level. As a result, the company voluntarily filed for Chapter 11 bankruptcy protection in 1982. At that time, it was the largest company in United States history to have done so. The filing shocked financial analysts, but a few, such as Gary J. Aguirre, had predicted the filing and had forced the company to post a bond to guarantee payment to their clients. The bankruptcy was resolved by the formation of the Manville Trust, which pays asbestos tort claimants in an orderly fashion and was given the lion's share of the equity in the company. The bankruptcy took over five years to process and resulted in protracted litigation. The Manville Trust is still in operation today. Post-bankruptcy The company emerged from Chapter 11 in 1988 as the Manville Corporation. In 1997, the company changed its name back to Johns Manville (but without the hyphen), and this is the name under which it does business today. In 2001, Johns Manville became a wholly owned subsidiary of Berkshire Hathaway. In 2012, Johns Manville appointed a new CEO, Mary Rhinehart. She had been the CFO for Johns Manville and had been with the company for over 33 years. In 2020, Bob Wamboldt became the president and CEO, while Rhinehart remained as chairman. Manville, New Jersey The town of Manville, New Jersey, is named for the company, which had a large manufacturing plant in the borough. References External links About the class action suit in the Asbestos Hazards Handbook (London Hazards Centre) Johns Manville 150 year commemoration publication Home Insulation site Building Materials site Energy Tax Credit Insulation Manville Trust Glassmaking companies of the United States Asbestos Companies based in New York City Manufacturing companies based in Denver Manville, New Jersey Manufacturing companies established in 1858 Companies that filed for Chapter 11 bankruptcy in 1982 1858 establishments in New York (state) Former components of the Dow Jones Industrial Average Berkshire Hathaway Superfund sites in Illinois
Johns Manville
Environmental_science
1,258
1,121,587
https://en.wikipedia.org/wiki/Schur%20multiplier
In mathematical group theory, the Schur multiplier or Schur multiplicator of a group G is its second homology group H2(G, Z). It was introduced by Issai Schur in his work on projective representations. Examples and properties The Schur multiplier M(G) of a finite group G is a finite abelian group whose exponent divides the order of G. If a Sylow p-subgroup of G is cyclic for some p, then the order of M(G) is not divisible by p. In particular, if all Sylow p-subgroups of G are cyclic, then M(G) is trivial. For instance, the Schur multiplier of the nonabelian group of order 6 is the trivial group since every Sylow subgroup is cyclic. The Schur multiplier of the elementary abelian group of order 16 is an elementary abelian group of order 64, showing that the multiplier can be strictly larger than the group itself. The Schur multiplier of the quaternion group is trivial, but the Schur multiplier of dihedral 2-groups has order 2. The Schur multipliers of the finite simple groups are given at the list of finite simple groups. The covering groups of the alternating and symmetric groups are of considerable recent interest. Relation to projective representations Schur's original motivation for studying the multiplier was to classify projective representations of a group, and the modern formulation of his definition is the second cohomology group H2(G, C×), where C× is the multiplicative group of nonzero complex numbers. A projective representation is much like a group representation except that instead of a homomorphism into the general linear group GL(n, C), one takes a homomorphism into the projective general linear group PGL(n, C). In other words, a projective representation is a representation modulo the center. Schur showed that every finite group G has associated to it at least one finite group C, called a Schur cover, with the property that every projective representation of G can be lifted to an ordinary representation of C. The Schur cover is also known as a covering group or Darstellungsgruppe. The Schur covers of the finite simple groups are known, and each is an example of a quasisimple group. The Schur cover of a perfect group is uniquely determined up to isomorphism, but the Schur cover of a general finite group is only determined up to isoclinism. Relation to central extensions The study of such covering groups led naturally to the study of central and stem extensions. A central extension of a group G is an extension 1 → K → C → G → 1 where K is a subgroup of the center of C. A stem extension of a group G is an extension 1 → K → C → G → 1 where K is a subgroup of the intersection of the center of C and the derived subgroup of C; this is more restrictive than central. If the group G is finite and one considers only stem extensions, then there is a largest size for such a group C, and for every C of that size the subgroup K is isomorphic to the Schur multiplier of G. If the finite group G is moreover perfect, then C is unique up to isomorphism and is itself perfect. Such C are often called universal perfect central extensions of G, or covering groups (as a discrete analog of the universal covering space in topology). If the finite group G is not perfect, then its Schur covering groups (all such C of maximal order) are only isoclinic. It is also called more briefly a universal central extension, but note that there is no largest central extension, as the direct product of G and an abelian group forms a central extension of G of arbitrary size. Stem extensions have the nice property that any lift of a generating set of G is a generating set of C. 
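The link between projective representations and the cohomology group above can be written out in standard cocycle language (conventional notation, not spelled out in the article itself):

```latex
% Lifting a projective representation \rho: G -> PGL(n, C) to matrices
% produces a factor set c, a 2-cocycle with values in C^x:
\[
  \rho(g)\,\rho(h) = c(g,h)\,\rho(gh), \qquad c \in Z^2(G, \mathbb{C}^{\times}).
\]
% The class [c] in H^2(G, C^x) is the obstruction to lifting: it
% vanishes exactly when \rho comes from an ordinary linear
% representation of G itself.
```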
If the group G is presented in terms of a free group F on a set of generators, and a normal subgroup R generated by a set of relations on the generators, so that G ≅ F/R, then the covering group itself can be presented in terms of F but with a smaller normal subgroup S, that is, C ≅ F/S. Since the relations of G specify elements of K = R/S when considered as part of C, and K must be central in C, one must have [F, R] ≤ S. In fact if G is perfect, this is all that is needed: C ≅ [F,F]/[F,R] and M(G) ≅ K ≅ R/[F,R]. Because of this simplicity, expositions often handle the perfect case first. The general case for the Schur multiplier is similar but ensures the extension is a stem extension by restricting to the derived subgroup of F: M(G) ≅ (R ∩ [F, F])/[F, R]. These are all slightly later results of Schur, who also gave a number of useful criteria for calculating them more explicitly. Relation to efficient presentations In combinatorial group theory, a group often originates from a presentation. One important theme in this area of mathematics is to study presentations with as few relations as possible, such as one-relator groups like Baumslag–Solitar groups. These groups are infinite groups with two generators and one relation, and an old result of Schreier shows that in any presentation with more generators than relations, the resulting group is infinite. The borderline case is thus quite interesting: finite groups with the same number of generators as relations are said to have deficiency zero. For a group to have deficiency zero, the group must have a trivial Schur multiplier because the minimum number of generators of the Schur multiplier is always less than or equal to the difference between the number of relations and the number of generators, which is the negative deficiency. An efficient group is one where the Schur multiplier requires this number of generators. A fairly recent topic of research is to find efficient presentations for all finite simple groups with trivial Schur multipliers. Such presentations are in some sense nice because they are usually short, but they are difficult to find and to work with because they are ill-suited to standard methods such as coset enumeration. Relation to topology In topology, groups can often be described as finitely presented groups and a fundamental question is to calculate their integral homology Hn(G, Z). In particular, the second homology plays a special role and this led Heinz Hopf to find an effective method for calculating it. The method, found by Hopf in 1942, is also known as Hopf's integral homology formula and is identical to Schur's formula for the Schur multiplier of a finite group: H2(G, Z) ≅ (R ∩ [F, F])/[F, R], where G ≅ F/R and F is a free group. The same formula also holds when G is a perfect group. The recognition that these formulas were the same led Samuel Eilenberg and Saunders Mac Lane to the creation of cohomology of groups. In general, H2(G, C×) ≅ (H2(G, Z))∗, where the star denotes the algebraic dual group. Moreover, when G is finite, there is an unnatural isomorphism H2(G, C×) ≅ H2(G, Z). The Hopf formula for H2(G, Z) has been generalised to higher dimensions. For one approach and references see the paper by Everaert, Gran and Van der Linden listed below. A perfect group is one whose first integral homology vanishes. A superperfect group is one whose first two integral homology groups vanish. The Schur covers of finite perfect groups are superperfect. An acyclic group is a group all of whose reduced integral homology vanishes.
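The triviality statements above can be checked directly against Hopf's formula. The following worked example (my own illustration, not part of the original article) computes the Schur multiplier of a finite cyclic group from its standard one-generator presentation:

```latex
% Worked example: Hopf's formula for a cyclic group G = Z/nZ.
% Present G = F/R with F free on one generator x and R generated by x^n.
\[
  G \;\cong\; F/R, \qquad F = \langle x \rangle, \qquad R = \langle x^n \rangle .
\]
% F is free of rank 1, hence abelian, so its derived subgroup is trivial:
\[
  [F,F] = 1 \quad\Longrightarrow\quad R \cap [F,F] = 1 .
\]
% Hopf's formula then gives a trivial multiplier, as expected for cyclic groups:
\[
  M(G) \;=\; H_2(G,\mathbb{Z}) \;\cong\; \frac{R \cap [F,F]}{[F,R]} \;=\; 1 .
\]
```

This is consistent with the Sylow criterion stated earlier: all Sylow subgroups of a cyclic group are cyclic, so the multiplier must be trivial.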
Applications The second algebraic K-group K2(R) of a commutative ring R can be identified with the second homology group H2(E(R), Z) of the group E(R) of (infinite) elementary matrices with entries in R. See also Quasisimple group The references from Clair Miller give another view of the Schur Multiplier as the kernel of a morphism κ: G ∧ G → G induced by the commutator map. Notes References Errata Group theory Homological algebra Issai Schur
Schur multiplier
Mathematics
1,582
8,563,704
https://en.wikipedia.org/wiki/Antistatic%20device
An antistatic device is any device that reduces, dampens, or otherwise inhibits electrostatic discharge, or ESD, which is the buildup or discharge of static electricity. ESD can damage electrical components such as computer hard drives, and even ignite flammable liquids and gases. Many methods exist for neutralizing static electricity, varying in use and effectiveness depending on the application. Antistatic agents are chemical compounds that can be added to an object, or the packaging of an object, to help deter the buildup or discharge of static electricity. For the neutralization of static charge in a larger area, such as a factory floor, semiconductor cleanroom or workshop, antistatic systems may utilize electron emission effects such as corona discharge or photoemission that introduce ions into the area that combine with and neutralize any electrically charged object. In many situations, sufficient ESD protection can be achieved with electrical grounding. Symbology Various symbols can be found on products, indicating that the product is electrostatically sensitive, as with sensitive electrical components, or that it offers antistatic protection, as with antistatic bags. Reach symbol The reaching-hand symbol of ANSI/ESD standard S8.1-2007 is the one most commonly seen in applications related to electronics. Several variations consist of a triangle with a reaching hand depicted inside of it using negative space. Versions of the symbol often have the hand crossed out as a warning that the component being protected is ESD sensitive and is not to be touched unless antistatic precautions are taken. Another version of the symbol has the triangle surrounded by an arc. This variant refers to the antistatic protective device, such as an antistatic wrist strap, rather than the component being protected. It usually does not feature the crossed-out hand, indicating that it makes contact with the component safe. Circle Another common symbol takes the form of a bold circle being intersected by three arrows. Originating from a U.S. military standard, it has been adopted industry-wide. It is intended as a depiction of a device or component being breached by static charges, indicated by the arrows. Examples Types of antistatic devices include: Antistatic bag An antistatic bag is a bag used for storing or shipping electronic components which may be prone to damage caused by ESD. Ionizing bar An ionizing bar, sometimes referred to as a static bar, is a type of industrial equipment used for removing static electricity from a production line to dissipate static cling and other such phenomena that would disrupt the line. It is important in the manufacturing and printing industries, although it can be used in other applications as well. Ionizing bars are most commonly suspended above a conveyor belt or other apparatus in a production line where the product can pass below it; the distance is usually calibrated for the specific application. The bar works by emitting an ionized corona onto the products below it. If a product on the line has a positive or negative static charge, then as it passes through the ionized aura created by the bar it will attract oppositely charged ions and become electrically neutral. Antistatic garments Antistatic garments or antistatic clothing can be used to prevent damage to electrical components or to prevent fires and explosions when working with flammable liquids and gases.
Antistatic garments are used in many industries such as electronics, communications, telecommunications and defense applications. Antistatic garments have conductive threads in them, creating a wearable version of a Faraday cage. Antistatic garments attempt to shield ESD sensitive devices from harmful static charges from clothing such as wool, silk, and synthetic fabrics on people working with them. For these garments to work properly, they must also be connected to ground with a strap. Most garments are not conductive enough to provide personal grounding, so antistatic wrist and foot straps are also worn. There are three types of static control garments that are compliant to the ANSI/ESD S20.20-2014 standards: 1) static control garment, 2) groundable static control garment, 3) groundable static control garment system. Antistatic mat An antistatic floor mat or ground mat is one of a number of antistatic devices designed to help eliminate static electricity. It does this by having a controlled resistance: a metal mat would keep parts grounded but would short out exposed parts, while an insulating mat would provide no path to ground at all. Typical resistance is on the order of 10^5 to 10^8 ohms between points on the mat and to ground. The mat would need to be grounded (earthed). This is usually accomplished by plugging into the grounded line in an electrical outlet. It is important to discharge at a slow rate; therefore, a resistor should be used in grounding the mat. The resistor, as well as allowing high-voltage charges to leak through to ground, also prevents a shock hazard when working with low-voltage parts. Some ground mats allow one to connect an antistatic wrist strap to them. Versions are designed for placement on both the floor and desk. Antistatic wrist strap An antistatic wrist strap, ESD wrist strap, or ground bracelet is an antistatic device used to safely ground a person working on very sensitive electronic equipment, to prevent the buildup of static electricity on their body, which can result in ESD. It is used in the electronics industry when handling electronic devices which can be damaged by ESD, and also sometimes by people working around explosives, to prevent electric sparks which could set off an explosion. It consists of an elastic band of fabric with fine conductive fibers woven into it, attached to a wire with a clip on the end to connect it to a ground conductor. The fibers are usually made of carbon or carbon-filled rubber, and the strap is bound with a stainless steel clasp or plate. They are usually used in conjunction with an antistatic mat on the workbench, or a special static-dissipating plastic laminate on the workbench surface. The wrist strap is usually worn on the nondominant hand (the left wrist for a right-handed person). It is connected to ground through a coiled retractable cable and 1 megohm resistor, which allows high-voltage charges to leak through but prevents a shock hazard when working with low-voltage parts. Where higher voltages are present, extra resistance (0.75 megohm per 250 V) is added in the path to ground to protect the wearer from excessive currents; this typically takes the form of a 4 megohm resistor in the coiled cable (or, more commonly, a 2 megohm resistor at each end).
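The resistor arithmetic above is easy to check with Ohm's law. The sketch below assumes the extra resistance is added in 0.75 megohm steps per full 250 V of working voltage; that stepping rule is my reading of the text and is stated here as an assumption, not a standard:

```python
# Sketch: worst-case body current through a wrist-strap grounding resistor.
# Values taken from the text: 1 megohm base resistor, 0.75 megohm per 250 V.

def strap_resistance_ohms(working_voltage: float, base_ohms: float = 1e6) -> float:
    """Total resistance: base resistor plus 0.75 Mohm per full 250 V step (assumed rule)."""
    extra_steps = working_voltage // 250
    return base_ohms + extra_steps * 0.75e6

def body_current_ma(voltage: float, resistance_ohms: float) -> float:
    """Worst-case current through the wearer, in milliamps (I = V / R)."""
    return voltage / resistance_ohms * 1e3

for v in (250.0, 1000.0):
    r = strap_resistance_ohms(v)
    print(f"{v:6.0f} V across {r / 1e6:.2f} Mohm -> {body_current_ma(v, r):.2f} mA")
```

Even at 1000 V the added megohms keep the current to a fraction of a milliamp, which is the design intent described above.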
Wrist straps designed for industrial use usually connect to ground connections built into the workplace, via either a standard 4 mm plug or 10 mm press stud, whereas straps designed for consumer use often have a crocodile clip for the ground connection. In addition to wrist straps, ankle and heel straps are used in industry to bleed away accumulated charge from a body. These devices are usually not tethered to earth ground, but instead incorporate high resistance in their construction, and work by dissipating electrical charge to special floor tiles. Such straps are used when workers need to be mobile in a work area and a grounding cable would get in the way. They are used particularly in an operating theatre, where oxygen or explosive anesthetic gases are used. Some wrist straps are "wireless" or "dissipative", and claim to protect against ESD without needing a ground wire, typically by air ionization or corona discharge. These are widely regarded as ineffective, if not fraudulent, and examples have been tested and shown not to work. Professional ESD standards all require wired wrist straps. See also Electrostatic-sensitive device Antistatic agent Electrostatics Bleeder resistor References Electrostatics Digital electronics Electrical safety
Antistatic device
Engineering
1,598
4,418,574
https://en.wikipedia.org/wiki/Macropore
In soil, macropores are defined as cavities that are larger than 75 μm. Functionally, pores of this size host preferential soil solution flow and rapid transport of solutes and colloids. Macropores increase the hydraulic conductivity of soil, allowing water to infiltrate and drain quickly, and shallow groundwater to move relatively rapidly via lateral flow. In soil, macropores are created by plant roots, soil cracks, soil fauna, and by aggregation of soil particles into peds. Macropores can also be found in soil between larger individual mineral particles such as sand or gravel. Macropores may be defined differently in other contexts. Within the context of porous solids (i.e., not porous aggregations such as soil), colloid and surface chemists define macropores as cavities that are larger than 50 nm. Formation of soil macropores Primary particles (sand, silt and clay) in soil are bound together by various agents and under different processes to form soil aggregates (peds). Spaces of different shapes and sizes exist within and between these soil aggregates. The larger spaces between aggregates are called macropores. Macropores can be formed under the influence of physical processes such as wet/dry and freeze/thaw cycles, which result in cracks and fissures in soils. They can also be formed under biological processes, where plant roots and soil organisms play an important role in their formation. Macropores created by biological activities are also called biopores. For example, plant roots create large spaces between soil aggregates with their growth and decay. Soil fauna, especially burrowing species such as earthworms, contribute to the formation of macropores with their movement and activities in soils. In general, the formation of macropores is negatively related to soil depth, as these physical and biological processes diminish with depth. Importance of soil macropores As an important part of soil structure, macropores are vital to the provision of many soil ecosystem services. They allow free movement of water and air, influence transport of chemicals and provide habitats for soil organisms. Therefore, understanding the importance of soil macropores is also critical to achieving sustainable management of our soil resources. Water and air movement Water can move freely under the influence of gravity in soil macropores when compared to micropores (much smaller pores in soils), where water is held by capillary forces. Water also tends to move along paths of least resistance. Connected macropores create these paths and result in so-called preferential flows in soils. Such attributes of macropores allow fast movement of water into and across soils, which can significantly improve soil infiltration rate and permeability. These in turn can help to reduce surface runoff and soil erosion and to prevent flooding. Macropore flow also contributes to groundwater recharge, replenishing water resources. On the other hand, these pores will be filled with air when they do not hold water. An extended network of macropores helps to improve gas exchange between soil and the atmosphere, especially when these macropores are connected to the soil surface. Soil gases such as carbon dioxide and oxygen are important elements of soil respiration. Oxygen is essential to the growth of plant roots and soil organisms while the release of carbon dioxide through respiration is an integral part of the global carbon cycle.
Optimal water and air movement through soils not only provides essential elements to sustain life but is also fundamental to various soil processes such as nutrient cycling. Solute and pollutant transport As macropores facilitate water movement in soils, they also inevitably influence the transport of chemicals which are dissolved in water. As a result, macropores can play a significant role in affecting the cycling of soil nutrients and the distribution of soil pollutants. For instance, while preferential flow paths consisting of macropores enhance the drainage of soil water, dissolved nutrients can be carried away rapidly, leading to an uneven distribution of water as well as chemicals in the soil. When excess chemicals or pollutants are released into groundwater, they can cause water pollution in the receiving water bodies. This is a particular concern for land uses such as agriculture, as it affects the effectiveness of irrigation and fertilization and contributes to environmental pollution. For example, excessive nitrate converted from nitrogen fertilizers can be washed into groundwater under heavy rainfall or irrigation. Subsequently, a high level of nitrate in drinking water can cause health concerns. Habitats for soil organisms Because macropores allow easy movement of water and air, they provide favourable spaces for plant root growth and habitats for soil organisms. Consequently, these pores, with various residing soil organisms such as earthworms and larvae, also become important locations of soil biochemical processes that affect the overall soil quality. Characteristics of macropore network Soil macropores are not uniform but have an irregular geometry. They vary in shapes, sizes, and even surface roughness. When connected together, they form specific networks in soils. Therefore, the characteristics of these macropore networks can have significant influences on their functions in soils, especially in relation to water movement, aeration, and plant root growth. Connectivity The interconnectedness of soil macropores affects the capability of soil to conduct water and thus controls its water infiltration and hydraulic conductivity. Higher connectivity of soil macropores is usually associated with higher soil permeability. Connection of macropores with the soil surface and groundwater also contributes to water infiltration into soils and replenishment of groundwater. The connectivity of soil macropores influences the vertical and lateral movement of both water and solutes in soils. Continuity Interconnected soil macropores may not create continuous paths, especially across soil boundaries. The existence of dead-end pores can block or slow down water and air movement. Therefore, the continuity of soil macropores is also an influential factor in soil processes. For example, greater continuity of macropores can increase gas exchange between soil and the atmosphere and lead to better soil aeration. Continuous connection of macropores also provides extended spaces that plants can easily grow their roots into, without sacrificing aboveground biomass by allocating resources for their roots to search for new spaces in discontinued areas. Tortuosity While soil macropores can be connected continuously to form long channels between two points in a soil, these channels are mostly sinuous rather than straight.
Tortuosity is the ratio of the actual path length to the shortest distance between two points. In essence, the tortuosity of macropore paths indicates their resistance to water flow. The more sinuous the paths, the higher the resistance. This in turn affects the speed of water movement and distribution in soils. Management Soil macropores are a vital part of soil structure and their conservation is critical to sustainable management of our soil resources. This is particularly true of soils that are constantly subject to human disturbance, such as tilled agricultural fields where the shape and size of macropores can be altered by tillage. Soil macropores are easily affected by soil compaction. Compacted soils, for example in forest landings, usually have a low macropore proportion (macro-porosity) with impeded water movement. Organic matter can be incorporated into disturbed soils to improve their macro-porosity and related soil functions. See also Characterisation of pore space in soil Nanoporous materials References Hydrology Soil physics Porous media
Macropore
Physics,Chemistry,Materials_science,Engineering,Environmental_science
1,529
50,669,213
https://en.wikipedia.org/wiki/NGC%20141
NGC 141 is a barred spiral galaxy in the constellation of Pisces. Discovered by Albert Marth on August 29, 1864, it is about 525 million light-years away and is approximately 100,000 light-years across. References External links Barred spiral galaxies 0141 Astronomical objects discovered in 1864 Pisces (constellation)
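As a back-of-envelope check on those two figures (a sketch assuming the small-angle approximation; the figures themselves come from the article above):

```python
import math

# Figures from the article: diameter ~100,000 ly, distance ~525 million ly.
diameter_ly = 1.0e5
distance_ly = 5.25e8

# Small-angle approximation: theta (radians) ~ size / distance.
theta_rad = diameter_ly / distance_ly
theta_arcsec = math.degrees(theta_rad) * 3600

print(f"apparent size ~ {theta_arcsec:.0f} arcseconds")  # ~39 arcsec
```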
NGC 141
Astronomy
67
10,976,022
https://en.wikipedia.org/wiki/Euclidean%20shortest%20path
The Euclidean shortest path problem is a problem in computational geometry: given a set of polyhedral obstacles in a Euclidean space, and two points, find the shortest path between the points that does not intersect any of the obstacles. Two dimensions In two dimensions, the problem can be solved in polynomial time in a model of computation allowing addition and comparisons of real numbers, despite theoretical difficulties involving the numerical precision needed to perform such calculations. These algorithms are based on two different principles, either performing a shortest path algorithm such as Dijkstra's algorithm on a visibility graph derived from the obstacles (a simplified code sketch of this approach is given below) or (in an approach called the continuous Dijkstra method) propagating a wavefront from one of the points until it meets the other. Higher dimensions In three (and higher) dimensions the problem is NP-hard in the general case, but there exist efficient approximation algorithms that run in polynomial time based on the idea of finding a suitable sample of points on the obstacle edges and performing a visibility graph calculation using these sample points. There are many results on computing shortest paths which stay on a polyhedral surface. Given two points s and t, say on the surface of a convex polyhedron, the problem is to compute a shortest path that never leaves the surface and connects s with t. This is a generalization of the two-dimensional problem, but it is much easier than the general three-dimensional problem. Variants There are variations of this problem, where the obstacles are weighted: one can go through an obstacle, but doing so incurs an extra cost. The standard problem is the special case where the obstacles have infinite weight. This is termed the weighted region problem in the literature. See also Shortest path problem, in a graph of edges and vertices Any-angle path planning, in a grid space Notes References External links Implementation of Euclidean Shortest Path algorithm in Digital Geometric Kernel software Geometric algorithms Computational geometry
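As referenced above, here is a simplified Python sketch of the two-dimensional visibility-graph approach. It assumes obstacles are disjoint simple polygons in general position and makes no attempt to handle degenerate tangencies (e.g., a path grazing a vertex exactly); the test scenario at the bottom is illustrative only:

```python
import heapq
import math

def ccw(a, b, c):
    """Signed area test: positive if a, b, c make a counter-clockwise turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses(p, q, a, b):
    """True if segments pq and ab properly cross (at an interior point of both)."""
    d1, d2 = ccw(a, b, p), ccw(a, b, q)
    d3, d4 = ccw(p, q, a), ccw(p, q, b)
    return d1 * d2 < 0 and d3 * d4 < 0

def on_segment(p, a, b):
    """True if p lies on segment ab (collinear and inside the bounding box)."""
    return (ccw(a, b, p) == 0 and
            min(a[0], b[0]) <= p[0] <= max(a[0], b[0]) and
            min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def inside(pt, poly):
    """Ray-casting point-in-polygon test."""
    x, y, hit = pt[0], pt[1], False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit

def visible(p, q, obstacles):
    """p sees q if pq crosses no edge and its midpoint is not interior to an obstacle."""
    mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    for poly in obstacles:
        n = len(poly)
        for i in range(n):
            if crosses(p, q, poly[i], poly[(i + 1) % n]):
                return False
        # Running exactly along a polygon edge is allowed; strictly inside is not.
        if not any(on_segment(mid, poly[i], poly[(i + 1) % n]) for i in range(n)):
            if inside(mid, poly):
                return False
    return True

def shortest_path_length(s, t, obstacles):
    """Dijkstra over the visibility graph of s, t, and all obstacle vertices."""
    nodes = [s, t] + [v for poly in obstacles for v in poly]
    dist = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v in nodes:
            if v != u and visible(u, v, obstacles):
                nd = d + math.dist(u, v)
                if nd < dist.get(v, math.inf):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
    return math.inf

# A unit square blocks the straight line between the two query points,
# so the shortest path must bend around a corner (length ~3.24, not 3.0).
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(shortest_path_length((-1.0, 0.5), (2.0, 0.5), [square]))
```

The O(n^2) pairwise visibility test makes this quadratic-plus in the number of vertices; the production algorithms cited in the article are substantially more refined.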
Euclidean shortest path
Mathematics
398
57,131,412
https://en.wikipedia.org/wiki/Outline%20of%20bridges
The following outline is provided as an overview of and topical guide to bridges: Bridges – structures built to span physical obstacles without closing the way underneath, such as a body of water, valley, or road, for the purpose of providing passage over the obstacle. What type of thing is a bridge? Bridges can be described as all of the following: A structure – An arrangement and organization of interrelated elements in a material object or system, or the object or system so organized. A thoroughfare – A road connecting one location to another. Types of bridges Beam Bridge Truss Bridge Truss arch bridge Cantilever Bridge Stressed ribbon bridge Arch Bridge Tied Arch Bridge Through arch bridge Skew arch Suspension Bridge Cable-stayed bridge Simple suspension bridge Inca rope bridge Tubular bridge Extradosed bridge Moveable Bridge Drawbridge (British English definition) – the bridge deck is hinged on one end Bascule bridge – a drawbridge hinged on pins with a counterweight to facilitate raising; road or rail Rolling bascule bridge – an unhinged drawbridge lifted by the rolling of a large gear segment along a horizontal rack Folding bridge – a drawbridge with multiple sections that collapse together horizontally Curling bridge – a drawbridge with transverse divisions between multiple sections that curl vertically Fan Bridge – a drawbridge with longitudinal divisions between multiple bascule sections that rise to various angles of elevation, forming a fan arrangement. Vertical-lift bridge – the bridge deck is lifted by counterweighted cables mounted on towers; road or rail Table bridge – a lift bridge with the lifting mechanism mounted underneath it Retractable bridge (Thrust bridge) – the bridge deck is retracted to one side Submersible bridge – also called a ducking bridge, the bridge deck is lowered into the water Tilt bridge – the bridge deck, which is curved and pivoted at each end, is lifted at an angle Swing bridge – the bridge deck rotates around a fixed point, usually at the centre, but may resemble a gate in its operation; road or rail Transporter bridge – a structure high above carries a suspended, ferry-like structure Jet bridge – a passenger bridge to an airplane. One end is mobile with height, yaw, and tilt adjustments on the outboard end Guthrie rolling bridge Vlotbrug, a design of retractable floating bridge in the Netherlands Locks are implicitly bridges as well, allowing ship traffic to flow when open and at least foot traffic on top when closed Rigid-frame bridge Side-spar cable-stayed bridge Segmental bridge Multi-Level Bridges Viaduct Vierendeel bridge Toll bridge Footbridge Clapper bridge Moon bridge Step-stone bridge Zig-zag bridge Plank Boardwalk Joist Multi-way bridge Three-Way Bridge Four-Way Bridge Five-Way Bridge Trestle bridge Coal trestle Transporter bridge Log bridge Packhorse bridge Aqueduct Military Bridges AM 50 Armoured vehicle-launched bridge Bailey bridge Callender-Hamilton bridge Mabey Logistic Support Bridge Medium Girder Bridge Pontoon bridge History of bridges History of bridges General bridge concepts Bending The behavior of a slender structural element subjected to an external load applied perpendicularly to a longitudinal axis of the element. Compression (physics) The application of balanced inward ("pushing") forces to different points on a material or structure, that is, forces with no net sum or torque directed so as to reduce its size in one or more directions. Shear stress The component of stress coplanar with a material cross section.
Span (engineering) The distance between two intermediate supports for a structure. Tension (physics) The pulling force transmitted axially by means of a string, cable, chain, or similar one-dimensional continuous object, or by each end of a rod, truss member, or similar three-dimensional object; tension might also be described as the action-reaction pair of forces acting at each end of said elements. Torsion (mechanics) The twisting of an object due to an applied torque. Torque The rate of change of angular momentum of an object. Bridges companies Alabama Department of Transportation (ALDOT) Alaska Department of Transportation and Public Facilities (DOT&PF) Arizona Department of Transportation (ADOT) Arkansas State Highway and Transportation Department (AHTD) California Department of Transportation (Caltrans) Colorado Department of Transportation (CDOT) Connecticut Department of Transportation (CONNDOT) Delaware Department of Transportation (DelDOT) Florida Department of Transportation (FDOT) Georgia Department of Transportation (GDOT) Hawaii Department of Transportation (HDOT) Idaho Transportation Department (ITD) Illinois Department of Transportation (IDOT) Indiana Department of Transportation (INDOT) Iowa Department of Transportation (Iowa DOT) Kansas Department of Transportation (KDOT) Kentucky Transportation Cabinet (KYTC) Louisiana Department of Transportation and Development (DOTD) Maine Department of Transportation (MaineDOT) Maryland Department of Transportation (MDOT) Massachusetts Department of Transportation (MassDOT) Michigan Department of Transportation (MDOT) Minnesota Department of Transportation (Mn/DOT) Mississippi Department of Transportation (MDOT) Missouri Department of Transportation (MoDOT) Montana Department of Transportation (MDT) Nebraska Department of Transportation (NDOT) Nevada Department of Transportation (NDOT) New Hampshire Department of Transportation (NHDOT) New Jersey Department of Transportation (NJDOT) New Mexico Department of Transportation (NMDOT) New York New York State Bridge Authority New York State Department of Transportation (NYSDOT) New York State Thruway Authority (NYSTA) North Carolina Department of Transportation (NCDOT) North Dakota Department of Transportation (NDDOT) Ohio Department of Transportation (ODOT) Oklahoma Department of Transportation (ODOT) Oregon Department of Transportation (ODOT) Pennsylvania Department of Transportation (PennDOT) Puerto Rico Department of Transportation and Public Works (DTOP) Rhode Island Department of Transportation (RIDOT) South Carolina Department of Transportation (SCDOT) South Dakota Department of Transportation (SDDOT) Tennessee Department of Transportation (TDOT) Texas Department of Transportation (TxDOT) Utah Department of Transportation (UDOT) Vermont Agency of Transportation (VTrans) Virginia Department of Transportation (VDOT) Washington State Department of Transportation (WSDOT) West Virginia Department of Transportation (WVDOT) Wisconsin Department of Transportation (WisDOT) Wyoming Department of Transportation (WYDOT) Notable bridges Akashi Kaikyō Bridge Alcantara Bridge Brooklyn Bridge Chapel Bridge Charles Bridge Chengyang Bridge Chesapeake Bay Bridge Gateshead Millennium Bridge George Washington Bridge Golden Gate Bridge Great Belt Bridge Hangzhou Bay Bridge Mackinac Bridge Millau Viaduct Ponte Vecchio Rainbow Bridge (Niagara Falls) Rialto Bridge Royal Gorge Bridge Seri Wawasan Bridge Seven Mile Bridge Stari Most Sunshine Skyway Bridge Sydney Harbour Bridge Tacoma Narrows Bridges The
Confederation Bridge The Helix Bridge Tower Bridge Verrazano-Narrows Bridge Tsing Ma Bridge See also List of bridges References External links Bridges Bridges Bridges
Outline of bridges
Engineering
1,380
77,925,756
https://en.wikipedia.org/wiki/Nancy%20Millis%20Medal%20for%20Women%20in%20Science
The Nancy Millis Medal for Women in Science, also known as the Nancy Millis Medal, is an annual award conferred by the Australian Academy of Science. It is named in honour of Nancy Millis (1922–2012) and recognises women scientists, with eight to 15 years' experience after completing their PhD, for their outstanding contributions to research and leadership. Winners The medal was first awarded in 2014 and annually since: 2014: Emma Johnston 2015: Tamara Davis 2016: Elena Belousova 2017: Kerrie Ann Wilson 2018: Marie-Liesse Asselin-Labat 2019: Jacqueline Batley 2020: Kate Schroder and Nicole Bell 2021: Angela Moles and Cathryn Trott 2022: Vanessa Peterson 2023: Renae Ryan 2024: Anita Ho-Baillie References External links Australian Academy of Science Awards Science awards honoring women Awards established in 2014 Australian science and technology awards
Nancy Millis Medal for Women in Science
Technology
183
2,971,205
https://en.wikipedia.org/wiki/Van%20Deemter%20equation
The van Deemter equation in chromatography, named for Jan van Deemter, relates the variance per unit length of a separation column to the linear mobile phase velocity by considering physical, kinetic, and thermodynamic properties of a separation. These properties include pathways within the column, diffusion (axial and longitudinal), and mass transfer kinetics between stationary and mobile phases. In liquid chromatography, the mobile phase velocity is taken as the exit velocity, that is, the ratio of the flow rate in ml/second to the cross-sectional area of the ‘column-exit flow path.’ For a packed column, the cross-sectional area of the column exit flow path is usually taken as 0.6 times the cross-sectional area of the column. Alternatively, the linear velocity can be taken as the ratio of the column length to the dead time. If the mobile phase is a gas, then the pressure correction must be applied. The variance per unit length of the column is taken as the ratio of the column length to the column efficiency in theoretical plates. The van Deemter equation is a hyperbolic function that predicts that there is an optimum velocity at which there will be the minimum variance per unit column length and, thence, a maximum efficiency. The van Deemter equation was the result of the first application of rate theory to the chromatography elution process. Van Deemter equation The van Deemter equation relates the height equivalent to a theoretical plate (HETP) of a chromatographic column to the various flow and kinetic parameters which cause peak broadening, as follows: HETP = A + B/u + C·u Where HETP = a measure of the resolving power of the column [m] A = Eddy-diffusion parameter, related to channeling through a non-ideal packing [m] B = diffusion coefficient of the eluting particles in the longitudinal direction, resulting in dispersion [m^2 s^-1] C = Resistance to mass transfer coefficient of the analyte between mobile and stationary phase [s] u = speed [m s^-1] In open tubular capillaries, the A term will be zero as the lack of packing means channeling does not occur. In packed columns, however, multiple distinct routes ("channels") exist through the column packing, which results in band spreading. In the latter case, A will not be zero. The form of the Van Deemter equation is such that HETP achieves a minimum value at a particular flow velocity. At this flow rate, the resolving power of the column is maximized, although in practice, the elution time is likely to be impractical. Differentiating the van Deemter equation with respect to velocity, setting the resulting expression equal to zero, and solving for the optimum velocity yields the following: u_opt = √(B/C) Plate count The plate height, given as H = L/N with the column length L and the number of theoretical plates N, can be estimated from a chromatogram by analysis of the retention time tR for each component and its standard deviation σ as a measure for peak width, provided that the elution curve represents a Gaussian curve. In this case the plate count is given by: N = (tR/σ)^2 By using the more practical peak width at half height w1/2 the equation is: N = 5.54 (tR/w1/2)^2 or with the width at the base of the peak wb: N = 16 (tR/wb)^2 Expanded van Deemter The Van Deemter equation can be further expanded to: H = 2λdp + 2γDm/u + ω(dp or dc)^2·u/Dm + R·df^2·u/Ds Where: H is plate height λ is particle shape (with regard to the packing) dp is particle diameter γ, ω, and R are constants Dm is the diffusion coefficient of the mobile phase dc is the capillary diameter df is the film thickness Ds is the diffusion coefficient of the stationary phase u is the linear velocity
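A small numerical illustration of the optimum just derived (the A, B, C values and the column length below are arbitrary assumptions for the sketch, not measured constants):

```python
import math

# Assumed van Deemter coefficients (illustrative only):
A = 1.0e-5   # eddy-diffusion term, m
B = 2.0e-9   # longitudinal diffusion term, m^2/s
C = 5.0e-3   # mass-transfer resistance term, s

def hetp(u: float) -> float:
    """Plate height H = A + B/u + C*u for linear velocity u (m/s)."""
    return A + B / u + C * u

# Setting dH/du = -B/u^2 + C = 0 gives the optimum velocity:
u_opt = math.sqrt(B / C)             # ~6.3e-4 m/s with these numbers
h_min = A + 2 * math.sqrt(B * C)     # minimum plate height at u_opt

L = 0.25                             # assumed column length, m
print(f"u_opt = {u_opt:.2e} m/s, H_min = {h_min:.2e} m, N = {L / h_min:.0f} plates")
```

Running the velocity slightly above u_opt raises H only gently (the C·u branch of the hyperbola), which is why practical separations are often run faster than the strict optimum.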
Rodrigues equation The Rodrigues equation, named for Alírio Rodrigues, is an extension of the Van Deemter equation used to describe the efficiency of a bed of permeable (large-pore) particles. The equation is: HETP = A + B/u + C·f(λ)·u where f(λ) = (3/λ)(1/tanh(λ) − 1/λ) is the enhancement factor due to intraparticle convection and λ is the intraparticular Péclet number. See also Resolution (chromatography) Jan van Deemter References Chromatography Equations
Van Deemter equation
Chemistry,Mathematics
834
78,543,490
https://en.wikipedia.org/wiki/Cloud-init
cloud-init is a software tool developed by Canonical. It is used to perform the initial setup of virtual machines in cloud computing, handling first-boot tasks such as configuring users, SSH keys, and packages from user-supplied configuration data. References External links Canonical (company)
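For concreteness, a minimal sketch of the kind of user-data file cloud-init consumes at first boot; the hostname, user name, and key below are placeholders, not values from any real deployment:

```yaml
#cloud-config
# Hypothetical first-boot configuration consumed by cloud-init.
hostname: demo-vm            # placeholder hostname
users:
  - name: alice              # placeholder user
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...placeholder... alice@example
packages:
  - nginx                    # packages installed on first boot
runcmd:
  - systemctl enable --now nginx
```

The cloud platform passes this file to the instance as user data, and cloud-init applies it once during the instance's first boot.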
Cloud-init
Technology
36
70,611,659
https://en.wikipedia.org/wiki/Ormanc%C4%B1k%2C%20Savur
Ormancık is a neighbourhood in the municipality and district of Savur, Mardin Province of Turkey. The village is populated by Kurds of the Dereverî tribe and had a population of 31 in 2021. History On 21 January 1994 the village was reportedly attacked with grenades by the PKK. Nineteen people, among them nine women, six children and four village guards, were killed in what Human Rights Watch described as a "massacre." There is speculation that the event was a chemical attack. References Kurdish settlements in Mardin Province Neighbourhoods in Savur District Massacres in Turkey Massacres in 1994 1994 murders in Turkey Kurdistan Workers' Party attacks Chemical weapons attacks
Ormancık, Savur
Chemistry
139
7,387,672
https://en.wikipedia.org/wiki/Yuri%20Filipchenko
Yuri Aleksandrovich Filipchenko, sometimes Philipchenko (1882–1930), was a Russian entomologist who coined the terms microevolution and macroevolution and was the mentor of geneticist Theodosius Dobzhansky. Though he himself was an orthogeneticist, he was one of the first scientists to incorporate the laws of Mendel into evolutionary theory and thus had a great influence on The Modern Synthesis. He established a genetics laboratory in Leningrad undertaking experimental work with Drosophila melanogaster. Theodosius Dobzhansky worked with him from 1924. Filipchenko is also known for his work in Soviet eugenics, though his work in the subject would later result in his public denunciation due to the rise of Stalinism and increased criticisms that eugenics represented bourgeois science. Biography Early life and education Yuri Filipchenko was born on February 13, 1882, in Zlyn' in Bolkhovsky District of the Russian Empire. His father was Aleksandr Efimovich, a landowner and agriculturalist. Filipchenko also had a brother by the name of Aleksandr Aleksandrovich, who would later become a parasitologist and physician. He received his secondary education at the Second Saint Petersburg Classical Gymnasium. In 1897, Filipchenko read Darwin’s On the Origin of Species and Sexual Selection for the first time. Two years later, he would read Carl Nägeli’s Mechanisch-physiologische Theorie der Abstammungslehre. These two works would later have a powerful formative influence on Filipchenko and helped to steer him towards a career in zoology. Filipchenko graduated from Second Saint Petersburg in 1900, but due to a variety of financial difficulties that were further complicated by his father's death, he entered the Military Medical Academy. However, Filipchenko soon transferred to the natural science division at Saint Petersburg State University only a year after entering the academy. Filipchenko was arrested in December 1905 due to being present at a meeting of the Soviet Workers' Deputies, but was released shortly afterwards. However, Filipchenko was arrested later the same month after helping to organize workers in the Nevsky District of Saint Petersburg, serving four months in prison, during which he studied philosophy and prepared for government examinations. Though he would later join the Schlisselburg Committee, which assisted with the plight of political prisoners, and the Socialist Revolutionary Party, Filipchenko stepped away from politics after 1906 to focus his attention on scientific pursuits. After graduating from Saint Petersburg State University's Zoology Department in 1906, Filipchenko was accepted to Saint Petersburg State University's Zoology and Comparative Anatomy Master's program in 1910. He pursued comparative embryology for his candidate's thesis due to his interest in the presentation and evolution of physical characteristics in animals. By engaging in a project that allowed him to compare embryonic development in higher-level taxa (i.e. classes, orders, etc.), Filipchenko gained a broader perspective on inheritance that would later inform his ideas on macroevolution. Career Filipchenko created the first department of genetics in Russia at Saint Petersburg State University in 1919, which would, by 1921, become the Bureau of Eugenics at the Russian Academy of Sciences in Saint Petersburg.
In later years, the Bureau would be renamed the Bureau of Genetics and Eugenics in 1925 and finally the Laboratory of Genetics in 1930, but regardless of its name, the work of the institution would go on to form the foundation of the Institute of Genetics at the USSR Academy of Sciences. However, in the wake of the first five-year plan, Filipchenko was publicly castigated for his work in orthogenesis and in eugenics and was relieved of his position at Saint Petersburg State University in 1930. His Laboratory of Genetics and Experimental Zoology was disbanded shortly afterward. Personal life Filipchenko was married to Nadezhda Pavlovna, with whom he had a son by the name of Gleb, who was a physicist. Both Nadezhda and Gleb were killed during the blockade of Leningrad in World War II. Death Filipchenko developed a severe headache whilst working at Peterhof, and, concerned about his health, traveled to Leningrad to be taken care of by his brother Aleksandr. While in Leningrad, Filipchenko contracted streptococcal meningitis and later died at midnight between May 19 and May 20, 1930. His head was donated to Bekhterev’s Brain Institute for research, while the rest of his remains were buried in Smolensky Cemetery. Scientific career Variabilität und Variation In his 1927 German text Variabilität und Variation, Filipchenko introduced the idea of two separate forms of evolution: evolution within a species, or microevolution, and evolution that occurs in higher taxonomic categories, which he termed macroevolution. While microevolution was governed by a system of inheritance dictated by genetics, Filipchenko based macroevolution on cytoplasmic variability rather than genetic inheritance. Views on Darwin Though evolution was embraced by many Russian biologists in Filipchenko's day, there did exist elements of opposition to Darwin's ideas, most commonly in the form of "direct evolution," or orthogenesis. While Filipchenko self-identified as a Darwinist, he only did so in the sense that he believed in the idea of evolution. He did not subscribe to the belief that Darwin's concept of natural selection was as integral to the process of evolution as Darwin espoused, instead positing that evolution was not governed by the principles of Lamarck or natural selection, but rather was intrinsic to life itself. Filipchenko believed that evolution in animals and plants was an inherent developmental process rather than a change induced over successive generations, a process that an organism's environment can affect, but only indirectly. Involvement in Eugenics Filipchenko's investigations into genetics, craniometry, the inheritance of quantitative characters, and neurology eventually introduced him to ideas on eugenics that were being developed by his contemporaries in the United States and Europe. These ideas on eugenics proved so powerful to Filipchenko that he himself began to write papers and give lectures on the subject in 1918. Filipchenko would later go on to form the Russian Eugenics Society in Moscow in 1920, as well as the Bureau of Eugenics in February 1921, an independent eugenics research institution in Petrograd. Ultimately, Filipchenko, along with Nikolai Koltsov, would become the main leaders of the Russian eugenics movement. Filipchenko was drawn to eugenics due to both its potential to be used as a "civic religion" and its promise of a better future for the Soviets, but also because of the immense amount of funding directed towards eugenics as a result of the Soviet government's interest in the subject.
Eugenics seemed to be the practical application of genetics to human health, and since this dovetailed with the Soviet penchant for scientific social planning, Soviet institutions like the Commissariat of Public Health poured funding into the subject. Filipchenko and his Bureau of Eugenics created charts of the pedigrees of various Soviet academics and intellectuals in an attempt to ascertain the location of "race" within an individual. But Filipchenko was staunchly against Bolshevik ideas regarding the sterilization of undesirables and mass insemination of women by men with exceptional genetics, stating that such acts were "crude assaults on the human person" and that the best way to create a "desirable breed" was through positive selection. In Filipchenko's eyes, eugenic progress could only be achieved through education rather than legislative or scientific methods. However, by 1925, the appeal of Soviet eugenics had waned due to issues outside of just the negative aspects of the subject. A great controversy arose regarding the compatibility of genetics, and by extension eugenics, with Marxist science. Filipchenko, in an attempt to defend eugenics’ relevance to Marxist dialectic, argued against Lamarckism, the rival theory of inheritance that some Soviet scientists held to be more compatible with the tenets of Marxism. He reasoned that if Lamarckism were true, the negative qualities it associated with poverty and the lower class would have prevented them from rising up against the bourgeoisie in the first place. Despite eugenics surviving the conflict between genetics and Lamarckism, Filipchenko's work in eugenics was effectively cut short with the emergence of the Great Break in 1929. During this period, eugenics was referred to as a “bourgeois doctrine,” and as such the USSR would become the first country to officially ban the subject. Filipchenko's work in the subject would later be one of the key reasons for his dismissal from Saint Petersburg in 1930. Impact Filipchenko was the first professor in Russia to introduce genetics at the collegiate level due to his annual course on inheritance at Petersburg University, which he started teaching in 1913. He was also the first to publish a textbook on the subject of inheritance and genetics in Russia, which was called Nasledstvennost'. His articles and textbooks on inheritance were some of the first entry points for Russian biologists like Dobzhansky to modern genetics, and it is for this reason that Soviet botanist and historian Peter Zhukovsky once called Filipchenko "the teacher of our youth." Published works During his career, Filipchenko published more than 100 works in Russian, 20 works in German, and 4 works in French, often under the name "J.A. Philiptschenko." Below are a few of the articles he put out in his lifetime. Razvitie izotomy (The development of isotomes; St. Petersburg, 1912) Izmenchivost’ i evoliutsiia (Variation and evolution; Petrograd and Moscow, 1915; 2nd ed., Petersburg, 1921) Proiskhozhdenie domashnykh zhivotnykh (The origin of domesticated animals; Petrograd, 1916; 2nd ed., Leningrad, 1924) Nasledstvennost’ (Heredity; Moscow, 1917; 2nd ed., 1924; 3rd ed., 1926) Chto takoe evgenika? (What is eugenics?; Petrograd, 1921) Kak nasleduetsia razlichnye osobennosti cheloveka (How various human traits are inherited; Petrograd, 1921) Izmenchivost' i metody ee izucheniia (Variation and methods for its study; Petrograd, 1923; 2nd ed.,
Leningrad, 1926; 3rd ed., 1927; 4th ed., Moscow and Leningrad, 1929) Obshchedostupnaia biologiia (Biology for the general reader; Petrograd, 1923; 15th ed., 1930) Evoliutsionnaia ideia v biologii (The evolutionary idea in biology; Moscow, 1923; 2nd ed., 1926; 3rd ed., 1977) Puti uluchsheniia chelovecheskogo roda (evgenika) (Ways of improving the human race [eugenics]; Leningrad, 1924) "Frensis Gal’ton i Gregor Mendel" (Francis Galton and Gregor Mendel; Moscow, 1925) Besedy o zhivykh sushchestvakh (Conversations about living beings; Leningrad, 1925) Nasledstvenny li priobretennye priznaki? (Are acquired characteristics inherited?; Leningrad, 1925) Variabilität und Variation (Berlin, 1927) I, Rasteniia (Plants; Leningrad, 1927) II, Zhivotnye (Animals; Leningrad, 1928) Genetika i ee znachenie dlia zhivotnovodstva (Genetics and its significance for animal breeding; Moscow and Leningrad, 1931) Eksperimental'naia zoologiia (Experimental zoology; Leningrad and Moscow, 1932) Genetika miagkikh pshenits (The genetics of soft wheats; Moscow and Leningrad, 1934) References People from Oryol Oblast Russian entomologists 1882 births 1930 deaths Soviet entomologists Orthogenesis Russian scientists
Yuri Filipchenko
Biology
2,538
4,582,891
https://en.wikipedia.org/wiki/CER-200
CER (Cifarski Elektronski Računar – Digital Electronic Computer) model 200 is an early digital computer developed by the Mihajlo Pupin Institute (Serbia) in 1966. See also CER Computers Mihajlo Pupin Institute History of computer hardware in the SFRY References One-of-a-kind computers CER computers
CER-200
Technology
64
1,764,804
https://en.wikipedia.org/wiki/Aminal
In organic chemistry, an aminal or aminoacetal is a functional group or type of organic compound that has two amine groups attached to the same carbon atom: R2C(NR2)2. (As is customary in organic chemistry, R can represent hydrogen or an alkyl group). A common aminal is bis(dimethylamino)methane, a colorless liquid that is prepared by the reaction of dimethylamine and formaldehyde: 2 (CH3)2NH + CH2O → [(CH3)2N]2CH2 + H2O Aminals are encountered in, for instance, the Fischer indole synthesis. Several examples exist in nature. Hexahydro-1,3,5-triazine ((CH2NH)3), an intermediate in the condensation of formaldehyde and ammonia, tends to degrade to hexamethylenetetramine. Cyclic aminals can be obtained by the condensation of a diamine and an aldehyde. Imidazolidines are one class of these cyclic aminals. See also Acetal Hemiaminal References Functional groups
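One way to see the N–C–N definition concretely is to match it as a substructure with an open-source cheminformatics toolkit. This sketch assumes RDKit is installed and uses the bis(dimethylamino)methane example from the text; the SMARTS pattern is my own illustrative choice, not a canonical aminal definition:

```python
from rdkit import Chem

# Bis(dimethylamino)methane: a CH2 carbon bearing two dimethylamino groups.
mol = Chem.MolFromSmiles("CN(C)CN(C)C")

# Illustrative aminal pattern: an sp3 carbon flanked by two trivalent nitrogens.
aminal = Chem.MolFromSmarts("[NX3][CX4][NX3]")

print(mol.HasSubstructMatch(aminal))                        # True
print(Chem.MolFromSmiles("CCO").HasSubstructMatch(aminal))  # ethanol: False
```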
Aminal
Chemistry
201
1,119,342
https://en.wikipedia.org/wiki/Dynamic%20nuclear%20polarization
Dynamic nuclear polarization (DNP) is one of several hyperpolarization methods developed to enhance the sensitivity of nuclear magnetic resonance (NMR) spectroscopy. While an essential analytical tool with applications in several fields, NMR’s low sensitivity poses major limitations to analyzing samples with low concentrations and limited masses and volumes. This low sensitivity is due to the relatively low nuclear gyromagnetic ratios (γn) of NMR active nuclei (1H, 13C, 15N, etc.) as well as the low natural abundance of certain nuclei. Several techniques have been developed to address this limitation, including hardware adjustments to NMR instruments and equipment (e.g., NMR tubes), improvements to data processing methods, and polarization transfer methods to NMR active nuclei in a sample—under which DNP falls. Overhauser was the first to hypothesize and describe the DNP effect, in 1953; later that year, Carver and Slichter observed the effect in experiments using metallic lithium. DNP involves transferring the polarization of electron spins to neighboring nuclear spins using microwave irradiation at or near electron paramagnetic resonance (EPR) transitions. It is based on two fundamental concepts: first, that the electronic gyromagnetic ratio (γe) is several orders of magnitude larger than γn (about 658 times larger for 1H; see below), and second, that the relaxation of electron spins is much faster than that of nuclear spins. For a spin-1/2 ensemble the equilibrium polarization is P0 = tanh(γħB0/2kBT), where P0 is the Boltzmann equilibrium spin polarization. Note that the alignment of electron spins at a given magnetic field and temperature is described by the Boltzmann distribution under thermal equilibrium. A larger gyromagnetic ratio corresponds to a larger Boltzmann population difference between spin states; through DNP, the larger population distribution in the electronic spin reservoir is transferred to the neighboring nuclear spin reservoir, leading to stronger NMR signal intensities. The larger γ and faster relaxation of electron spins also help shorten T1 relaxation times of nearby nuclei, corresponding to stronger signal intensities. Under ideal conditions (full saturation of electron spins and dipolar coupling without leakage to nuclear spins), the NMR signal enhancement for protons can at most be 659. This corresponds to a time-saving factor of 434,000 for a solution-phase NMR experiment. In general, the DNP enhancement parameter η is defined as η = I/I0, where I is the signal intensity of the nuclear spins when the electron spins are saturated and I0 is the signal intensity of the nuclear spins when the electron spins are in equilibrium. DNP methods typically fall under one of two categories: continuous wave DNP (CW-DNP) and pulsed DNP. As their names suggest, these methods differ in whether the sample is continuously irradiated or pulsed with microwaves. When electron spin polarization deviates from its thermal equilibrium value, polarization transfers between electrons and nuclei can occur spontaneously through electron-nuclear cross relaxation or spin-state mixing among electrons and nuclei. For example, polarization transfer is spontaneous after a homolysis chemical reaction. On the other hand, when the electron spin system is in a thermal equilibrium, the polarization transfer requires continuous microwave irradiation at a frequency close to the corresponding EPR frequency.
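To put numbers on the γe/γn argument above, here is a short sketch using standard physical constants; the field strength and temperature are arbitrary choices for illustration:

```python
import math

# Gyromagnetic ratios (rad s^-1 T^-1), standard values.
gamma_e = 1.760859e11   # free electron
gamma_h = 2.675222e8    # 1H nucleus

print(f"gamma_e / gamma_H = {gamma_e / gamma_h:.0f}")   # ~658

# Boltzmann spin-1/2 polarization: P = tanh(gamma * hbar * B0 / (2 * kB * T)).
hbar, kB = 1.054572e-34, 1.380649e-23
B0, T = 9.4, 100.0      # assumed: 9.4 T magnet, 100 K sample

for name, g in (("1H", gamma_h), ("e-", gamma_e)):
    p = math.tanh(g * hbar * B0 / (2 * kB * T))
    print(f"P({name}) at {B0} T, {T} K = {p:.2e}")
```

The electron polarization comes out roughly three orders of magnitude larger than the proton polarization under the same conditions, which is the surplus that DNP transfers to the nuclei.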
It is also possible that electrons are aligned to a higher degree of order by other preparations of electron spin order such as chemical reactions (known as chemical-induced DNP or CIDNP), optical pumping, and spin injection. A polarizing agent (PA)—either an endogenous or exogenous paramagnetic system in the sample—is required as part of the DNP experimental setup. Typically, PAs are stable free radicals that are dissolved in solution or doped in solids; they provide a source of unpaired electrons that can be polarized by microwave radiation near the EPR transitions. DNP can also be induced using unpaired electrons produced by radiation damage in solids. Common PAs include stable nitroxide radicals such as TEMPO and biradicals such as TOTAPOL. Described below are the four different mechanisms by which the DNP effect operates: the Overhauser effect (OE), the solid effect (SE), the cross effect (CE), and thermal mixing (TM). The DNP effect is present in solids and liquids and has been utilized successfully in solid-state and solution-phase NMR experiments. For solution-phase NMR experiments, only the OE mechanism is relevant, whereas for solid-state NMR any of the four mechanisms can be employed depending on the specific experimental conditions utilized. The first DNP experiments were performed in the early 1950s at low magnetic fields but until recently the technique was of limited applicability for high-frequency, high-field NMR spectroscopy because of the lack of microwave (or terahertz) sources operating at the appropriate frequency. Today, such sources are available as turn-key instruments, making DNP a valuable and indispensable method especially in the field of structure determination by high-resolution solid-state NMR spectroscopy. Mechanisms Overhauser effect DNP was first realized using the concept of the Overhauser effect, which is the perturbation of nuclear spin level populations observed in metals and free radicals when electron spin transitions are saturated by microwave irradiation. This effect relies on stochastic interactions between an electron and a nucleus. The word "dynamic" was initially meant to highlight the time-dependent and random interactions in this polarization transfer process. The DNP phenomenon was theoretically predicted by Albert Overhauser in 1953 and initially drew some criticism from Norman Ramsey, Felix Bloch, and other renowned physicists of the time on the grounds of being "thermodynamically improbable". The experimental confirmation by Carver and Slichter as well as an apologetic letter from Ramsey both reached Overhauser in the same year. The so-called electron-nucleus cross-relaxation, which is responsible for the DNP phenomenon, is caused by rotational and translational modulation of the electron-nucleus hyperfine coupling. The theory of this process is based essentially on the second-order time-dependent perturbation theory solution of the von Neumann equation for the spin density matrix. While the Overhauser effect relies on time-dependent electron-nuclear interactions, the remaining polarizing mechanisms rely on time-independent electron-nuclear and electron-electron interactions. Solid effect The simplest spin system exhibiting the SE DNP mechanism is an electron-nucleus spin pair. The Hamiltonian of the system can be written as: H0 = ωeSz + ωnIz + ASzIz + BSzIx These terms refer respectively to the electron and nucleus Zeeman interactions with the external magnetic field, and to the hyperfine interaction.
S and I are the electron and nuclear spin operators in the Zeeman basis (spin 1/2 considered for simplicity), ωe and ωn are the electron and nuclear Larmor frequencies, and A and B are the secular and pseudo-secular parts of the hyperfine interaction. For simplicity we will only consider the case of |A|,|B| << |ωn|. In such a case A has little effect on the evolution of the spin system. During DNP, MW irradiation is applied at a frequency ωMW and intensity ω1, resulting in a rotating frame Hamiltonian given by H = ΔωeSz + ωnIz + ASzIz + BSzIx + ω1Sx, where Δωe = ωe − ωMW. The MW irradiation can excite the electron single quantum transitions ("allowed transitions") when ωMW is close to ωe, resulting in a loss of the electron polarization. In addition, due to the small state mixing caused by the B term of the hyperfine interaction, it is possible to irradiate on the electron-nucleus zero quantum or double quantum ("forbidden") transitions around ωMW = ωe ± ωn, resulting in polarization transfer between the electrons and the nuclei. The effective MW irradiation on these transitions is approximately given by Bω1/2ωn. Static sample case In a simple picture of an electron-nucleus two-spin system, the solid effect occurs when a transition involving an electron-nucleus mutual flip (called zero quantum or double quantum) is excited by microwave irradiation, in the presence of relaxation. This kind of transition is in general weakly allowed, meaning that the transition moment for the above microwave excitation results from a second-order effect of the electron-nuclear interactions and thus requires stronger microwave power to be significant, and its intensity is decreased by an increase of the external magnetic field B0. As a result, the DNP enhancement from the solid effect scales as B0^-2 when all the relaxation parameters are kept constant. Once this transition is excited and the relaxation is acting, the magnetization is spread over the "bulk" nuclei (the major part of the detected nuclei in an NMR experiment) via the nuclear dipole network. This polarizing mechanism is optimal when the exciting microwave frequency shifts up or down by the nuclear Larmor frequency from the electron Larmor frequency in the discussed two-spin system. The direction of the frequency shift corresponds to the sign of the DNP enhancement. The solid effect exists in most cases but is more easily observed if the linewidth of the EPR spectrum of the involved unpaired electrons is smaller than the nuclear Larmor frequency of the corresponding nuclei. Magic angle spinning case In the case of magic angle spinning DNP (MAS-DNP), the mechanism is different but to understand it, a two-spin system can still be used. The polarization process of the nucleus still occurs when the microwave irradiation excites the double quantum or zero quantum transition, but due to the fact that the sample is spinning, this condition is only met for a short time at each rotor cycle (which makes the process periodic). The DNP process in that case happens step by step and not continuously as in the static case. Cross effect Static case The cross effect requires two unpaired electrons as the source of high polarization. Without special conditions, such a three-spin system can only generate a solid effect type of polarization. However, when the resonance frequencies of the two electrons are separated by the nuclear Larmor frequency, and when the two electrons are dipolar coupled, another mechanism occurs: the cross-effect.
Cross effect

Static case
The cross effect requires two unpaired electrons as the source of high polarization. Without special conditions, such a three-spin system can only generate solid effect-type polarization. However, when the resonance frequencies of the two electrons are separated by the nuclear Larmor frequency, and when the two electrons are dipolar coupled, another mechanism occurs: the cross effect. In that case, the DNP process is the result of irradiation of an allowed (single quantum) transition, and as a result less microwave power is required than for the solid effect. In practice, the correct EPR frequency separation is accomplished through the random orientation of paramagnetic species with g-anisotropy. Since the "frequency" distance between the two electrons should be equal to the Larmor frequency of the targeted nucleus, the cross effect can occur only if the inhomogeneously broadened EPR lineshape has a linewidth broader than the nuclear Larmor frequency. Therefore, as this linewidth is proportional to the external magnetic field B0, the overall DNP efficiency (or the enhancement of the nuclear polarization) scales as B0−1. This remains true as long as the relaxation times remain constant. Usually, going to higher field leads to longer nuclear relaxation times, and this may partially compensate for the reduction of line broadening. In practice, in a glassy sample, the probability of having two dipolar-coupled electrons separated by the nuclear Larmor frequency is very small. Nonetheless, this mechanism is so efficient that it can be observed experimentally, alone or in addition to the solid effect.

Magic angle spinning case
As in the static case, the cross-effect mechanism under MAS is deeply modified by the time-dependent energy levels. By taking a simple three-spin system, it has been demonstrated that the cross-effect mechanism is different in the static and MAS cases. The cross effect is the result of a very fast multi-step process involving an EPR single quantum transition, an electron dipolar anti-crossing, and cross-effect degeneracy conditions. In the simplest case, the MAS-DNP mechanism can be explained by the combination of a single quantum transition followed by the cross-effect degeneracy condition, or by the electron dipolar anti-crossing followed by the cross-effect degeneracy condition. This in turn dramatically changes the dependence of the CE on the static magnetic field, which no longer scales as B0−1, and makes the CE much more efficient than the solid effect.

Thermal mixing
Thermal mixing is an energy exchange phenomenon between the electron spin ensemble and the nuclear spins, which can be thought of as using multiple electron spins to provide enhanced nuclear polarization. Note that the electron spin ensemble acts as a whole because of strong inter-electron interactions. The strong interactions lead to a homogeneously broadened EPR lineshape of the involved paramagnetic species. The linewidth is optimized for polarization transfer from electrons to nuclei when it is close to the nuclear Larmor frequency. The optimization is related to an embedded three-spin (electron-electron-nucleus) process that mutually flips the three coupled spins while conserving (mainly) the Zeeman energy. Due to the inhomogeneous component of the associated EPR lineshape, the DNP enhancement by this mechanism also scales as B0−1.

DNP-NMR enhancement curves
Many types of solid materials can exhibit more than one mechanism for DNP. Some examples are carbonaceous materials such as bituminous coal and charcoal (wood or cellulose heated at high temperatures above the decomposition point, which leaves a residual solid char). To separate out the mechanisms of DNP and to characterize the electron-nuclear interactions occurring in such solids, a DNP enhancement curve can be made.
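Before turning to how such curves are measured, a toy model can make their shape concrete. The sketch below is an added illustration, not from the text: the Gaussian lineshapes, amplitudes, and sign conventions are all assumptions. It superimposes two solid-effect lobes at microwave offsets of plus and minus the proton Larmor frequency on an Overhauser feature centred at zero offset.

```python
# Toy enhancement curve: solid-effect lobes at +/- the 1H Larmor frequency
# plus an Overhauser feature at zero microwave offset (all values arbitrary).
import numpy as np
import matplotlib.pyplot as plt

wH = 14.0                                # 1H Larmor frequency (a.u.)
offset = np.linspace(-40.0, 40.0, 801)   # microwave offset w - we (a.u.)

def gauss(x, x0, s):
    return np.exp(-0.5 * ((x - x0) / s) ** 2)

se = gauss(offset, -wH, 3.0) - gauss(offset, +wH, 3.0)  # solid effect lobes
oe = 0.6 * gauss(offset, 0.0, 5.0)                      # Overhauser effect

plt.plot(offset, se + oe)
plt.axvline(-wH, ls=":")
plt.axvline(+wH, ls=":")
plt.xlabel("microwave offset (a.u.)")
plt.ylabel("1H enhancement (a.u.)")
plt.title("Toy DNP enhancement curve (SE + OE)")
plt.show()
```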
A typical enhancement curve is obtained by measuring the maximum intensity of the NMR FID of the 1H nuclei, for example, in the presence of continuous microwave irradiation as a function of the microwave frequency offset. Carbonaceous materials such as cellulose char contain large numbers of stable free electrons delocalized in large polycyclic aromatic hydrocarbons. Such electrons can give large polarization enhancements to nearby protons via proton-proton spin diffusion, provided they are not so close together that the electron-nuclear dipolar interaction broadens the proton resonance beyond detection. For small isolated clusters, the free electrons are fixed and give rise to solid-state enhancements (SS). The maximal proton solid-state enhancement is observed at microwave offsets of ω ≈ ωe ± ωH, where ωe and ωH are the electron and nuclear Larmor frequencies, respectively. For larger and more densely concentrated aromatic clusters, the free electrons can undergo rapid electron exchange interactions. These electrons give rise to an Overhauser enhancement centered at a microwave offset of ω − ωe = 0. The cellulose char also exhibits electrons undergoing thermal mixing effects (TM). While the enhancement curve reveals the types of electron-nuclear spin interactions in a material, it is not quantitative, and the relative abundance of the different types of nuclei cannot be determined directly from the curve.

DNP-NMR
DNP can be performed not only to enhance NMR signals but also to introduce an inherent spatial dependence: the magnetization enhancement takes place in the vicinity of the irradiated electrons and propagates throughout the sample. Spatial selectivity can finally be obtained using magnetic resonance imaging (MRI) techniques, so that signals from otherwise similar regions can be separated based on their location in the sample.

DNP has triggered enthusiasm in the NMR community because it can enhance sensitivity in solid-state NMR. In DNP, a large electronic spin polarization is transferred onto the nuclear spins of interest using a microwave source. There are two main DNP approaches for solids. If the material does not contain suitable unpaired electrons, exogenous DNP is applied: the material is impregnated with a solution containing a specific radical. When possible, endogenous DNP is performed using the electrons of transition metal ions (metal-ion dynamic nuclear polarization, MIDNP) or conduction electrons. The experiments usually need to be performed at low temperatures with magic angle spinning. It is important to note that DNP has so far been performed only ex situ, as it usually requires low temperatures to slow electronic relaxation.

References

Further reading
Review articles
Books
Carson Jeffries, "Dynamic Nuclear Orientation", New York, Interscience Publishers, 1963
Anatole Abragam and Maurice Goldman, "Nuclear Magnetism: Order and Disorder", New York: Oxford University Press, 1982
Tom Wenckebach, "Essentials of Dynamic Nuclear Polarization", Spindrift Publications, The Netherlands, 2016
Special issues
Dynamic Nuclear Polarization: New Experimental and Methodology Approaches and Applications in Physics, Chemistry, Biology and Medicine, Appl. Magn. Reson., 2008. 34(3–4)
High field dynamic nuclear polarization – the renaissance, Phys. Chem. Chem. Phys., 2010. 12(22)
Blogs
The DNP-NMR blog
Chemical physics Nuclear magnetic resonance
Dynamic nuclear polarization
Physics,Chemistry
3,471
8,088,302
https://en.wikipedia.org/wiki/Residual%20chemical%20shift%20anisotropy
Residual chemical shift anisotropy (RCSA) is the difference between the chemical shift anisotropy (CSA) of aligned and non-aligned molecules. It is normally three orders of magnitude smaller than the static CSA, with values on the order of parts per billion (ppb). RCSA is useful for structural determination and is a relatively recent development in NMR spectroscopy. See also Residual dipolar coupling References Further reading Nuclear magnetic resonance spectroscopy Nuclear chemistry Nuclear physics Asymmetry
Residual chemical shift anisotropy
Physics,Chemistry,Astronomy
103
22,495,354
https://en.wikipedia.org/wiki/SN%202005gl
SN 2005gl was a supernova in the barred-spiral galaxy NGC 266. It was discovered using CCD frames taken October 5, 2005, from the 60 cm automated telescope at the Puckett Observatory in Georgia, US, and reported by Tim Puckett in collaboration with Peter Ceravolo. It was independently identified by Yasuo Sano in Japan. The supernova was located 29.8 arcseconds east and 16.7 arcseconds north of the galactic core. Based upon its spectrum, this was classified as a Type IIn core-collapse supernova. It has a redshift of z = 0.016, the same as that of the host galaxy. Using archived images from the Hubble Space Telescope, a candidate progenitor star was identified. This is believed to have been a luminous blue variable (LBV), similar to Eta Carinae, with an absolute magnitude of −10.3 and a surface temperature of about 13,000 K. There was a small probability that the source was instead located in a compact cluster of stars, but the association with the LBV has since been reliably established. References External links Light curves and spectra on the Open Supernova Catalog Supernovae Luminous blue variables Pisces (constellation)
SN 2005gl
Chemistry,Astronomy
255
48,825,442
https://en.wikipedia.org/wiki/Acoustic%20trauma
Acoustic trauma is an injury to the eardrum sustained as a result of a very loud noise. Its scope usually covers loud noises of short duration, such as an explosion, a gunshot or a burst of loud shouting. Quieter sounds that are concentrated in a narrow frequency band may also cause damage to specific frequency receptors. The range of severity can vary from pain to hearing loss.

Acute acoustic trauma can be treated by combining hyperbaric oxygen therapy (HBO) with corticosteroids. Acute noise exposure causes inflammation and a lower oxygen supply in the inner ear. Corticosteroids hinder the inflammatory reaction and HBO provides an adequate oxygen supply. This therapy has been shown to be effective when initiated within three days after acoustic trauma. Therefore, this condition is considered an ENT emergency.

Signs and symptoms
Hazardous noise causes injury to the hearing mechanisms in the inner ear. Acoustic trauma may result in sensorineural hearing loss (SNHL) that is either temporary (temporary threshold shift, TTS) or permanent (permanent threshold shift, PTS). A TTS will resolve with time; although the time frame for hearing recovery is unique to each case, any SNHL that persists beyond eight weeks after injury is most likely permanent and should be considered a PTS. Signs and symptoms include:
Hearing loss
Tinnitus (ringing in the ear)
Aural fullness (ear fullness)
Recruitment (ear pain with loud noise)
Difficulty localizing sounds
Difficulty hearing against a noisy background
Vertigo

Causes
Acoustic trauma is an injury to the inner ear that is often caused by exposure to a high-decibel noise. This injury can occur after exposure to a single, loud noise or from exposure to noise at significant decibel levels over a longer period of time. Many cases include a period of reduced hearing after exposure to loud sounds, for example after a concert, a visit to a discotheque, or work with noisy equipment. This kind of hearing impairment is often temporary and resolves after some recovery time.

Loudness levels relative to the threshold of hearing (decibels, dB):
0–30: faint
40–60: moderate to quiet
70–90: very noisy
100–120: painful
130–180: intolerable
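Since the decibel scale above is logarithmic, equal steps in dB correspond to multiplicative jumps in sound intensity. The short helper below is an added illustration using the standard acoustics definition and the conventional 1e-12 W/m² reference intensity; it is not taken from this article.

```python
# Convert a sound level in dB to acoustic intensity in W/m^2, relative to
# the conventional threshold-of-hearing reference intensity of 1e-12 W/m^2.
def db_to_intensity(level_db: float, i_ref: float = 1e-12) -> float:
    return i_ref * 10.0 ** (level_db / 10.0)

for level in (30, 60, 90, 120, 180):
    print(f"{level:3d} dB -> {db_to_intensity(level):.0e} W/m^2")
# Each 10 dB step multiplies the intensity by 10: 120 dB carries a million
# times the intensity of 60 dB.
```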
Pathophysiology
Acoustic trauma occurs when a continuous or transient sound transfers enough energy to the cochlea to result in necrosis of the outer hair cells (OHC) and inner hair cells (IHC) and to cause glutamate excitotoxicity of first-order afferent neurons of the spiral ganglion (cochlear synaptopathy). This can occur when an impact or impulse sound, like an explosion, occurs abruptly. When excessive, this force can lead to cellular metabolic overload, cell damage and cell death; the force of the transient sound exceeds the elastic limit of the tissues. The organ of Corti can be sheared off the basilar membrane when the sound coming through the ear canal, middle ear and cochlea exceeds 132 dB. If the sound is more intense than 184 dB, the eardrum is ruptured. Levels of 184 dB and above usually come from military sound exposures, such as the explosion of an IED (improvised explosive device). Exposure to a shock wave can not only rupture the eardrum but also cause ossicular discontinuities. A powerful explosion or blast can also cause traumatic brain injury, and as a result a person could have an auditory processing disability. Lung injuries can develop, as can injuries to the viscera. Once exposure to damaging noise levels is discontinued, further significant progression of hearing loss stops. Individual susceptibility to noise-induced hearing loss varies greatly, but the reason that some people are more resistant to it while others are more susceptible is not well understood.

Diagnosis
The diagnosis is based on the history of exposure to loud noise. Audiometry is used to detect signs of acoustic trauma: sounds of varying loudness and tone are presented in order to assess carefully what can and cannot be heard.

Treatment and prevention
There are various treatment methods available, depending on how severe the acoustic trauma is. Acoustic trauma cannot currently be reversed; the goal of treatment is to protect the ear from further damage. Possible preventive measures and treatment methods include:
Hyperbaric oxygen therapy: only when the case is extremely serious
Corticosteroid drugs: anti-inflammatory medication
Eardrum repair
Technological assistance for hearing loss, such as hearing aids
Ear protection: earplugs, earmuffs and other kinds of devices that protect hearing
Staying away from events and environments where the noise is considerably louder than usual

Prognosis
Each episode of acoustic trauma results in permanent damage within the inner ear, even though in the majority of patients the symptoms will disappear and an audiogram will show normal hearing within a few hours to a few days. In some cases, the changes seen in the audiogram will only partially improve or will remain permanent. One of the signs and symptoms of acoustic trauma is tinnitus, and this may persist for a long time; in some cases, tinnitus may become a permanent condition. There are no specific studies on life expectancy or statistical data for the prognosis of acoustic trauma. Overall, the prognosis is quite difficult to predict, depending on how powerful the noise was and on the degree of severity.

Epidemiology
The prevalence depends on environmental factors. Acoustic trauma is quite common during military service and during hunting activities, where it is mainly associated with gun sports and particularly accidental shots. Of teenagers, 20–50 percent experience exposure to noise levels high enough to cause acute acoustic trauma. Hearing loss due to noise is the second most common sensorineural hearing loss, after age-related hearing loss (presbycusis). Of more than 28 million Americans with some degree of hearing impairment, as many as 10 million have hearing loss caused in part by excessive noise exposure in the workplace or during recreational activities.

See also
Tinnitus
Hearing loss
Noise-induced hearing loss
Safe listening
Hearing protection devices
Acoustic epidemiology

References
External links
Hearing Diseases of the ear and mastoid process Audiology Occupational hazards Acoustics
Acoustic trauma
Physics
1,284
51,778,056
https://en.wikipedia.org/wiki/Amanita%20virgineoides
Amanita virgineoides, known as the false virgin's lepidella, is a species of fungus in the genus Amanita. Description The basidiocarps are medium-sized to large. The cap is wide, convex to applanate, sometimes concave, and white, covered with white, conical to pyramid volval remnants high and wide. The cap margin is smooth and appendiculate, and the context is white and unchanging. The gills are free to subfree and white to cream; the short gills are attenuate. The stipe is × , subcylindric or slightly attenuate upwards, white, covered with white floccose squamules; the context is white; the stipe's basal bulb is wide, ventricose, ovoid to subglobose, with its upper part covered with white, verrucose to granular volval remnants. The annulus is white; its upper surface bears fine, radial striations; and its lower surface, verrucose to conical warts. The annulus is often broken during expansion of the cap. The spores measure 8–10 × 6–7.5 μm and are broadly ellipsoid to ellipsoid and amyloid. Clamps are common at bases of basidia. See also List of Amanita species References virgineoides Fungus species
Amanita virgineoides
Biology
293
30,469,350
https://en.wikipedia.org/wiki/Textbook%20of%20Biochemistry
Textbook of Biochemistry, first published in 1928, is a scientific textbook authored by Alexander Thomas Cameron. The textbook became a standard of its field and, by 1948, had gone through six editions, in addition to one Chinese and two Spanish editions.

Publication
Textbook of Biochemistry consists entirely of lecture manuscripts given by the author, Alexander Thomas Cameron, over several years. Cameron had lectured at the University of Manitoba since 1909, but was never a fluent speaker. To compensate for this, he would write out his lectures in full. Cameron was encouraged by students and friends to submit his lecture manuscripts for publication. The textbook's first edition was published with a preface by Swale Vincent, Professor of Physiology at the University of London.

Structure
Textbook of Biochemistry is divided into the following chapters:
Introduction. Introduction to the concept of biochemistry, and a review of catalytic reactions and pH.
Food-Stuffs, Their Derivatives and Related Substances. Ideas regarding carbohydrates, lipids, and proteins.
The Chemistry of Digestion, the Circulation, and the Excreta. The importance of bacterial and chemical activity in organisms.
Intermediate Metabolism. The chemistry of tissues, intracellular synthesis, products of metabolism, and vitamins.
The Chemistry of Reproduction; The Chemical Controlling Agencies of the Organism. The agents governing metabolic processes.
Quantitative Metabolism.
Addenda. A review of the present status of immunological biochemistry, and applications of biochemistry in industry.

Reception
Treat B. Johnson, writing for the Journal of Chemical Education, acknowledged the difficulty of concisely covering the rapidly growing field of biochemistry, but concluded that Cameron had "done quite well." He described Textbook of Biochemistry as "not a book that follows the ordinary logical procedure usually associated with such texts," and complimented Cameron on a "dogmatic treatment which is really stimulating." The British Medical Journal also gave a favourable review, writing that "the busy medical student will find this book a concise account of the facts with which he is expected to become familiar." However, it also observed that the book contains several statements that are "definitely not in agreement with the facts as at present known." The reviewer contradicted, for example, the book's assertions that urinary ammonia is formed in the kidneys from urea, and that pepsin does not attack the CO-NH links in proteins.

Textbook of Biochemistry, being the first concise and authoritative work in its field, became a standard text. By 1948, it had gone through six editions, in addition to one Chinese and two Spanish editions.

References
External links
Full text of A Textbook of Biochemistry (1st edition) at Internet Archive
Biochemistry textbooks Chemistry books 1928 non-fiction books 1928 in biology
Textbook of Biochemistry
Chemistry
544
21,215,479
https://en.wikipedia.org/wiki/FIPS%20199
FIPS 199 (Federal Information Processing Standard Publication 199, Standards for Security Categorization of Federal Information and Information Systems) is a United States Federal Government standard that establishes security categories of information systems used by the Federal Government, one component of risk assessment. FIPS 199 and FIPS 200 are mandatory security standards as required by FISMA. FIPS 199 requires Federal agencies to assess their information systems in each of the confidentiality, integrity, and availability categories, rating each system as low, moderate, or high impact in each category. The most severe rating from any category becomes the information system's overall security categorization. External links https://doi.org/10.6028/NIST.FIPS.199 https://csrc.nist.gov/publications/detail/fips/199/final. NIST link for FIPS 199. Computer security standards
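The "high water mark" rule described above is simple enough to express in a few lines of code. The sketch below is an added illustration of that rule only; the function and value names are illustrative assumptions, not taken from the standard itself.

```python
# FIPS 199 high water mark: rate each of confidentiality, integrity and
# availability as low/moderate/high; the overall security category of the
# system is the most severe of the three ratings.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def overall_category(confidentiality: str, integrity: str, availability: str) -> str:
    return max((confidentiality, integrity, availability),
               key=lambda rating: LEVELS[rating])

print(overall_category("low", "moderate", "low"))  # moderate
print(overall_category("low", "low", "high"))      # high
```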
FIPS 199
Technology,Engineering
183
10,468,961
https://en.wikipedia.org/wiki/Elliptic%20boundary%20value%20problem
In mathematics, an elliptic boundary value problem is a special kind of boundary value problem which can be thought of as the steady state of an evolution problem. For example, the Dirichlet problem for the Laplacian gives the eventual distribution of heat in a room several hours after the heating is turned on.

Differential equations describe a large class of natural phenomena, from the heat equation describing the evolution of heat in (for instance) a metal plate, to the Navier-Stokes equation describing the movement of fluids, including Einstein's equations describing the physical universe in a relativistic way. Although all these equations are boundary value problems, they are further subdivided into categories. This is necessary because each category must be analyzed using different techniques. The present article deals with the category of boundary value problems known as linear elliptic problems.

Boundary value problems and partial differential equations specify relations between two or more quantities. For instance, in the heat equation, the rate of change of temperature at a point is related to the difference of temperature between that point and the nearby points, so that, over time, the heat flows from hotter points to cooler points. Boundary value problems can involve space, time and other quantities such as temperature, velocity, pressure, magnetic field, etc.

Some problems do not involve time. For instance, if one hangs a clothesline between the house and a tree, then in the absence of wind, the clothesline will not move and will adopt a gentle hanging curved shape known as the catenary. This curved shape can be computed as the solution of a differential equation relating position, tension, angle and gravity, but since the shape does not change over time, there is no time variable.

Elliptic boundary value problems are a class of problems which do not involve the time variable, and instead depend only on space variables.

The main example
In two dimensions, let x, y be the coordinates. We will use the subscript notation ux = ∂u/∂x and uxx = ∂²u/∂x² for the first and second partial derivatives of u with respect to x, and a similar notation for y. We define the gradient ∇u = (ux, uy), the Laplace operator Δu = uxx + uyy, and the divergence div(v) = (v1)x + (v2)y of a vector field v = (v1, v2). Note from the definitions that div(∇u) = Δu.

The main example for boundary value problems is the Laplace operator,
Δu = f in Ω,
u = 0 on ∂Ω,
where Ω is a region in the plane and ∂Ω is the boundary of that region. The function f is the known data and the solution u is what must be computed. The solution can be interpreted as the stationary or limit distribution of heat in a metal plate shaped like Ω, with its boundary kept at zero degrees. The function f represents the intensity of heat generation at each point in the plate. After waiting for a long time, the temperature distribution in the metal plate will approach u.
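As a hedged numerical illustration of this main example (added here; the grid size and the choice f = −1, which acts as a uniform heat source under this sign convention, are assumptions, not from the article), the problem can be solved on the unit square with the standard 5-point finite-difference stencil:

```python
# Solve the model problem  Δu = f in Ω, u = 0 on ∂Ω  on the unit square,
# using the 5-point finite-difference approximation of the Laplacian.
import numpy as np

n = 50                            # interior grid points per direction
h = 1.0 / (n + 1)

# 1-D second-difference matrix; 2-D Laplacian via a Kronecker sum
T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
L = np.kron(T, np.eye(n)) + np.kron(np.eye(n), T)

f = -np.ones(n * n)               # uniform heat generation (f = -1)
u = np.linalg.solve(L, f).reshape(n, n)

print(f"maximum of u: {u.max():.4f}")   # about 0.074, attained at the centre
```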
Second-order linear problems
In general, a boundary-value problem (BVP) consists of a partial differential equation (PDE) subject to a boundary condition. For now, second-order PDEs subject to a Dirichlet boundary condition will be considered. Let Ω be an open, bounded subset of Rn, denote its boundary as ∂Ω and the variables as x = (x1, ..., xn). Representing the PDE as a partial differential operator L acting on an unknown function u = u(x) of x ∈ Ω results in a BVP of the form
Lu = f in Ω,
u = 0 on ∂Ω,
where f is a given function, and the operator L is either of the form
Lu = −Σi,j (aij uxi)xj + Σi bi uxi + cu (divergence form)
or
Lu = −Σi,j aij uxixj + Σi bi uxi + cu (nondivergence form)
for given coefficient functions aij, bi, c. The PDE is said to be in divergence form in the case of the former and in nondivergence form in the case of the latter. If the functions aij are continuously differentiable, then both cases are equivalent after adjusting the first-order coefficients bi. In matrix notation, we can let a(x) = (aij(x)) be an n × n matrix-valued function of x and b(x) be an n-dimensional column vector-valued function of x, and then we may write the divergence form as
Lu = −div(a∇u) + b·∇u + cu.
One may assume, without loss of generality, that the matrix a is symmetric (that is, aij(x) = aji(x) for all i, j, x). We make that assumption in the rest of this article.

We say that the operator L is elliptic if, for some constant α > 0, any of the following equivalent conditions hold:
λmin(a(x)) ≥ α for all x (see eigenvalue);
uT a(x) u ≥ α uT u for all u ∈ Rn;
Σi,j aij(x) ui uj ≥ α Σi ui² for all u ∈ Rn.
If the second-order partial differential operator L is elliptic, then the associated BVP is called an elliptic boundary-value problem.

Boundary conditions
The above BVP is a particular example of a Dirichlet problem. The Neumann problem is
Lu = f in Ω and uν = g on ∂Ω,
where uν is the derivative of u in the direction of the outwards pointing normal of ∂Ω. In general, if B is any trace operator, one can construct the boundary value problem
Lu = f in Ω and Bu = g on ∂Ω.
In the rest of this article, we assume that L is elliptic and that the boundary condition is the Dirichlet condition u = 0 on ∂Ω.

Sobolev spaces
The analysis of elliptic boundary value problems requires some fairly sophisticated tools of functional analysis. We require the space H1(Ω), the Sobolev space of "once-differentiable" functions on Ω, such that both the function u and its partial derivatives ux1, ..., uxn are all square integrable. That is:
H1(Ω) = { u ∈ L2(Ω) : uxi ∈ L2(Ω) for i = 1, ..., n }.
There is a subtlety here in that the partial derivatives must be defined "in the weak sense" (see the article on Sobolev spaces for details). The space H1(Ω) is a Hilbert space, which accounts for much of the ease with which these problems are analyzed. Unless otherwise noted, all derivatives in this article are to be interpreted in the weak, Sobolev sense. We use the term "strong derivative" to refer to the classical derivative of calculus. We also specify that the spaces Ck consist of functions that are k times strongly differentiable, and that the kth derivative is continuous.

Weak or variational formulation
The first step to cast the boundary value problem into the language of Sobolev spaces is to rephrase it in its weak form. Consider the Laplace problem Δu = f. Multiply each side of the equation by a "test function" φ and integrate by parts using Green's theorem to obtain
∫Ω Δu φ dx = ∫∂Ω uν φ ds − ∫Ω ∇u·∇φ dx = ∫Ω f φ dx.
We will be solving the Dirichlet problem, so that u = 0 on ∂Ω. For technical reasons, it is useful to assume that φ is taken from the same space of functions as u, so we also assume that φ = 0 on ∂Ω. This gets rid of the boundary term, yielding
(*) A(u, φ) = F(φ),
where
A(u, φ) = ∫Ω ∇u·∇φ dx and F(φ) = −∫Ω f φ dx.
If L is a general elliptic operator, the same reasoning leads to the bilinear form
A(u, φ) = ∫Ω (a∇u)·∇φ + (b·∇u) φ + c u φ dx.
We do not discuss the Neumann problem, but note that it is analyzed in a similar way.

Continuous and coercive bilinear forms
The map A(u, φ) is defined on the Sobolev space H1_0(Ω) ⊂ H1(Ω) of functions which are once differentiable and zero on the boundary ∂Ω, provided we impose some conditions on aij, bi, c. There are many possible choices, but for the purpose of this article, we will assume that
aij is continuously differentiable on the closure of Ω for i, j = 1, ..., n;
bi is continuous on the closure of Ω for i = 1, ..., n;
c is continuous on the closure of Ω; and
c is bounded.
The reader may verify that the map A(u, φ) is furthermore bilinear and continuous, and that the map F(φ) is linear in φ, and continuous if (for instance) f is square integrable.

We say that the map A is coercive if there is an α > 0 such that, for all u ∈ H1_0(Ω),
A(u, u) ≥ α ‖u‖²H1(Ω).
This is true for the Laplacian (thanks to Poincaré's inequality) and is also true for an elliptic operator if we assume b = 0 and c ≥ 0. (Recall that λmin(a(x)) ≥ α > 0 when L is elliptic.)
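To make the weak formulation concrete, the sketch below (an added illustration; the one-dimensional setting, mesh size, and right-hand side f = −1 are assumptions, not from the article) assembles the bilinear form A(u, φ) = ∫ u′φ′ dx for piecewise-linear "hat" test functions and solves the resulting linear system for u″ = f on (0, 1) with zero boundary values, whose exact solution is u(x) = x(1 − x)/2:

```python
# Piecewise-linear finite elements for the weak problem (*):
# A(u, phi) = integral(u' phi') = -integral(f phi) = F(phi),
# discretizing u'' = f on (0, 1) with u(0) = u(1) = 0 and f = -1.
import numpy as np

n = 100                      # number of elements
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)

# stiffness matrix K[i, j] = A(phi_i, phi_j) for interior hat functions
K = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h

f = -np.ones(n - 1)          # f = -1 sampled at the interior nodes
F = -f * h                   # F(phi_i) = -integral(f phi_i) ~ -f(x_i) * h

u = np.linalg.solve(K, F)
exact = nodes[1:-1] * (1.0 - nodes[1:-1]) / 2.0
print(f"max |u - exact| = {np.abs(u - exact).max():.1e}")
# ~1e-13: the nodal values are exact for this quadratic solution.
```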
Existence and uniqueness of the weak solution
One may show, via the Lax–Milgram lemma, that whenever A(·, ·) is coercive and F(·) is continuous, there exists a unique solution u ∈ H1_0(Ω) to the weak problem (*). If further A(·, ·) is symmetric (i.e., b = 0), one can show the same result using the Riesz representation theorem instead. This relies on the fact that A(·, ·) then forms an inner product on H1_0(Ω), which itself depends on Poincaré's inequality.

Strong solutions
We have shown that there is a u ∈ H1_0(Ω) which solves the weak system, but we do not know if this u solves the strong system
Lu = f in Ω,
u = 0 on ∂Ω.
Even more vexing is that we are not even sure that u is twice differentiable, rendering the second-derivative expressions in Lu apparently meaningless. There are many ways to remedy the situation, the main one being regularity.

Regularity
A regularity theorem for a linear elliptic boundary value problem of the second order takes the form
Theorem. If (some condition), then the solution u is in H2(Ω), the space of "twice differentiable" functions whose second derivatives are square integrable.
There is no known simple condition necessary and sufficient for the conclusion of the theorem to hold, but the following conditions are known to be sufficient:
The boundary of Ω is C2; or
Ω is convex.
It may be tempting to infer that if ∂Ω is piecewise C2 then u is indeed in H2(Ω), but that is unfortunately false.

Almost everywhere solutions
In the case that u ∈ H2(Ω), the second derivatives of u are defined almost everywhere, and in that case Lu = f almost everywhere.

Strong solutions
One may further prove that if the boundary of Ω is a smooth manifold and f is infinitely differentiable in the strong sense, then u is also infinitely differentiable in the strong sense. In this case, Lu = f holds with the strong definition of the derivative. The proof of this relies upon an improved regularity theorem that says that if ∂Ω is Ck+2 and f ∈ Hk(Ω), k ≥ 0, then u ∈ Hk+2(Ω), together with a Sobolev imbedding theorem saying that functions in Hk(Ω) are also in Cm whenever 0 ≤ m < k − n/2.

Numerical solutions
While in exceptional circumstances it is possible to solve elliptic problems explicitly, in general it is an impossible task. The natural solution is to approximate the elliptic problem with a simpler one and to solve this simpler problem on a computer. Because of the good properties we have enumerated (as well as many we have not), there are extremely efficient numerical solvers for linear elliptic boundary value problems (see finite element method, finite difference method and spectral method for examples).

Eigenvalues and eigensolutions
Another Sobolev imbedding theorem states that the inclusion H1(Ω) ⊂ L2(Ω) is a compact linear map. Equipped with the spectral theorem for compact linear operators, one obtains the following result.
Theorem. Assume that A(·, ·) is coercive, continuous and symmetric. The solution map S : f ↦ u from L2(Ω) to L2(Ω) is a compact linear map. It has a basis of eigenvectors u1, u2, ... ∈ H1(Ω) and matching eigenvalues λ1, λ2, ... such that λk → 0 as k → ∞, each λk > 0, ⟨uj, uk⟩ = 0 whenever j ≠ k, and ‖uk‖ = 1 for all k (orthonormality in L2(Ω)).

Series solutions and the importance of eigensolutions
If one has computed the eigenvalues and eigenvectors, then one may find the "explicit" solution of (*),
u = Σk û(k) uk,
via the formula
û(k) = λk f̂(k), where f̂(k) = ∫Ω f uk dx.
(See Fourier series.) The series converges in L2(Ω). Implemented on a computer using numerical approximations, this is known as the spectral method.

An example
Consider the problem −uxx = f on the interval (0, π) (Dirichlet conditions u(0) = u(π) = 0). The reader may verify that the eigenvectors are exactly uk(x) = sin(kx), k = 1, 2, 3, ..., with eigenvalues λk = 1/k². The Fourier (sine-series) coefficients f̂(k) of f can be looked up in a table. Therefore, the series above yields the solution
u(x) = Σk f̂(k)/k² sin(kx).
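A hedged sketch of the spectral method just described follows (added here; the concrete choice f(x) = x and the truncation at 50 modes are illustrative assumptions, not the article's example): expand f in the Dirichlet eigenvectors sin(kx) on (0, π) and divide each coefficient by the eigenvalue k² of −d²/dx².

```python
# Spectral method for -u'' = f on (0, pi) with u(0) = u(pi) = 0:
# u = sum_k fhat(k) / k^2 * sin(k x), fhat(k) the sine coefficients of f.
import numpy as np

N, K = 20_000, 50                 # quadrature points, number of modes kept
x = np.linspace(0.0, np.pi, N)
dx = x[1] - x[0]

f = x                             # illustrative right-hand side f(x) = x
fhat = [(2.0 / np.pi) * np.sum(f * np.sin(k * x)) * dx
        for k in range(1, K + 1)]

u = sum(c / k**2 * np.sin(k * x) for k, c in enumerate(fhat, start=1))

exact = (np.pi**2 * x - x**3) / 6.0   # exact solution of -u'' = x
print(f"max error with {K} modes: {np.abs(u - exact).max():.1e}")  # ~4e-4
```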
Maximum principle
There are many variants of the maximum principle. We give a simple one.

Theorem. (Weak maximum principle.) Let u ∈ C²(Ω) ∩ C(Ω̄), and assume that c = 0 in Ω. Say that Lu ≤ 0 in Ω. Then maxΩ̄ u = max∂Ω u. In other words, the maximum is attained on the boundary. A strong maximum principle would conclude that u(x) < max∂Ω u for all x ∈ Ω unless u is constant.

Notes
References
Mathematical analysis Partial differential equations Boundary value problems
Elliptic boundary value problem
Mathematics
2,153
32,191,953
https://en.wikipedia.org/wiki/Mr.%20Louie
Mr. Louie is a former self-elevating drilling barge (jackup rig) converted into an oil platform. It was the first self-elevating drilling barge classed by the American Bureau of Shipping. As an oil platform, it operates at the Saltpond Oil Field, offshore Ghana.

Description
Mr. Louie weighs 6200 tons. Its minimal operational water depth is . Five tugs pulled it from site to site, and it has twelve legs for standing on the seabed. It has rings welded onto its cylindrical legs to provide a positive jack connection. Its footing equivalent diameter is , and its approximate footing load is .

History
Mr. Louie was designed by Emile Brinkmann between 1956 and 1958. The drilling barge was built by Universal Drilling Co. It was launched in 1958 and delivered in 1959. In 1958, Mr. Louie became the first self-elevating drilling barge classed by the American Bureau of Shipping. In 1959, it was leased to Reading & Bates (now part of Transocean). The rig was valued by the leasing contract at US$4.75 million. This transaction was later challenged by the United States tax authorities as a sale agreement. In 1965, the barge was sold, pursuant to a contractual option, to Reading & Bates.

Mr. Louie first drilled in the Gulf of Mexico, where it drilled more than 40 wells. Later it was transferred to the North Sea. In 1963, while it was drilling in the German Bight, the well struck a pocket of very high-pressure carbon dioxide, causing a blowout. The blowout created a wide and deep crater called Figge-Maar. In May 1964, Mr. Louie drilled the first offshore hole in the North Sea, off Juist island. In June, it made the first North Sea gas discovery. Later it was used for natural gas exploration in the UK section of the North Sea. In 1967, Mr. Louie took part in an operation that was unique for its time: for the first time in the North Sea, a rig went to dock for repairs and maintenance and was replaced by another rig (Orion) during the drilling. After the structural repairs and maintenance work at Bremerhaven, Mr. Louie continued drilling in the North Sea for the Gas Council – Amoco group.

After the North Sea, Mr. Louie was moved to West Africa. In 1969, it passed through Gibraltar. Temporary moorings were needed, and setting them into the rocky floor of Gibraltar Bay required the use of the Edwardian air lock diving-bell plant to work at depth. Between 1977 and 1978 it drilled six appraisal wells at the Saltpond Oil Field in offshore Ghana. After completing the drilling in 1978, Mr. Louie was converted into an oil platform at this field. It was officially renamed APG-1.

See also
Sea Gem
Sea Quest

References
1958 ships Jack-up rigs Oil platforms Transocean
Mr. Louie
Chemistry,Engineering
580
52,264,448
https://en.wikipedia.org/wiki/NGC%20357
NGC 357 is a barred lenticular or spiral galaxy in the constellation Cetus. It was discovered on September 10, 1785, by William Herschel. It was described by Dreyer as "faint, small, irregularly round, suddenly brighter middle, 14th magnitude star 20 arcsec to northeast." See also List of NGC objects (1–1000) References External links SEDS 0357 3768 17850910 Cetus Discoveries by William Herschel Barred lenticular galaxies
NGC 357
Astronomy
98
11,819,900
https://en.wikipedia.org/wiki/Postia%20tephroleuca
Postia tephroleuca, also known as greyling bracket, is a species of fungus in the family Fomitopsidaceae infecting broad-leaved trees, typically beech and plane. References Fungi described in 1821 Fungal plant pathogens and diseases Fomitopsidaceae Taxa named by Elias Magnus Fries Fungus species
Postia tephroleuca
Biology
68
10,403,265
https://en.wikipedia.org/wiki/Moritz%20Wagner%20%28naturalist%29
Moritz Wagner (Bayreuth, 3 October 1813 – Munich, 31 May 1887) was a German explorer, collector, geographer and natural historian.

Wagner devoted three years (1836–1839) to the exploration of Algiers. It was here that he made an important observation in natural history, which he later supplemented and developed: that geographical isolation could play a key role in speciation. From 1852 to 1855, together with Carl Scherzer, Wagner travelled through North and Central America and the Caribbean. In May 1843, Wagner toured the Lake Sevan region of Armenia with Armenian writer Khachatur Abovian. He committed suicide in Munich, aged 73. His brother Rudolf was a physiologist and anatomist.

Wagner's significance in evolutionary biology
Wagner's early career was as a geographer, and he published a number of geographical books about North Africa, the Middle East, and Tropical America. He was also a keen naturalist and collector, and it is for this work he is best known among biologists. Ernst Mayr, the evolutionist and historian of biology, has given an account of Wagner's significance (pp. 562–565). However, others disagree with this account.

During his three years in Algeria, he (amongst other activities) studied the flightless beetles Pimelia and Melasoma. In these genera, a number of species are each confined to a stretch of the north coast between rivers which descend from the Atlas Mountains to the Mediterranean. As soon as one crosses a river, a different but closely related species appears. Wagner made similar observations in the Caucasus and in the Andean valleys, leading him to conclude, after the Origin of Species had been published: "... an incipient species will only [arise] when a few individuals transgress the limiting borders of their range... the formation of a new race will never succeed... without a long continued separation of the colonists from the other members of their species."

This was an early description of the process of geographic speciation by means of the founder effect. Another formulation of this idea came later: "Organisms which never leave their ancient area of distribution will never change". Wagner's idea met with a mixed reception. "Unfortunately, Wagner combined [his idea] with some peculiar ideas on variation and selection" (Mayr). The leading evolutionists (Darwin, Wallace, Weismann) attacked Wagner's idea of geographic speciation, and it suffered a long decline until in 1942 it was reintroduced by Mayr. The importance of geographic speciation became one of the core ideas of the evolutionary synthesis.

Criticism
Some modern experts, such as Ernst Mayr, Jerry Coyne and H. Allen Orr, argue that Wagner pioneered the idea of geographical speciation, and that Darwin had not appreciated it. However, Wagner's "migration theory" was based on a rather simple, Lamarckian idea of evolution. Wagner argued in letters to Darwin that the latter had missed a vital geographic component in understanding the evolution of new species. Darwin at first responded in a friendly way to these letters, agreed that geographic isolation was important (although not the only cause of speciation), and pointed out that he had in fact dealt with geographic speciation in The Origin of Species. Wagner in his later articles totally rejected the importance of natural selection. He again pointed out the importance of intercrossing in preventing divergence, and thus of geographic separation in allowing divergence.
Wagner argued that Darwin had not understood this, although these ideas are present in The Origin of Species. Darwin found Wagner's increasingly hysterical tone and one-sided argument upsetting, and wrote across his copy of Wagner's 1875 paper "most wretched rubbish."

As well as Darwin, the Reverend J.T. Gulick also found Wagner's theories overstated. Gulick was apparently responding to David Starr Jordan, who approved of Wagner's geographic speciation ideas in a paper which is often cited as providing early support of geographical speciation. Jordan later wrote a brief note of correction agreeing with some of Gulick's criticisms: "Mr. Gulick corrects certain erroneous assumptions on the part of Dr. Moritz Wagner. Mr. Gulick says: Separate generation is a necessary condition for divergent evolution but not for the transformation of all the survivors of a species in one way. Separation does not necessarily imply any external barriers or even the occupation of separate districts. Diversity of natural selection is not necessary to diversity of evolution. Difference of external conditions is not necessary to diversity of evolution. Separation and variation—that is, variation not overwhelmed by crossing—is all that is necessary to secure divergence of type in the descendants of one stock, though external conditions remain the same and though the separation is other than geological. ... All of this is in general accord with my own experience."

In a later paper Gulick says that "Moritz Wagner, in his 'Law of the Migration of Organisms,' was the first to insist on the importance of geographical isolation as a factor in evolution, but when he asserted that without geographical isolation natural selection could have no effect in producing new species he went beyond what could be sustained by facts".

Mayr's formulation has been argued to have cleared up issues which Wagner had left unresolved: "A new species develops if a population which has become isolated from its parental species acquires during this period of isolation characters which promote or guarantee isolation when the external barriers break down". The zoological taxonomist Bernhard Rensch was also significant in keeping geographical speciation on the evolutionary menu. He identified geographical separation as the most frequent initial step towards cladogenesis (phylogenetic branching). However, a variety of species concepts compete with Mayr's isolation concept of species today, and so Mayr's account can no longer be accepted as the gold standard.

The importance of Wagner's insight is highly debatable today, as it is clear that geographical isolation is not the only mechanism which causes species-splitting. Furthermore, it is generally accepted that natural selection is the most important cause of speciation, even when the geographical milieu is in isolation. There is room for debate as to whether Charles Darwin had reached a similar conclusion at the same time. The Origin of Species was published nearly twenty years after Wagner's first account, but more relevant is the evidence of his notebooks. The evidence of Darwin's notebooks (which were not published until the mid-20th century) shows a "clear description of reproductive isolation, maintained by ethological [behavioural] isolating mechanisms" (p. 266); the same ideas are also present in The Origin of Species, but are often not recognized as such by modern biologists.
On the other hand, there is no single example in the notebooks quite so clear as Wagner's flightless beetles. Much of the good in Wagner's ideas is masked by his other, mistaken, beliefs, but his inferences about geographical speciation were important insights gained by observation of insects in their natural habitats. "It took more than 60 years after 1859 until the leading specialists... [agreed] that this geographical approach was the way to solve the problem of speciation... a new species may evolve when a population acquires isolating mechanisms while isolated from its parent population." But again, see Sulloway's article: speciation is not just about geography; more importantly, it requires splitting that endures in spite of geographic overlap.

Legacy
Moritz Wagner is commemorated in the scientific name of a species of venomous snake, Montivipera wagneri.

Publications
Reisen in der Regentschaft Algier in den Jahren 1836, 1837 und 1838. 3 Bde. Leipzig 1841.
Der Kaukasus und das Land der Kosaken. 2 Bde. Leipzig 1847.
Reise nach Kolchis. Leipzig 1850.
Reise nach dem Ararat und dem Hochlande Armeniens. Stuttgart 1848.
Reise nach Persien und dem Lande der Kurden. 2 Bde. Leipzig 1851.
Die Republik Costa-Rica. Leipzig 1856.
Über die hydrographischen Verhältnisse und das Vorkommen der Süßwasserfische in den Staaten Panama und Ecuador. Abhandlungen der königlich bayerischen Akademie der Wissenschaften, II Classe 11 (I Abt.)
Reisen in Nordamerika in den Jahren 1852 und 1853. (with Carl Scherzer) 3 vols, Gotha 1861.
Die Darwinsche Theorie und das Migrationsgesetz der Organismen. Leipzig 1868. English edition: Wagner M. 1873. The Darwinian theory and the law of the migration of organisms. Translated by J.L. Laird, London.
Naturwissenschaftliche Reisen im tropischen Amerika. Stuttgart 1870.
Über den Einfluß der geographischen Isolierung und Kolonienbildung auf die morphologischen Veränderungen der Organismen. München 1871.
Die Entstehung der Arten durch räumliche Sonderung. [The origin of species by spatial separation] Gesammelte Aufsätze. Benno Schwalbe, Basel 1889.

References

External links
Short biography
Discussion of Wagner's views on species and speciation, and links to publications
Short biography

1813 births 1887 deaths 19th-century German explorers German naturalists Proto-evolutionary biologists Suicides in Germany Burials at the Alter Nordfriedhof (Munich) German explorers of Africa Explorers of the Caucasus
Moritz Wagner (naturalist)
Biology
1,983
65,101,829
https://en.wikipedia.org/wiki/Green%20grabbing
Green grabbing or green colonialism is the foreign land grabbing and appropriation of resources for environmental purposes, resulting in a pattern of unjust development. The purposes of green grabbing are varied; it can be done for ecotourism, conservation of biodiversity or ecosystem services, for carbon emission trading, or for biofuel production. It involves governments, NGOs, and corporations, often working in alliances. Green grabs can result in local residents' displacement from land where they live or make their livelihoods. It is considered to be a subtype of green imperialism. Definition and purpose "Green grabbing" was first coined in 2008 by journalist John Vidal, in a piece that appeared in The Guardian called "The great green land grab". Social anthropologist Melissa Leach notes that it "builds on well-known histories of colonial and neo-colonial resource alienation in the name of the environment". Green grabbing is a more specific form of land grabbing, in which the motive of the land grab is for environmental reasons. Green grabbing can be done for conservation of biodiversity or ecosystem services, carbon emission trading, or for ecotourism. Conservation groups might encourage members of the public to donate money to "adopt" an acre of land, which goes towards land acquisition. Companies who engage in carbon emission trading might employ a green grab to plant trees—the resulting carbon offset can then be sold or traded. One program, Reducing emissions from deforestation and forest degradation (REDD+), compensates companies and countries for conserving forests, though the definition of forest also includes forest plantations consisting of a single tree species (monoculture). Green grabbing can also be done for the production of biofuels. Biofuel production efforts, led by the US and European Union, have been a main driver of land grabbing in general. The International Land Coalition states that 59% of land grabs between 2000 and 2010 were because of biofuels. Occurrence and affected parties Green grabbing primarily affects smallholders, and leads to various forms of injustice, conflict, and displacement. Confiscation of land by both local and foreign companies, as well as by rural elites and government bodies, in the name of environmental reasons, often worsens existing vulnerabilities and inequalities in these communities. Areas most vulnerable to green grabs are those in poor economic conditions, developing countries, or on indigenous land. Indebted governments may be especially vulnerable to green grabs, as they may agree to privatize and sell public assets to avoid bankruptcy. Green grabs involve large tracts of land consisting of thousands or millions of hectares. Green grabs have occurred in Africa, Latin America, and Southeast Asia. Environmental activists and critics have also warned that the Green New Deal and COP26 could exacerbate green colonialism. The indigenous Sámi community of northern Scandinavia, as well as Norwegian and Swedish activists, have accused the Norwegian government of green colonialism because of the construction of wind farms on Sámi land. Actors Modern green grabs are often enacted through alliances between national elites, government agencies, and private actors. Examples can include international environmental policy institutions, multi-national corporations, and non-governmental organizations (NGOs). 
These varied actors align to achieve common goals; for example, ecotourism initiatives can result in the alignment of tourism companies, conservation groups, and governments. Conservation groups can also align with military or paramilitary groups to accomplish shared aims. Actors can also include entrepreneurs trying to profit from eco-capitalism, such as companies developing forest carbon offset projects, biochar companies, and pharmaceutical businesses.

Energy
Green grabbing has been prominent in the energy sector. Often, as countries and governments enter transnational climate agreements such as the Paris Agreement or the Kyoto Protocol, they commit to reaching certain sustainability targets. To fulfill these quotas through initiatives such as renewable energy implementation, indigenous or public lands are seized without consideration for local communities. Confiscated lands may be used for solar energy, wind farms, and biofuel. Under the pretense of environmental preservation, green grabbing borrows from historical stories of colonial resource appropriation. This phenomenon involves a diverse array of participants, including entrepreneurs, activists, and, most significantly, NGOs. Social anthropologists James Fairhead, Melissa Leach, and Ian Scoones note that conservation initiatives often involve partnerships between international environmental organizations, NGOs, national elites, and multinational corporations. Examples include cases like Rio Tinto's activities in Madagascar, where land acquisition for environmental purposes overlaps with mineral extraction, and collaborations between tourist operators, conservation agencies, and governments to promote ecotourism in countries like Colombia, Tanzania, and South Africa. These collaborations underscore the complex dynamics underlying conservation schemes and the blurring of boundaries between environmental protection and profit-driven exploitation.

Wind

Greece
The drive for wind parks in post-crisis Greece has given rise to green grabbing. The argument supporting green energy as a remedy for the nation's economic and environmental problems has gained popularity despite Greece's economic difficulties. The negative socio-ecological effects of wind park growth, such as land expropriation, environmental damage, and the escalation of socioeconomic inequality, are frequently ignored in this narrative. The wind energy industry is dominated by multinational businesses, which promotes wealth accumulation and green grabbing at the expense of regional communities and ecosystems. In a case study of Greece's wind park development, Christina Zoi details that "Neoliberalisation has instigated green grabbing (land, financial and other resources) with adverse implications on local stock-breeders and farmers, domestic and small business electricity consumers, conservation and local biodiversity. These cannot be considered as negligible even under the face of accelerating climate change and its consequences."

Mexico
The development of the Bíi Hioxo wind park involved not only the physical occupation of the land but also the manipulation of narratives surrounding climate change mitigation and the green economy to legitimize the project. The tactics used to suppress resistance, such as portraying wind energy as a solution to energy and climate crises, reflect a form of greenwashing aimed at pacifying opposition and advancing industrial expansion.
Furthermore, the involvement of powerful actors such as Gas Natural Fenosa and local elites highlights how green grabbing operates through alliances between state and corporate interests, leading to the dispossession of local communities and the exploitation of natural resources for profit.

Solar

Morocco
Morocco's solar projects, such as the Ouarzazate Solar Power Station, which employs concentrated solar-thermal power (CSP) technology, divert water resources away from drinking and agriculture in an already semi-arid region. The construction of the Ouarzazate plant, funded through public-private partnerships and loans from international financial institutions, has resulted in annual deficits and added to Morocco's public debt. The $9 billion project's debt, incurred through loans from international financial institutions like the World Bank and the European Investment Bank, is backed by Moroccan government guarantees. On the local scale, those most affected included pastoralists who did not receive proper compensation for the use of their property and were not consulted about how the project might affect water supplies.

India
The Indian government's solar energy initiatives, like the Jawaharlal Nehru National Solar Mission (JNNSM), aim to ramp up solar energy capacity to mitigate climate change and reduce poverty. Yet the pursuit of solar energy projects often involves the dispossession of agropastoralists from their lands, which are essential for grazing, fodder, and fuelwood collection. These lands, categorized as government-owned "marginal" lands or "wastelands", are transformed into solar parks through coercive state policies, denying agropastoralists access to vital resources. Agropastoralist communities often encounter difficulties in accessing necessary energy resources, including traditional fuel such as firewood and modern options like solar-generated electricity. This dual deprivation contributes to the marginalization experienced by rural populations.

ICDP in Madagascar
The Integrated Conservation and Development Projects (ICDP) in Madagascar were mostly managed by NGOs supported by the state government. Neoliberalism led to decentralized conservation efforts from the 1990s until the mid-2000s; at that point, monetary compensation from the government ceased, and conservation efforts were instead contracted out to North American organizations. An internal division arose between the high-status, highly paid jobs of the North American workers and the low-wage work of the Malagasy staff who acted as the enforcers of unpopular fortress conservation through the creation of nature reserves. The Malagasy people within Madagascar see the conservation efforts as attempts at green grabbing and neocolonialism. North American NGOs have responded that the claims are ungrounded, attributing the lack of acceptance of the reserve system to failures in residents' education about, and understanding of, sustainability.

In 2009, the presidential administration of Marc Ravalomanana considered a deal with Daewoo Logistics, a South Korean company, to lease 1.3 million hectares of arable land to grow maize and palm oil. This potential deal was seen as another attempt at colonialism, as the land was to be used by and for foreign nations, while a large portion of land, up to 10 percent, was being allotted for conservation reserves. Protest against the negotiations was met with military action, leading to the removal of Ravalomanana.
The deal was not put into effect, and the resistance and protest of the Malagasy led to the closure of multiple national parks and reserves, allowing residents to continue their use of the land.

Implications
Green grabbing can result in the expulsion of indigenous or peasant communities from the land they live on. In other cases, the use, authority, and management of the resources is restructured, potentially alienating local residents. Evictions due to palm oil biofuel have resulted in the displacement of millions of people in Indonesia, Papua New Guinea, Malaysia, and India. The practice has been criticized in Brazil, where the government referred to one land acquisition NGO as eco-colonialist. A shaman of the Yanomami tribe published a statement through Survival International saying, "Now you want to buy pieces of rainforest, or to plant biofuels. These are useless. The forest cannot be bought; it is our life and we have always protected it. Without the forest, there is only sickness." The head of the Forest Peoples Programme, Simon Colchester, said, "Conservation has immeasurably worsened the lives of indigenous peoples throughout Africa," noting that it resulted in forced expulsion, loss of livelihoods, and violation of human rights.

See also
Fortress conservation
Antarctic Treaty System

References
Ecological economics Ecotourism Environmental economics Indigenous rights Nature conservation Neocolonialism Commodification Environmental controversies Environmental racism
Green grabbing
Environmental_science
2,151
44,661,721
https://en.wikipedia.org/wiki/Sprague%20Electric
Sprague Electric Company was an electronic component maker founded by Robert C. Sprague in 1926. Sprague was best known for making a large line of capacitors used in a wide variety of electrical and electronic equipment in commercial, industrial and military/space applications. Other products included resistive components, magnetic components (transformers and coils), filter assemblies, semiconductors and integrated circuits.

History overview
Sprague used $25,000 of his savings to open Sprague Specialties Company at his home in Quincy, MA, in 1926. One of his first products was the mini condenser (an old name for capacitor). Mini condensers were commonly used in radio applications for noise filtering, signal coupling and tone controls. Early capacitors were two pieces of metal foil separated by wax paper or another suitable insulating material. The type of insulating material determined the capacitor's capacitance and maximum voltage. Capacitors are also useful in high-power applications like motors, and soon Sprague turned his attention to those areas as well.

Sprague found a sustainable product line in capacitors. The increase in the types of radios using AC created demand for many different types of capacitors. By 1929, Sprague Specialties Company needed a bigger facility, and in 1930 Sprague purchased a plant on Beaver Street in North Adams, MA, in Berkshire County. When local residents heard the company was expanding, Sprague received all kinds of incentives from the banks and other businesses to relocate there. Sprague chose the area because he wanted to open a shop where his father Frank had grown up.

By the mid-1930s Sprague had become a recommended source for capacitors among radio manufacturers, the radio repair trade and many electrical applications. As the size of the company grew, there was a desire among the manufacturing workers to form an organized labor union. The Wagner Act of 1935 prohibited company unions. In 1937 the company agreed with the workers to form an independent union, the Independent Condenser Workers Union.

From 1936 to 1944, Sprague Specialties Company sales increased seven-fold; however, expansion put a damper on profits. For many years the company sustained losses. Robert felt strongly connected to his company and to the people of North Adams, and always tried to consider the company's effects as the largest employer in town.

By 1942 the Sprague Specialties Company had relocated to the abandoned Arnold Print Works facility on Marshall St. This became the main facility, and eventually consisted of 26 buildings that were interconnected by tunnels and bridges. Former employees remember the complex layout and interesting ways to get from one department to another. Previous to the Sprague Specialties Company, the Arnold Print Works had been the largest employer in North Adams, operating in the area from 1860 to 1942. In 1942, the company's name was changed from Sprague Specialties Company to Sprague Electric. The Sprague Electric name would remain until its last owner, Penn Central, started to sell off pieces of its Sprague division in the early 1990s.

The Sprague Log (1938–1985)
Beginning in 1938, as CEO, Sprague tried to bridge the gap between the employees and the business with the publication of the Sprague Log. Sprague used this newsletter to bring management and workers closer, and to maintain morale after forcing workers to take a 10% pay cut that same year. The publication was divided into two sections.
Part 1 discussed the company's accomplishments, achievements and the loyalty of Sprague employees, often spotlighting individuals. Part 2 contained employee announcements: births and weddings, social activities, and other family events. Sprague Electric: Early years (1942–1960) After the Japanese attack on Pearl Harbor on December 7, 1941 and the declaration of war that followed, US manufacturing stopped commercial production and switched to wartime activities. Sprague Electric's participation in the US war effort improved its reputation, future contracts, and sales, and propelled the Sprague name to the forefront of the growing American electronics business. One of Sprague Electric's biggest contributions to the war effort was in the manufacturing of the variable-time (VT) proximity fuse. The proximity fuse was a small transmitter (and in some cases a receiver) built into a bomb or artillery shell that would detonate the bomb or shell before impact, causing greater destruction. Sprague Electric continued to make capacitor and resistive components to meet military requirements of quality and reliability. Robert C. Sprague was also a member of the War Production Board's Advisory Committee on Capacitors (1942–1945). During the Second World War, Sprague invented the tantalum capacitor. The use of tantalum allows capacitors to achieve high values of charge storage, or capacitance, along with higher operating voltages, shrinking the capacitor to a fraction of the size of a more conventional design. Vishay Intertechnology, which currently owns the Sprague capacitor brand, markets its tantalum capacitors as Sprague tantalums. Post World War II After the Second World War, Sprague Electric retooled for the commercial and industrial products market, and eventually Sprague capacitors and resistive products became a widely known brand name. Radio and television manufacturers like RCA, Zenith and Philco continued to use Sprague Electric products. Sprague Electric products were also found in stores selling electronic parts and in the electronics servicing business. In 1946 Howard W. Sams (SAMS Publishing) introduced its Photofact servicing manuals, which were a valuable resource for the service of consumer electronics. Sprague Electric capacitors were listed as a recommended replacement part. Sprague Electric flourished during the Cold War and the Space Race because of its reputation and experience in building military components. However, by 1954 most of Sprague Electric's sales and profits were from the TV and radio markets; military products sales were second. Sales reached almost $50 million. Also in 1954, the company built new capacitor plants in North Carolina and in the US territory of Puerto Rico. It was the beginning of Robert C. Sprague's dream to make Sprague Electric into a major corporation; this expansion would continue into the late 1960s. Sales of Sprague Electric products remained steady from 1954 to 1958 at just below $50 million. The company continued to expand its product base by opening a semiconductor plant in New Hampshire in 1957 and a magnetics plant in California. As the company grew, union membership (Independent Condenser Workers Union #2) grew as well. By November 1956 straight hourly workers' wages were tied to the Bureau of Labor Statistics Consumer Price Index. Additionally, workers received better benefits. The Sprague Log recorded this in a "Special Negotiations Supplement."
Sprague Electric: Expansion, growth and difficulties (1960–1971) By 1959 Sprague Electric achieved $50 million in sales. Robert C. Sprague continued as Chairman of the Board and his brother Julian as President. With contemporary advances in integrated circuit and thin-film technologies, Sprague saw a need to support and design products around these new technologies. Thin-film products and integrated circuits led to more compact circuit designs and smaller products. Sprague understood this as the future trend in electronics; he opened more plants in the United States and developed a worldwide network of sales offices. By 1960 Sprague Electric had manufacturing plants in North Carolina, New Hampshire, Vermont, Wisconsin, Virginia, Maryland and California. Many were involved in making capacitors that used thin-film technology. This proved to be a very important product for Sprague Electric. These plants also produced magnetic products (transformers, inductors, etc.). With advances in transistor and integrated circuit technology (later computer chips), resistance to noise interference became a design factor. Magnetics played a role in reducing noise interference. Sprague got into the semiconductor business in the late 1950s, somewhat later than the already established semiconductor firms. Fairchild Camera (directed by Robert Noyce) had marketed the first commercial integrated circuit as early as 1963. Sprague wanted to be an early participant in this young market. They set up a group at the New Hampshire facility where thin-film capacitors were made. In 1965 Sprague Electric acquired Micro Tech (Sunnyvale, CA), a manufacturer of semiconductor fabrication equipment. By 1966 Sprague opened a brand-new facility in Worcester, MA dedicated to semiconductor and integrated circuit fabrication. The new factory was headed by Dr. John L. Sprague, the youngest son of Robert C. Sprague. John Sprague was a graduate of Stanford University, and specialized in semiconductor development. Eventually Micro Tech was moved to the same area. However, Sprague's plans for Micro Tech never blossomed, and it wound up making capacitors. Robert C. Sprague's heavy investment in the semiconductor business reduced income for the company. Sales of Sprague Electric products were $100 million by 1966 and the workforce increased to over 12,000. As the company grew, management was reorganized, and more expansion occurred in the form of external partnerships. However, this rapid expansion served to keep profits down. Additionally, expansion did not have much impact on wages, benefits and working conditions. As one local historian put it, Robert Sprague's view of his employees was "paternalistic". In March 1970 a major labor strike started and affected all areas of the company. The strike lasted 10 weeks and was ultimately settled by a federal mediator. While Robert C. Sprague and the union representatives shook hands after the settlement, the results had a negative effect on the future of the company, its management and employees. Sprague Electric: Later years (1970–1978) With the end of the 1970 strike, Robert C. Sprague retired as chief executive and was succeeded by Neal W. Welch. Although the workers got some of what they demanded, the strike and the new contract would devastate the company. Sprague Electric made cuts to minimize costs, including reducing the labor force and shuttering some of its North Adams operations.
Employee morale plummeted, which was reflected in the rapid decline of the company's newsletter, the Sprague Log, during this period. The paper was published only twice a year, then not at all until 1978; in that year, the company was sold to General Cable, which in turn was taken over by Penn Central in 1981. Closing of the North Adams facility and its future In the 1980s, Sprague Electric was a division of Penn Central. In 1981, John L. Sprague, the youngest son of Robert C. Sprague, was named chief executive. John Sprague tried to bring the employees and management closer together. The Sprague Log increased its frequency of publication, and again emphasized the need to work together. During his leadership, sales of Sprague Electric products still grew steadily, but the company's profits did not. Capacitor products from overseas, as well as other electrical and electronic components, were cutting into sales from US manufacturers. Also by the 1980s, many electronic assembly plants were overseas, and there was more inclination to buy locally or from areas closer to assembly. This was an area in which Sprague Electric could not compete. Even though sales of Sprague products reached $500 million in the mid-1980s, the Sprague division continued to reorganize. In 1985, it was announced that the Sprague division headquarters would move to Lexington, Massachusetts, and the North Adams plant would close down. As a company, Penn Central focused on profits; it viewed Sprague Electric's performance as unsatisfactory, and gradually closed or sold off operations. Sprague was spun off from Penn Central in 1987 under the holding company Sprague Technologies. In 1990, Sprague sold its semiconductor unit to Sanken Electric of Japan. Many of the capacitor products were sold in 1993 to Vishay, a leading manufacturer of components used in electronics for industrial and military/space applications. After Sprague Electric's permanent closing in North Adams, the population of North Adams dropped by 4,000 and unemployment climbed to 14%. The biggest employer was gone and the site was rusting and decaying. Removal and cleanup of the industrial waste, including carcinogenic polychlorinated biphenyls (PCBs), were also lingering problems. In 1996, 17 homes were demolished on Alton Place, Avon and West Main Streets in the Braytonville area due to vapors from a toxic trichloroethylene (TCE) groundwater plume seeping west from the Brown Street site. Massachusetts Museum of Contemporary Art In 1986, Thomas Krens, director of the Williams College Museum of Art, was looking for an exhibit space for large-scale contemporary art works. The mayor of North Adams, John Barrett III, suggested the abandoned Sprague Electric facility. In 1999, after years of demolition, cleanup, restoration, and construction, the Massachusetts Museum of Contemporary Art (MASS MoCA) opened. Twenty-five of the 26 original buildings were restored, including many of the interconnecting bridges and tunnels. The Marshall Street site was listed on the National Register of Historic Places. MASS MoCA is the largest contemporary art museum in the United States. Its location is a multiuse site, with the museum, offices, businesses, and recreation and special activity areas. Legacy of Sprague Capacitors Starting in the early 1950s, Sprague produced its "Black Beauty" line of capacitors. These paper capacitors were similar in design to Robert Sprague's original patent.
Instead of using a wax coating on the outer body to keep moisture out (moisture renders capacitors useless), Sprague applied a plastic resin material to encapsulate the device and provide better and longer-lasting resistance to moisture. In the late 1960s, capacitors developed quickly as better materials, such as Mylar, were used in their production. Capacitors became more reliable, smaller, and able to withstand higher voltages. Sprague Electric's "Orange Drop" capacitors were well received by manufacturers and designers. They set the standard for "modern" capacitors in appearance and performance. The revival of interest in vacuum tube amplifiers brought with it a mystique about having the right electronic components for top performance. Sprague Electric components had a long history of name recognition, quality, and brand loyalty. Additionally, the Sprague "Bumble Bee" capacitors found in the original Gibson Les Pauls are commonly sought after by guitarists today in search of that original Les Paul tone. Some designers and restorers use only Sprague/Vishay Orange Drops or other Sprague capacitors. After the breakup of Sprague Electric, the Orange Drop capacitor line was continued by SBE Inc. until 2012, when the Orange Drop product line was sold to Cornell Dubilier. Sprague capacitors were used in a prototype Apple-1 computer. See also Apollo 11 goodwill messages References Further reading External links The Sprague Log, Preserving a Company Newsletter A complete set of the Sprague Log from the Massachusetts College of Liberal Arts (MCLA) library. Company Town Paul W. Marino, a local historian, details the 1970 strike. The Sprague Electric Company's Long Goodbye Steve Melito, a former Sprague Electric employee, looks at Sprague Electric's impact on the town and the factory closing. Electronics manufacturing American companies established in 1926 Manufacturing companies established in 1926 Manufacturing companies based in Massachusetts Capacitor manufacturers History of radio in the United States Sprague family
Sprague Electric
Engineering
3,235
1,256,648
https://en.wikipedia.org/wiki/Sigmatropic%20reaction
In organic chemistry, a sigmatropic reaction is a pericyclic reaction wherein the net result is that one sigma bond (σ-bond) is changed to another σ-bond in an intramolecular reaction. In this type of rearrangement reaction, a substituent moves from one part of a π-system to another part with simultaneous rearrangement of the π-system. True sigmatropic reactions are usually uncatalyzed, although Lewis acid catalysis is possible. Transition-metal catalysts can, however, mediate analogous reactions that proceed through discrete intermediates. The most well-known of the sigmatropic rearrangements are the [3,3] Cope rearrangement, Claisen rearrangement, Carroll rearrangement, and the Fischer indole synthesis. Overview of sigmatropic shifts Woodward–Hoffmann sigmatropic shift nomenclature Sigmatropic rearrangements are concisely described by an order term [i,j], which is defined as the migration of a σ-bond adjacent to one or more π systems to a new position (i−1) and (j−1) atoms removed from the original location of the σ-bond. When the sum of i and j is an even number, this is an indication of the involvement of a neutral, all-carbon chain. An odd number is an indication of the involvement of a charged carbon atom or of a heteroatom lone pair replacing a carbon–carbon double bond. Thus, [1,5] and [3,3] shifts become [1,4] and [2,3] shifts with heteroatoms, while preserving symmetry considerations. A convenient means of determining the order of a given sigmatropic rearrangement is to number the atoms of the bond being broken as atom 1, and then count the atoms in each direction from the broken bond to the atoms that form the new σ-bond in the product, numbering consecutively. The numbers that correspond to the atoms forming the new bond are then separated by a comma and placed within brackets to create the sigmatropic reaction order descriptor. In the case of hydrogen atom migrations, a similar technique may be applied. When determining the order of a sigmatropic shift involving a hydrogen atom migration, it is critical to count across all atoms involved in the reaction rather than only across the closest atoms. For example, a hydrogen atom migration across a pentadienyl π system is of order [1,5], attained by counting through the whole π system, rather than the [1,3] order designation that would mistakenly result from counting the short way around through a ring CH2 group. As a general approach, one can simply draw the transition state of the reaction. For a sigmatropic reaction, the transition state will consist of two fragments, joined by the forming and breaking σ-bonds. The sigmatropic reaction is named as an [i,j]-sigmatropic rearrangement (i ≤ j) if these two fragments consist of i and j atoms. Suprafacial and antarafacial shifts In principle, all sigmatropic shifts can occur with either retention or inversion of the geometry of the migrating group, depending upon whether the original bonding lobe of the migrating atom or its other lobe is used to form the new bond. In cases of stereochemical retention, the migrating group translates without rotation into the bonding position, while in the case of stereochemical inversion the migrating group both rotates and translates to reach its bonded conformation.
However, another stereochemical effect, equally capable of producing inversion or retention products, is whether the migrating group remains on the original face of the π system after rebonding or instead transfers to the opposite face of the π system. If the migrating group remains on the same face of the π system, the shift is known as suprafacial; if it transfers to the opposite face, it is called an antarafacial shift. Antarafacial shifts are geometrically impossible for transformations that occur within small- or medium-sized rings. Classes of sigmatropic rearrangements [1,3] shifts Thermal hydride shifts In a thermal [1,3] hydride shift, a hydride moves three atoms. The Woodward–Hoffmann rules dictate that it would proceed via an antarafacial shift. Although such a shift is symmetry-allowed, the Möbius topology required in the transition state makes it geometrically impossible, which accounts for the fact that enols do not isomerize without an acid or base catalyst. Thermal alkyl shifts Thermal [1,3] alkyl shifts, like [1,3] hydride shifts, must proceed antarafacially. Here the geometry of the transition state is prohibitive, but an alkyl group, due to the nature of its orbitals, can invert its geometry, form a new bond with the back lobe of its sp3 orbital, and therefore proceed via a suprafacial shift. These reactions are still not common in open-chain compounds because of the highly ordered nature of the transition state, which is more readily achieved in cyclic molecules. Photochemical [1,3] shifts Photochemical [1,3] shifts should proceed through suprafacial shifts; however, most are non-concerted because they proceed through a triplet state (i.e., have a diradical mechanism, to which the Woodward–Hoffmann rules do not apply). [1,5] shifts A [1,5] shift involves the shift of one substituent (hydride, alkyl, or aryl) down five atoms of a π system. Hydrogen has been shown to shift in both cyclic and open-chain compounds at temperatures at or above 200 °C. These reactions are predicted to proceed suprafacially, via a Hückel-topology transition state. Photoirradiation would require an antarafacial shift of hydrogen; although rare, there are examples where such antarafacial shifts are favored. In contrast to hydrogen [1,5] shifts, there have never been any observed [1,5] alkyl shifts in an open-chain compound. Several studies have, however, been done to determine rate preferences for [1,5] alkyl shifts in cyclic systems: carbonyl and carboxyl > hydride > phenyl and vinyl >> alkyl. Alkyl groups undergo [1,5] shifts very poorly, usually requiring high temperatures; however, for cyclohexadienes, the temperature for alkyl shifts is not much higher than that for carbonyls, the best migratory groups. A study showed that this is because alkyl shifts on cyclohexadienes proceed through a different mechanism: first the ring opens, then a [1,7] shift occurs, and finally the ring re-forms electrocyclically. The same mechanistic process, without the final electrocyclic ring-closing reaction, is seen in the interconversion of lumisterol to vitamin D2. [1,7] shifts [1,7] sigmatropic shifts are predicted by the Woodward–Hoffmann rules to proceed in an antarafacial fashion, via a Möbius-topology transition state. An antarafacial [1,7] shift is observed in the conversion of lumisterol to vitamin D2, where, following an electrocyclic ring opening to previtamin D2, a methyl hydrogen shifts.
Bicyclic nonatrienes also undergo [1,7] shifts in a so-called walk rearrangement, which is the shift of a divalent group, as part of a three-membered ring, in a bicyclic molecule. [3,3] shifts [3,3] sigmatropic shifts are well-studied sigmatropic rearrangements. The Woodward–Hoffmann rules predict that these six-electron reactions would proceed suprafacially, via a Hückel-topology transition state. Claisen rearrangement Discovered in 1912 by Rainer Ludwig Claisen, the Claisen rearrangement is the first recorded example of a [3,3]-sigmatropic rearrangement. This rearrangement is a useful carbon–carbon bond-forming reaction. An example of a Claisen rearrangement is the [3,3] rearrangement of an allyl vinyl ether, which upon heating yields a γ,δ-unsaturated carbonyl. The formation of a carbonyl group makes this reaction, unlike other sigmatropic rearrangements, inherently irreversible. Aromatic Claisen rearrangement The ortho-Claisen rearrangement involves the [3,3] shift of an allyl phenyl ether to an intermediate which quickly tautomerizes to an ortho-substituted phenol. When both ortho positions on the benzene ring are blocked, a second [3,3] rearrangement will occur. This para-Claisen rearrangement ends with tautomerization to a tri-substituted phenol. Cope rearrangement The Cope rearrangement is an extensively studied organic reaction involving the [3,3] sigmatropic rearrangement of 1,5-dienes. It was developed by Arthur C. Cope. For example, 3,4-dimethyl-1,5-hexadiene heated to 300 °C yields 2,6-octadiene. Oxy-Cope rearrangement In the oxy-Cope rearrangement, a hydroxyl group is added at C3, forming an enal or enone after keto-enol tautomerism of the intermediate enol. Carroll rearrangement The Carroll rearrangement is a rearrangement reaction in organic chemistry involving the transformation of a β-keto allyl ester into an α-allyl-β-ketocarboxylic acid. This organic reaction is accompanied by decarboxylation, and the final product is a γ,δ-allylketone. The Carroll rearrangement is an adaptation of the Claisen rearrangement and effectively a decarboxylative allylation. Fischer indole synthesis The Fischer indole synthesis is a chemical reaction that produces the aromatic heterocycle indole from a (substituted) phenylhydrazine and an aldehyde or ketone under acidic conditions. The reaction was discovered in 1883 by Hermann Emil Fischer. The choice of acid catalyst is very important. Brønsted acids such as HCl, H2SO4, polyphosphoric acid and p-toluenesulfonic acid have been used successfully. Lewis acids such as boron trifluoride, zinc chloride, iron(III) chloride, and aluminium chloride are also useful catalysts. Several reviews have been published. [5,5] shifts Similar to [3,3] shifts, the Woodward–Hoffmann rules predict that [5,5] sigmatropic shifts would proceed suprafacially, via a Hückel-topology transition state. These reactions are rarer than [3,3] sigmatropic shifts, but this is mainly because molecules that can undergo [5,5] shifts are rarer than molecules that can undergo [3,3] shifts. [2,3] shifts An example of a [2,3]-sigmatropic rearrangement is the [2,3]-Wittig rearrangement. Walk rearrangements The migration of a divalent group, such as O, S, N–R, or C–R2, which is part of a three-membered ring in a bicyclic molecule, is commonly referred to as a walk rearrangement. This can be formally characterized according to the Woodward–Hoffmann rules as a [1,n] sigmatropic shift.
An example of such a rearrangement is the shift of substituents on tropilidenes (1,3,5-cycloheptatrienes). When heated, the π system goes through an electrocyclic ring closing to form bicyclo[4.1.0]heptadiene (norcaradiene). Thereafter follows a [1,5] alkyl shift and an electrocyclic ring opening. Proceeding through a [1,5] shift, the walk rearrangement of norcaradienes is expected to proceed suprafacially with retention of stereochemistry. Experimental observations, however, show that the [1,5] shifts of norcaradienes proceed antarafacially. Theoretical calculations found the [1,5] shift to be a diradical process, but without involving any diradical minima on the potential energy surface. See also 2,3-sigmatropic rearrangement NIH shift Frontier molecular orbital theory References Reaction mechanisms Rearrangement reactions
Sigmatropic reaction
Chemistry
2,717
54,147,053
https://en.wikipedia.org/wiki/NR%20Vulpeculae
NR Vulpeculae is a red supergiant and irregular variable star in the constellation Vulpecula. It has an apparent magnitude varying between 9.13 and 9.61, which is too faint to be seen with the naked eye. Characteristics It has a spectral classification of M1Ia, meaning that it is a luminous supergiant star of spectral type M. Levesque et al. (2005) published a different spectral type of K3I, meaning that it is a K-type supergiant star. NR Vulpeculae has expanded to 920 times the Sun's radius and is currently emitting 200,000 times the Sun's luminosity; these two figures are mutually consistent with its temperature via the Stefan–Boltzmann law, as the short check below shows. If placed in the Solar System, its photosphere would reach beyond the orbit of Mars. It has a cool surface temperature of around 4,000 K, giving it the typical orange color of a K-type star. NR Vulpeculae is also a slow irregular variable, with an apparent magnitude ranging from 9.13 to 9.61. It is considered a likely member of the Vulpecula OB1 stellar association. Notes References Vulpecula M-type supergiants K-type supergiants J19501193+2455240 IRAS catalogue objects 339034 BD+24 390 Slow irregular variables Vulpeculae, NR
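A quick way to sanity-check the quoted radius, temperature, and luminosity is the Stefan–Boltzmann scaling L/Lsun = (R/Rsun)² (T/Tsun)⁴. The following minimal sketch is not part of the source article; the solar effective temperature of 5772 K is an assumed (IAU nominal) value:

```python
# Hedged sketch: check the article's figures against L/Lsun = (R/Rsun)**2 * (T/Tsun)**4.
R = 920.0       # radius in solar radii, as quoted above
T = 4000.0      # effective temperature in kelvin, as quoted above
T_SUN = 5772.0  # IAU nominal solar effective temperature (assumption, not from the article)

L = R**2 * (T / T_SUN)**4
print(f"L ~ {L:,.0f} Lsun")  # ~195,000, consistent with the quoted ~200,000
```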
NR Vulpeculae
Astronomy
283
33,485,966
https://en.wikipedia.org/wiki/List%20of%20dopaminergic%20drugs
This is a list of dopaminergic drugs. These are pharmaceutical drugs, naturally occurring compounds and other chemicals that influence the function of the neurotransmitter dopamine. Dopamine receptor ligands Dopamine receptors are a class of G protein-coupled receptors that are prominent in the vertebrate central nervous system (CNS) and are implicated in many neurological processes, including motivational and incentive salience, cognition, memory, learning, and fine motor control, as well as modulation of neuroendocrine signaling. Abnormal dopamine receptor signaling and dopaminergic nerve function are implicated in several neuropsychiatric disorders. Dopamine receptors are therefore common drug targets. Dopamine receptors activate different effectors not only through G-protein coupling, but also through signaling involving interactions with different proteins (dopamine receptor-interacting proteins). Agonists Adamantanes: Amantadine • Memantine • Rimantadine Aminotetralins: 7-OH-DPAT • 8-OH-PBZI • Rotigotine • UH-232 Benzazepines: 6-Br-APB • Fenoldopam • SKF-38,393 • SKF-77,434 • SKF-81,297 • SKF-82,958 • SKF-83,959 Ergolines: Bromocriptine • Cabergoline • Dihydroergocryptine • Lisuride • Lysergic acid diethylamide (LSD) • Pergolide Dihydrexidine derivatives: 2-OH-NPA • A-86,929 • Ciladopa • Dihydrexidine • Dinapsoline • Dinoxyline • Doxanthrine Others: A-68,930 • A-77,636 • A-412,997 • ABT-670 • ABT-724 • Aplindore • Apomorphine • Aripiprazole • Azodopa • Bifeprunox • BP-897 • CY-208,243 • Dizocilpine • Etilevodopa • Flibanserin • Ketamine • Melevodopa • Modafinil • Pardoprunox • Phencyclidine • PD-128,907 • PD-168,077 • PF-219,061 • Piribedil • Pramipexole • Propylnorapomorphine • Pukateine • Quinagolide • Quinelorane • Quinpirole • RDS-127 • Ro10-5824 • Ropinirole • Rotigotine • Roxindole • Salvinorin A • SKF-89,145 • Sumanirole • Terguride • Umespirone • WAY-100,635 Antagonists Typical antipsychotics: Acepromazine • Azaperone • Benperidol • Bromperidol • Clopenthixol • Chlorpromazine • Chlorprothixene • Droperidol • Flupentixol • Fluphenazine • Fluspirilene • Haloperidol • Loxapine • Mesoridazine • Methotrimeprazine • Nemonapride • Penfluridol • Perazine • Periciazine • Perphenazine • Pimozide • Prochlorperazine • Promazine • Sulforidazine • Sulpiride • Sultopride • Thioridazine • Thiothixene • Trifluoperazine • Triflupromazine • Trifluperidol • Zuclopenthixol Atypical antipsychotics: Amisulpride • Asenapine • Blonanserin • Cariprazine • Carpipramine • Clocapramine • Clozapine • Gevotroline • Iloperidone • Lurasidone • Melperone • Molindone • Mosapramine • Ocaperidone • Olanzapine • Paliperidone • Perospirone • Piquindone • Quetiapine • Remoxipride • Risperidone • Sertindole • Tiospirone • Ziprasidone • Zotepine Antiemetics: AS-8112 • Alizapride • Bromopride • Clebopride • Domperidone • Metoclopramide • Thiethylperazine Others: Amoxapine • Buspirone • Butaclamol • Ecopipam • N-Ethoxycarbonyl-2-ethoxy-1,2-dihydroquinoline (EEDQ) • Eticlopride • Fananserin • L-745,870 • Nafadotride • Nuciferine • PNU-99,194 • Raclopride • Sarizotan • SB-277,011-A • SCH-23,390 • SKF-83,566 • SKF-83,959 • Sonepiprazole • Spiperone • Spiroxatrine • Stepholidine • Tetrahydropalmatine • Tiapride • UH-232 • Yohimbine Reuptake inhibitors Dopamine transporter (DAT) inhibitors Piperazines: DBL-583 • GBR-12,935 • Nefazodone • Vanoxerine Piperidines: 1-(1-(1-Benzothiophen-2-yl)cyclohexyl)piperidine (BTCP) • Desoxypipradrol • Dextromethylphenidate • Difemetorex • Ethylphenidate • Methylnaphthidate • Isopropylphenidate •
Methylphenidate • Phencyclidine • Pipradrol Pyrrolidines: Diphenylprolinol • Methylenedioxypyrovalerone (MDPV) • Naphyrone • Prolintane • Pyrovalerone Tropanes: 3β-(4'-Chlorophenyl)-2β-(3'-phenylisoxazol-5'-yl)tropane (β-CPPIT) • Altropane • Brasofensine • WIN 35428 (β-CFT) • Cocaine • Dichloropane • Difluoropine • N-(2'-Fluoroethyl-)-3β-(4'-chlorophenyl)-2β-(3'-phenylisoxazol-5'-yl)nortropane (FE-β-CPPIT) • N-(3'-Fluoropropyl-)-3β-(4'-chlorophenyl)-2β-(3'-phenylisoxazol-5'-yl)nortropane (FP-β-CPPIT) • Ioflupane (123I) • Iometopane • RTI-112 • RTI-113 • RTI-121 • RTI-126 • RTI-150 • RTI-177 • RTI-229 • RTI-336 • Tenocyclidine • Tesofensine • Troparil • Tropoxane • 2β-Propanoyl-3β-(4-tolyl)-tropane (WF-11) • 2β-Propanoyl-3β-(2-naphthyl)-tropane (WF-23) • 2-Propanoyl-3-(4-isopropylphenyl)-tropane (WF-31) • 2α-(Propanoyl)-3β-(2-(6-methoxynaphthyl))-tropane (WF-33) Others: Adrafinil • Armodafinil • Amfonelic acid • Amineptine • Benzatropine (benztropine) • Bromantane • 2-Butyl-3-(p-tolyl)quinuclidine (BTQ) • BTS-74,398 • Bupropion (amfebutamone) • Ciclazindol • Diclofensine • Dimethocaine • Diphenylpyraline • Dizocilpine • DOV-102,677 • DOV-21,947 • DOV-216,303 • Etybenzatropine (ethylbenztropine) • EXP-561 • Fencamine • Fencamfamine • Fezolamine • GYKI-52,895 • Hydrafinil • Indatraline • Ketamine • Lefetamine • Levophacetoperane • LR-5182 • Manifaxine • Mazindol • Medifoxamine • Mesocarb • Modafinil • Nefopam • Nomifensine • NS-2359 • O-2172 • Pridefrine • Propylamphetamine • Radafaxine • SEP-225,289 • SEP-227,162 • Sertraline • Sibutramine • Tametraline • Tripelennamine Vesicular monoamine transporter (VMAT) inhibitors Deserpidine • Deutetrabenazine • Ibogaine • Reserpine • Tetrabenazine • Valbenazine Dopamine releasing agent Morpholines: Fenbutrazate • Morazone • Phendimetrazine • Phenmetrazine Oxazolines: 4-Methylaminorex (4-MAR, 4-MAX) • Aminorex • Clominorex • Cyclazodone • Fenozolone • Fluminorex • Pemoline • Thozalinone Phenethylamines (also amphetamines, cathinones, phentermines, etc.): 2-Hydroxyphenethylamine (2-OH-PEA) • 4-Chlorophenylisobutylamine (4-CAB) • 4-Methylamphetamine (4-MA) • 4-Methylmethamphetamine (4-MMA) • Alfetamine • Amfecloral • Amfepentorex • Amfepramone • Amphetamine (dextroamphetamine, levoamphetamine) • Amphetaminil • β-Methylphenethylamine (β-Me-PEA) • Benzodioxolylbutanamine (BDB) • Benzodioxolylhydroxybutanamine (BOH) • Benzphetamine • Buphedrone • Butylone • Cathine • Cathinone • Clobenzorex • Clortermine • D-Deprenyl • Dimethoxyamphetamine (DMA) • Dimethoxymethamphetamine (DMMA) • Dimethylamphetamine • Dimethylcathinone (dimethylpropion, metamfepramone) • Ethcathinone (ethylpropion) • Ethylamphetamine • Ethylbenzodioxolylbutanamine (EBDB) • Ethylone • Famprofazone • Fenethylline • Fenproporex • Flephedrone • Fludorex • Furfenorex • Hordenine • Lophophine (homomyristicylamine) • Mefenorex • Mephedrone • Methamphetamine (desoxyephedrine, methedrine; dextromethamphetamine, levomethamphetamine) • Methcathinone (methylpropion) • Methedrone • Methoxymethylenedioxyamphetamine (MMDA) • Methoxymethylenedioxymethamphetamine (MMDMA) • Methylbenzodioxolylbutanamine (MBDB) • Methylenedioxyamphetamine (MDA, tenamfetamine) • Methylenedioxyethylamphetamine (MDEA) • Methylenedioxyhydroxyamphetamine (MDOH) • Methylenedioxymethamphetamine (MDMA) • Methylenedioxymethylphenethylamine (MDMPEA, homarylamine) • Methylenedioxyphenethylamine (MDPEA, homopiperonylamine) • Methylone • Ortetamine • Parabromoamphetamine (PBA) • Parachloroamphetamine (PCA) • Parafluoroamphetamine (PFA) • 
Parafluoromethamphetamine (PFMA) • Parahydroxyamphetamine (PHA) • Paraiodoamphetamine (PIA) • Paredrine (norpholedrine, oxamphetamine) • Phenethylamine (PEA) • Pholedrine • Phenpromethamine • Prenylamine • Propylamphetamine • Tiflorex (flutiorex) • Tyramine (TRA) • Xylopropamine • Zylofuramine Piperazines: 2,5-Dimethoxy-4-bromobenzylpiperazine (2C-B-BZP) • Benzylpiperazine (BZP) • Methoxyphenylpiperazine (MeOPP, paraperazine) • Methylbenzylpiperazine (MBZP) • Methylenedioxybenzylpiperazine (MDBZP, piperonylpiperazine) Others: 2-Amino-1,2-dihydronaphthalene (2-ADN) • 2-Aminoindane (2-AI) • 2-Aminotetralin (2-AT) • 4-Benzylpiperidine (4-BP) • 5-Iodo-2-aminoindane (5-IAI) • Clofenciclan • Cyclopentamine • Cypenamine • Cyprodenate • Feprosidnine • Gilutensin • Heptaminol • Hexacyclonate • Indanylaminopropane (IAP) • Indanorex • Isometheptene • Methylhexanamine • Naphthylaminopropane (NAP) • Octodrine • Phthalimidopropiophenone • Propylhexedrine (levopropylhexedrine) • Tuaminoheptane (tuamine) Activity enhancers Benzofuranylpropylaminopentane (BPAP) • Desmethylselegiline • Indolylpropylaminopentane (IPAP) • Phenethylamine • Phenylpropylaminopentane (PPAP) • Selegiline (L-deprenyl) • Tryptamine • Tyramine Phenylalanine hydroxylase inhibitors 3,4-Dihydroxystyrene Tyrosine hydroxylase inhibitors 3-Iodotyrosine • Aquayamycin • Bulbocapnine • Metirosine • Oudenone Aromatic L-amino acid decarboxylase inhibitors (DOPA decarboxylase inhibitors) Benserazide • Carbidopa • Genistein • Methyldopa Degradation inhibitors Monoamine oxidase inhibitors (MAOIs) Nonselective: Benmoxin • Caroxazone • Echinopsidine • Furazolidone • Hydralazine • Indantadol • Iproclozide • Iproniazid • Isocarboxazid • Isoniazid • Linezolid • Mebanazine • Metfendrazine • Nialamide • Octamoxin • Paraxazone • Phenelzine • Pheniprazine • Phenoxypropazine • Pivalylbenzhydrazine • Procarbazine • Safrazine • Tranylcypromine MAO-A selective: Amiflamine • Bazinaprine • Befloxatone • Befol • Brofaromine • Cimoxatone • Clorgiline • Esuprone • Harmala alkaloids (harmine, harmaline, tetrahydroharmine, harman, norharman, etc.) • Methylene blue • Metralindole • Minaprine • Moclobemide • Pirlindole • Sercloremine • Tetrindole • Toloxatone • Tyrima MAO-B selective: D-Deprenyl • Selegiline (L-deprenyl) • Ladostigil • Lazabemide • Milacemide • Mofegiline • Pargyline • Rasagiline • Safinamide Catechol-O-methyl transferase (COMT) inhibitors Entacapone • Nitecapone • Opicapone • Tolcapone Dopamine beta hydroxylase inhibitors Bupicomide • Disulfiram • Dopastin • Fusaric acid • Nepicastat • Phenopicolinic acid • Tropolone Others Precursors L-Phenylalanine → L-tyrosine → L-DOPA (levodopa) Cofactors Ferrous iron (Fe2+) • Tetrahydrobiopterin • Vitamin B3 (niacin, nicotinamide → NADPH) • Vitamin B6 (pyridoxine, pyridoxamine, pyridoxal → pyridoxal phosphate) • Vitamin B9 (folic acid → tetrahydrofolic acid) • Vitamin C (ascorbic acid) • Zinc (Zn2+) Neurotoxins MPP+ • MPTP • Oxidopamine (6-hydroxydopamine) Levodopa prodrugs Etilevodopa • Foslevodopa • Melevodopa • XP-21279 Photoswitchable ligands A photoswitchable agonist of D1-like receptors (azodopa) has been described that allows reversible control of dopaminergic transmission in wildtype animals. References Dopaminergics
List of dopaminergic drugs
Chemistry
3,930
1,326,932
https://en.wikipedia.org/wiki/Beamforming
Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception. This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array. Beamforming can be used for radio or sound waves. It has found numerous applications in radar, sonar, seismology, wireless communications, radio astronomy, acoustics and biomedicine. Adaptive beamforming is used to detect and estimate the signal of interest at the output of a sensor array by means of optimal (e.g. least-squares) spatial filtering and interference rejection. Techniques To change the directionality of the array when transmitting, a beamformer controls the phase and relative amplitude of the signal at each transmitter, in order to create a pattern of constructive and destructive interference in the wavefront. When receiving, information from different sensors is combined in a way where the expected pattern of radiation is preferentially observed. For example, in sonar, to send a sharp pulse of underwater sound towards a ship in the distance, simply simultaneously transmitting that sharp pulse from every sonar projector in an array fails because the ship will first hear the pulse from the speaker that happens to be nearest the ship, then later pulses from speakers that happen to be further from the ship. The beamforming technique involves sending the pulse from each projector at slightly different times (the projector closest to the ship last), so that every pulse hits the ship at exactly the same time, producing the effect of a single strong pulse from a single powerful projector. The same technique can be carried out in air using loudspeakers, or in radar/radio using antennas. In passive sonar, and in reception in active sonar, the beamforming technique involves combining delayed signals from each hydrophone at slightly different times (the hydrophone closest to the target will be combined after the longest delay), so that every signal reaches the output at exactly the same time, making one loud signal, as if the signal came from a single, very sensitive hydrophone. Receive beamforming can also be used with microphones or radar antennas. With narrowband systems the time delay is equivalent to a "phase shift", so in this case the array of antennas, each one shifted a slightly different amount, is called a phased array. A narrow band system, typical of radars, is one where the bandwidth is only a small fraction of the center frequency. With wideband systems this approximation no longer holds, which is typical in sonars. In the receive beamformer the signal from each antenna may be amplified by a different "weight." Different weighting patterns (e.g., Dolph–Chebyshev) can be used to achieve the desired sensitivity patterns. A main lobe is produced together with nulls and sidelobes. As well as controlling the main lobe width (beamwidth) and the sidelobe levels, the position of a null can be controlled. This is useful to ignore noise or jammers in one particular direction, while listening for events in other directions. A similar result can be obtained on transmission. 
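To make the weighting-and-phasing idea concrete, here is a minimal sketch of narrowband delay-and-sum reception on a uniform linear array; the element count, spacing, arrival angle, and signal model are illustrative assumptions rather than anything specified in this article:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                      # number of array elements (assumed, illustrative)
d = 0.5                    # element spacing in wavelengths (assumed)
theta = np.deg2rad(20.0)   # arrival direction of the desired signal

# Steering vector: the relative phase a plane wave from theta produces at each element.
a = np.exp(2j * np.pi * d * np.arange(N) * np.sin(theta))

T = 10_000  # snapshots
s = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)            # unit-power signal
noise = (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2)  # unit-power noise
x = np.outer(a, s) + noise  # received snapshots: aligned signal plus independent noise

w = a / N                   # delay-and-sum ("conventional") weights steered at theta
y = w.conj() @ x            # beamformer output, one combined stream

# With unit signal and noise powers, the output SNR equals N (about 9 dB for N = 8).
print(np.abs(w.conj() @ a) ** 2 / np.linalg.norm(w) ** 2)
```

Steering simply means recomputing a (and hence w) for a new angle; an adaptive beamformer, discussed next, would instead derive the weights from the received data itself.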
For the full mathematics on directing beams using amplitude and phase shifts, see the mathematical section in phased array. Beamforming techniques can be broadly divided into two categories: conventional (fixed or switched beam) beamformers adaptive beamformers or phased array Desired signal maximization mode Interference signal minimization or cancellation mode Conventional beamformers, such as the Butler matrix, use a fixed set of weightings and time-delays (or phasings) to combine the signals from the sensors in the array, primarily using only information about the location of the sensors in space and the wave directions of interest. In contrast, adaptive beamforming techniques (e.g., MUSIC, SAMV) generally combine this information with properties of the signals actually received by the array, typically to improve rejection of unwanted signals from other directions. This process may be carried out in either the time or the frequency domain. As the name indicates, an adaptive beamformer is able to automatically adapt its response to different situations. Some criterion has to be set up to allow the adaptation to proceed, such as minimizing the total noise output. Because of the variation of noise with frequency, in wideband systems it may be desirable to carry out the process in the frequency domain. Beamforming can be computationally intensive. A sonar phased array has a data rate low enough that it can be processed in real time in software, which is flexible enough to transmit or receive in several directions at once. In contrast, a radar phased array has a data rate so high that it usually requires dedicated hardware processing, which is hard-wired to transmit or receive in only one direction at a time. However, newer field-programmable gate arrays are fast enough to handle radar data in real time, and can be quickly re-programmed like software, blurring the hardware/software distinction. Sonar beamforming requirements Sonar beamforming utilizes a similar technique to electromagnetic beamforming, but varies considerably in implementation details. Sonar applications vary from 1 Hz to as high as 2 MHz, and array elements may be few and large, or number in the hundreds yet be very small. This shifts sonar beamforming design efforts significantly between demands of such system components as the "front end" (transducers, pre-amplifiers and digitizers) and the actual beamformer computational hardware downstream. High-frequency, focused-beam, multi-element imaging-search sonars and acoustic cameras often implement fifth-order spatial processing that places strains equivalent to Aegis radar demands on the processors. Many sonar systems, such as those on torpedoes, are made up of arrays of up to 100 elements that must accomplish beam steering over a 100-degree field of view and work in both active and passive modes. Sonar arrays are used both actively and passively in 1-, 2-, and 3-dimensional arrays. 1-dimensional "line" arrays are usually in multi-element passive systems towed behind ships and in single- or multi-element side-scan sonar. 2-dimensional "planar" arrays are common in active/passive ship hull-mounted sonars and some side-scan sonar. 3-dimensional spherical and cylindrical arrays are used in "sonar domes" on modern submarines and ships. Sonar differs from radar in that in some applications, such as wide-area search, all directions often need to be listened to, and in some applications broadcast to, simultaneously. Thus a multibeam system is needed.
In a narrowband sonar receiver, the phases for each beam can be manipulated entirely by signal processing software, as compared to present radar systems that use hardware to 'listen' in a single direction at a time. Sonar also uses beamforming to compensate for the significant problem of the slower propagation speed of sound as compared to that of electromagnetic radiation. In side-look sonars, the towing system or vehicle carrying the sonar is moving at sufficient speed to move the sonar out of the field of the returning sound "ping". In addition to focusing algorithms intended to improve reception, many side-scan sonars also employ beam steering to look forward and backward to "catch" incoming pulses that would have been missed by a single side-looking beam. Schemes The simplest conventional beamformer is the delay-and-sum beamformer: all the weights of the antenna elements have equal magnitudes, and the beamformer is steered to a specified direction only by selecting appropriate phases for each antenna. If the noise is uncorrelated and there are no directional interferences, the signal-to-noise ratio of a beamformer with N antennas, each receiving a signal of power σ² in noise of variance (noise power) σₙ², is SNR = N·σ²/σₙ²; that is, the array provides an SNR gain of N over a single antenna. A null-steering beamformer is optimized to have zero response in the direction of one or more interferers. A frequency-domain beamformer treats each frequency bin as a narrowband signal, for which the filters are complex coefficients (that is, gains and phase shifts), separately optimized for each frequency. Evolved Beamformer The delay-and-sum beamforming technique uses multiple microphones to localize sound sources. One disadvantage of this technique is that adjustments of the position or of the number of microphones change the performance of the beamformer nonlinearly. Additionally, due to the number of combinations possible, it is computationally hard to find the best configuration. One of the techniques to solve this problem is the use of genetic algorithms. Such an algorithm searches for the microphone-array configuration that provides the highest signal-to-noise ratio for each steered orientation. Experiments showed that such an algorithm could find the best configuration of a constrained search space comprising ~33 million solutions in a matter of seconds instead of days. History in wireless communication standards Beamforming techniques used in cellular phone standards have advanced through the generations to make use of more complex systems to achieve higher-density cells with higher throughput. Passive mode: (almost) non-standardized solutions Wideband code division multiple access (WCDMA) supports direction of arrival (DOA) based beamforming Active mode: mandatory standardized solutions 2G – Transmit antenna selection as an elementary beamforming 3G – WCDMA: transmit antenna array (TxAA) beamforming 3G evolution – LTE/UMB: multiple-input multiple-output (MIMO) precoding based beamforming with partial space-division multiple access (SDMA) Beyond 3G (4G, 5G...) – More advanced beamforming solutions to support SDMA, such as closed-loop beamforming and multi-dimensional beamforming, are expected An increasing number of consumer 802.11ac Wi-Fi devices with MIMO capability can support beamforming to boost data communication rates. Digital, analog, and hybrid To receive (but not transmit), there is a distinction between analog and digital beamforming.
For example, if there are 100 sensor elements, the "digital beamforming" approach entails that each of the 100 signals passes through an analog-to-digital converter to create 100 digital data streams. Then these data streams are added up digitally, with appropriate scale factors or phase shifts, to get the composite signals. By contrast, the "analog beamforming" approach entails taking the 100 analog signals, scaling or phase-shifting them using analog methods, summing them, and then usually digitizing the single output data stream. Digital beamforming has the advantage that the digital data streams (100 in this example) can be manipulated and combined in many different ways in parallel, to get many different output signals simultaneously. The signals from every direction can be measured simultaneously, and the signals can be integrated for a longer time when studying far-off objects and simultaneously integrated for a shorter time to study fast-moving close objects, and so on. This cannot be done as effectively with analog beamforming, not only because each parallel signal combination requires its own circuitry, but more fundamentally because digital data can be copied perfectly but analog data cannot. (There is only so much analog power available, and amplification adds noise.) Therefore, if the received analog signal is split up and sent into a large number of different signal-combination circuits, it can reduce the signal-to-noise ratio of each. In MIMO communication systems with a large number of antennas, so-called massive MIMO systems, the beamforming algorithms executed at the digital baseband can get very complex. In addition, if all beamforming is done at baseband, each antenna needs its own RF feed. At high frequencies and with a large number of antenna elements, this can be very costly, and can increase loss and complexity in the system. To remedy these issues, hybrid beamforming has been suggested, in which some of the beamforming is done using analog components rather than digitally. There are many different functions that can be performed using analog components instead of at the digital baseband. Beamforming, whether done digitally or by means of an analog architecture, has recently been applied in integrated sensing and communication technology. For instance, a beamformer has been proposed that, under imperfect channel state information, performs communication tasks while at the same time detecting targets in the scene. For speech audio Beamforming can be used to try to extract sound sources in a room, such as multiple speakers in the cocktail party problem. This requires the locations of the speakers to be known in advance, for example by using the time of arrival from the sources to mics in the array, and inferring the locations from the distances. Compared to carrier-wave telecommunications, natural audio contains a variety of frequencies. It is advantageous to separate frequency bands prior to beamforming because different frequencies have different optimal beamforming filters (and hence can be treated as separate problems, in parallel, and then recombined afterward). Properly isolating these bands involves specialized non-standard filter banks. In contrast, for example, the standard fast Fourier transform (FFT) band-filters implicitly assume that the only frequencies present in the signal are exact harmonics; frequencies which lie between these harmonics will typically activate all of the FFT channels (which is not what is wanted in a beamform analysis). Instead, filters can be designed in which only local frequencies are detected by each channel (while retaining the recombination property to be able to reconstruct the original signal); such filters are typically non-orthogonal, unlike the FFT basis. A per-bin sketch of this frequency-domain approach appears below.
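As a concrete illustration of per-band processing, here is a minimal sketch of a frequency-domain delay-and-sum beamformer for a uniform linear microphone array, in which each FFT bin receives its own phase shift. The geometry, speed of sound, and function name are illustrative assumptions, not taken from this article:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air, an assumed nominal value

def beamform_freq_domain(frames, mic_positions, theta, fs):
    """Steer a uniform linear mic array toward angle theta (radians).

    frames: (n_mics, n_samples) block of time-domain samples
    mic_positions: (n_mics,) element positions along the array axis, in meters
    fs: sampling rate in Hz
    """
    n_mics, n = frames.shape
    spectra = np.fft.rfft(frames, axis=1)             # one spectrum per microphone
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)            # frequency of each bin, Hz
    delays = mic_positions * np.sin(theta) / SPEED_OF_SOUND  # relative arrival delays, s
    # A time delay tau appears as a phase ramp exp(-2j*pi*f*tau) in frequency, so
    # multiplying by the conjugate ramp aligns the arrivals bin by bin.
    align = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft((align * spectra).mean(axis=0), n=n)
```

Because every bin carries its own complex gain, the same machinery extends directly to per-frequency weights optimized independently, as described above.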
See also References General Louay M. A. Jalloul and Sam P. Alex, "Evaluation Methodology and Performance of an IEEE 802.16e System", presented to the IEEE Communications and Signal Processing Society, Orange County Joint Chapter (ComSig), December 7, 2006. Available at: https://web.archive.org/web/20110414143801/http://chapters.comsoc.org/comsig/meet.html H. L. Van Trees, Optimum Array Processing, Wiley, NY, 2002. Jian Li and Petre Stoica, eds., Robust Adaptive Beamforming, New Jersey: John Wiley, 2006. M. Soltanalian, Signal Design for Active Sensing and Communications, Uppsala Dissertations from the Faculty of Science and Technology (printed by Elanders Sverige AB), 2014. "A Primer on Digital Beamforming" by Toby Haynes, March 26, 1998. "What Is Beamforming?", an introduction to sonar beamforming by Greg Allen. "Dolph–Chebyshev Weights", antenna-theory.com. A collection of pages providing a simple introduction to microphone array beamforming External links MU-MIMO Beamforming by Constructive Interference, Wolfram Demonstrations Project Acoustic measurement Antennas (radio) Signal processing Sonar Speech processing
Beamforming
Technology,Engineering
3,105
3,244,435
https://en.wikipedia.org/wiki/List%20of%20garden%20types
A wide range of garden types exist. Below is a list of examples. By country of origin Chinese garden Cantonese garden Sichuanese garden Dutch garden Egyptian garden English garden English landscape garden French garden French formal garden French landscape garden Gardens of the French Renaissance German garden Greek garden Indian garden Mughal garden Italian garden Italian Renaissance garden Japanese garden Japanese dry garden Japanese tea garden Tsubo-niwa Korean garden Persian garden Charbagh Paradise garden Spanish garden Andalusian Patio United States garden Colonial Revival garden By historical empire Byzantine gardens Mughal gardens Persian gardens Roman gardens In religion Bahá'í gardens Biblical garden Islamic garden Mary garden Sacred garden Other Aquascaping Back garden Baroque garden Bog garden Bosquet Botanical gardens Alpine Arboretum Palmetum Bottle garden Butterfly gardening Cactus garden Charbagh Color garden Community garden Allotment (gardening) Communal garden Garden sharing Container garden Cottage garden Energy-efficient landscaping Ferme ornée Fernery Flower box Flower garden Forest gardening Front garden Garden room Garden square Hanging garden (cultivation) Herb gardens Hòn Non Bộ Hortus conclusus Intercultural Garden Keyhole garden Kitchen garden Knot garden Maze Hedge maze Turf maze Memorial garden Monastic garden Moon garden Moss garden Organic horticulture Pattern gardening Permaculture Physic garden Playscape Paradise garden Pleasure garden Pollinator garden Rain garden Raised bed gardening Rock garden Roof garden Green roof Subtropical climate vegetated roof Rose garden School garden Sculpture garden Sculpture trail Sensory garden Shade garden Shakespeare garden Stumpery Sustainable gardening Tea garden Telegarden Therapeutic garden Terrace garden Trial garden Tropical garden Tropical horticulture Underground farming Upside-down gardening Urban horticulture Vertical garden Victory garden Walled garden Water garden Wildlife garden Window box Winter garden Woodland garden Xeriscaping Zen garden See also History of gardening Landscape design :Category:Garden features :Category:Types of garden External links Historic garden types Gardening lists Design-related lists Types Garden design history
List of garden types
Engineering
374
38,203,359
https://en.wikipedia.org/wiki/Reverse%20genetics
Reverse genetics is a method in molecular genetics that is used to help understand the function(s) of a gene by analysing the phenotypic effects caused by genetically engineering specific nucleic acid sequences within the gene. The process proceeds in the opposite direction to forward genetic screens of classical genetics. While forward genetics seeks to find the genetic basis of a phenotype or trait, reverse genetics seeks to find what phenotypes are controlled by particular genetic sequences. Automated DNA sequencing generates large volumes of genomic sequence data relatively rapidly. Many genetic sequences are discovered in advance of other, less easily obtained, biological information. Reverse genetics attempts to connect a given genetic sequence with specific effects on the organism. Reverse genetics systems can also allow the recovery and generation of infectious or defective viruses with desired mutations, which makes it possible to study the virus in vitro and in vivo. Techniques used In order to learn the influence a sequence has on phenotype, or to discover its biological function, researchers can engineer a change in, or disrupt, the DNA. After this change has been made, a researcher can look for its effect in the whole organism. There are several different methods of reverse genetics: Directed deletions and point mutations Site-directed mutagenesis is a sophisticated technique that can either change regulatory regions in the promoter of a gene or make subtle codon changes in the open reading frame to identify amino acid residues important for protein function. Alternatively, the technique can be used to create null alleles so that the gene is not functional. For example, deletion of a gene by gene targeting (gene knockout) can be done in some organisms, such as yeast, mice and moss. Uniquely among plants, in Physcomitrella patens, gene knockout via homologous recombination to create knockout moss is nearly as efficient as in yeast. In the case of the yeast model system, directed deletions have been created in every non-essential gene in the yeast genome. In the case of the plant model system, huge mutant libraries have been created based on gene-disruption constructs. In gene knock-in, the endogenous exon is replaced by an altered sequence of interest. In some cases conditional alleles can be used so that the gene has normal function until the conditional allele is activated. This might entail "knocking in" recombinase sites (such as lox or frt sites) that will cause a deletion at the gene of interest when a specific recombinase (such as CRE, FLP) is induced. Cre or Flp recombinases can be induced with chemical treatments or heat-shock treatments, or be restricted to a specific subset of tissues. Another technique that can be used is TILLING. This method combines standard, efficient chemical mutagenesis using a mutagen such as ethyl methanesulfonate (EMS) with a sensitive DNA-screening technique that identifies point mutations in a target gene. In the field of virology, reverse-genetics techniques can be used to recover full-length infectious viruses with desired mutations or insertions in the viral genomes or in specific virus genes. Technologies that allow these manipulations include circular polymerase extension reaction (CPER), which was first used to generate infectious cDNA for Kunjin virus, a close relative of West Nile virus.
CPER has also been successfully utilised to generate a range of positive-sense RNA viruses, such as SARS-CoV-2, the causative agent of COVID-19. Gene silencing The discovery of gene silencing using double-stranded RNA, also known as RNA interference (RNAi), and the development of gene knockdown using Morpholino oligos have made disrupting gene expression an accessible technique for many more investigators. This method is often referred to as a gene knockdown since the effects of these reagents are generally temporary, in contrast to gene knockouts, which are permanent. RNAi creates a specific knockout effect without actually mutating the DNA of interest. In C. elegans, RNAi has been used to systematically interfere with the expression of most genes in the genome. RNAi acts by directing cellular systems to degrade target messenger RNA (mRNA). RNA interference, specifically gene silencing, has become a useful tool to silence the expression of genes and to identify and analyze their loss-of-function phenotypes. When mutations occur in alleles, the function the allele encodes is also mutated and lost; this is generally called a loss-of-function mutation. The ability to analyze the loss-of-function phenotype allows analysis of gene function when there is no access to mutant alleles. While RNA interference relies on cellular components for efficacy (e.g., the Dicer proteins and the RISC complex), a simple alternative for gene knockdown is Morpholino antisense oligos. Morpholinos bind and block access to the target mRNA without requiring the activity of cellular proteins and without necessarily accelerating mRNA degradation. Morpholinos are effective in systems ranging in complexity from cell-free translation in a test tube to in vivo studies in large animal models. Interference using transgenes A molecular genetic approach is the creation of transgenic organisms that overexpress a normal gene of interest. The resulting phenotype may reflect the normal function of the gene. Alternatively, it is possible to overexpress mutant forms of a gene that interfere with the normal (wildtype) gene's function. For example, overexpression of a mutant gene may result in high levels of a non-functional protein, resulting in a dominant negative interaction with the wildtype protein. In this case the mutant version will outcompete the wildtype protein for its partners, resulting in a mutant phenotype. Other mutant forms can result in a protein that is abnormally regulated and constitutively active ("on" all the time). This might be due to removing a regulatory domain or mutating a specific amino acid residue that is reversibly modified (by phosphorylation, methylation, or ubiquitination). Either change is critical for modulating protein function and often results in informative phenotypes. Vaccine synthesis Reverse genetics plays a large role in vaccine synthesis. Vaccines can be created by engineering novel genotypes of infectious viral strains which diminish their pathogenic potency enough to facilitate immunity in a host. The reverse genetics approach to vaccine synthesis utilizes known viral genetic sequences to create a desired phenotype: a virus with both a weakened pathological potency and a similarity to the currently circulating virus strain. Reverse genetics provides a convenient alternative to the traditional method of creating inactivated vaccines, which use viruses that have been killed by heat or chemical methods.
Vaccines created through reverse genetics methods are known as attenuated vaccines, named because they contain weakened (attenuated) live viruses. Attenuated vaccines are created by combining genes from a novel or current virus strain with previously attenuated viruses of the same species. Attenuated viruses are created by propagating a live virus under novel conditions, such as in a chicken egg. This produces a viral strain that is still live but not pathogenic to humans, as these viruses are rendered defective in that they cannot replicate their genome enough to propagate and sufficiently infect a host. However, the viral genes are still expressed in the host's cells through a single replication cycle, allowing for the development of immunity. Influenza vaccine A common way to create a vaccine using reverse genetic techniques is to utilize plasmids to synthesize attenuated viruses. This technique is most commonly used in the yearly production of influenza vaccines, where an eight-plasmid system can rapidly produce an effective vaccine. The entire genome of the influenza A virus consists of eight RNA segments, so the combination of six attenuated viral cDNA plasmids with two wild-type plasmids allows an attenuated vaccine strain to be constructed. For the development of influenza vaccines, the fourth and sixth RNA segments, encoding the hemagglutinin (HA) and neuraminidase (NA) proteins respectively, are taken from the circulating virus, while the other six segments are derived from a previously attenuated master strain. The HA and NA proteins exhibit high antigenic variation, and therefore are taken from the current strain for which the vaccine is being produced, to create a well-matched vaccine. The plasmid used in this eight-plasmid system contains three major components that allow for vaccine development. Firstly, the plasmid contains restriction sites that enable the incorporation of influenza genes into the plasmid. Secondly, the plasmid contains an antibiotic resistance gene, allowing selection of only those plasmids containing the correct gene. Lastly, the plasmid contains two promoters, the human Pol I and Pol II promoters, which transcribe genes in opposite directions. cDNA sequences of viral RNA are synthesized from attenuated master strains by using RT-PCR. This cDNA can then be inserted between an RNA polymerase I (Pol I) promoter and terminator sequence through restriction enzyme digestion. The cDNA and Pol I sequence is then, in turn, surrounded by an RNA polymerase II (Pol II) promoter and a polyadenylation site. This entire sequence is then inserted into a plasmid. Six plasmids derived from attenuated master strain cDNA are cotransfected into a target cell, often a chicken egg, alongside two plasmids of the currently circulating wild-type influenza strain. Inside the target cell, the two "stacked" Pol I and Pol II enzymes transcribe the viral cDNA to synthesize both negative-sense viral RNA and positive-sense mRNA, effectively creating an attenuated virus. The result is a defective vaccine strain that is similar to the current virus strain, allowing a host to build immunity. This synthesized vaccine strain can then be used as a seed virus to create further vaccines. Advantages and disadvantages Vaccines engineered from reverse genetics carry several advantages over traditional vaccine designs. Most notable is speed of production. Due to the high antigenic variation in the HA and NA glycoproteins, a reverse-genetic approach allows for the necessary genotype (i.e.
one containing HA and NA proteins taken from currently circulating virus strains) to be formulated rapidly. Additionally, since the final product of reverse-genetics attenuated vaccine production is a live virus, a higher immunogenicity is exhibited than in traditional inactivated vaccines, which must be killed using chemical procedures before being transferred as a vaccine. However, due to the live nature of attenuated viruses, complications may arise in immunodeficient patients. There is also the possibility that a mutation in the virus could cause the vaccine to revert to a live, unattenuated virus. See also Forward genetics References Further reading External links Reassortment vs. Reverse Genetics Reverse Genetics: Building Flu Vaccines Piece by Piece Genetic engineering Molecular genetics
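To make the "6 + 2" plasmid arithmetic described above concrete, here is a minimal Python sketch that assembles a vaccine seed genotype. The eight segment names are the standard influenza A genome segments, but the strain labels are illustrative placeholders rather than real vaccine strains.

```python
# Sketch of the "6 + 2" reassortment used in plasmid-based influenza vaccine
# production: six backbone segments come from a previously attenuated master
# strain, while the two surface-antigen segments (HA, NA) come from the
# currently circulating strain.
MASTER_STRAIN = "attenuated master donor strain"  # illustrative label
FIELD_STRAIN = "currently circulating strain"     # illustrative label

SEGMENTS = ["PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS"]  # influenza A
ANTIGEN_SEGMENTS = {"HA", "NA"}  # highly variable antigens, taken from the field strain

def vaccine_seed_genotype():
    """Map each of the eight genome segments to its source strain."""
    return {seg: (FIELD_STRAIN if seg in ANTIGEN_SEGMENTS else MASTER_STRAIN)
            for seg in SEGMENTS}

if __name__ == "__main__":
    for segment, source in vaccine_seed_genotype().items():
        print(f"{segment:>3}: {source}")
```

Running the sketch prints HA and NA sourced from the field strain and the remaining six segments from the master donor, mirroring the cotransfection of six attenuated-strain plasmids with two wild-type plasmids.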
Reverse genetics
Chemistry,Engineering,Biology
2,277
146,153
https://en.wikipedia.org/wiki/Overburden%20pressure
Pressure is the magnitude of force applied per unit area. Overburden pressure is a geology term that denotes the pressure caused by the weight of the overlying layers of material at a specific depth under the earth's surface. Overburden pressure is also called lithostatic pressure, or vertical stress. In a stratigraphic layer that is in hydrostatic equilibrium, the overburden pressure at a depth z, assuming the magnitude of the gravity acceleration is approximately constant, is given by: P(z) = P_0 + g \int_0^z \rho(z)\, dz where: z is the depth in meters, P(z) is the overburden pressure at depth z, P_0 is the pressure at the surface, \rho(z) is the density of the material above the depth z, and g is the gravity acceleration in m/s². In deep-earth geophysics/geodynamics, gravitational acceleration varies significantly over depth and should not be assumed to be constant; in that case g should be placed inside the integral. Some sections of stratigraphic layers can be sealed or isolated. These changes create areas where there is no static equilibrium. A location in the layer is said to be underpressured when the local pressure is less than the hydrostatic pressure, and overpressured when the local pressure is greater than the hydrostatic pressure. See also Effective stress Lateral earth pressure Pore water pressure Sedimentary rock References Geophysics Soil mechanics
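As a worked example of the integral above, the following Python sketch accumulates lithostatic pressure layer by layer through an assumed density profile. The layer thicknesses and densities are invented for illustration and are not values from the article.

```python
# Sketch: overburden (lithostatic) pressure from a layered density profile,
# assuming constant gravitational acceleration (valid for shallow depths).
G = 9.81               # gravitational acceleration, m/s^2
P_SURFACE = 101_325.0  # assumed surface (atmospheric) pressure, Pa

# Hypothetical stratigraphy: (thickness in m, density in kg/m^3)
LAYERS = [
    (50.0, 1900.0),   # unconsolidated sediment
    (200.0, 2300.0),  # sandstone
    (500.0, 2700.0),  # basement rock
]

def overburden_pressure(depth, layers=LAYERS, p0=P_SURFACE, g=G):
    """Evaluate P(z) = P0 + g * sum(rho_i * dz_i) down to the given depth."""
    pressure, remaining = p0, depth
    for thickness, density in layers:
        dz = min(thickness, remaining)
        pressure += g * density * dz
        remaining -= dz
        if remaining <= 0:
            break
    return pressure  # in pascals

print(f"Overburden pressure at 300 m: {overburden_pressure(300.0) / 1e6:.2f} MPa")
```

Because the density profile is piecewise constant here, the integral reduces to a sum over layers; a smoothly varying density (or depth-dependent g) would call for numerical quadrature instead.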
Overburden pressure
Physics,Engineering
259
143,794
https://en.wikipedia.org/wiki/Abraham%20ibn%20Ezra
Abraham ben Meir Ibn Ezra (Ibrāhim al-Mājid ibn Ezra; also known as Abenezra or simply Ibn Ezra, 1089 / 1092 – 27 January 1164 / 23 January 1167) was one of the most distinguished Jewish biblical commentators and philosophers of the Middle Ages. He was born in Tudela, Taifa of Zaragoza (present-day Navarre). Biography Abraham Ibn Ezra was born in Tudela, one of the oldest and most important Jewish communities in Navarre. At the time, the town was under the rule of the emirs of the Muslim Taifa of Zaragoza. However, when he later moved to Córdoba, he claimed it was his birthplace. Ultimately, most scholars agree that his place of birth was Tudela. Little is known of Ibn Ezra's family from outside sources; however, his own writings mention a wife and five children. Four of the children are believed to have died early; the last-born, Isaac, became an influential poet and converted to Islam in 1140. His son's conversion was deeply troubling for Ibn Ezra, leading him to pen many poems reacting to the event for years afterward. Ibn Ezra was a close friend of Judah Halevi, who was approximately 14 years older. When Ibn Ezra moved to Córdoba as a young man, Halevi followed him. This trend continued when the two began their lives as wanderers in 1137. Halevi died in 1141, but Ibn Ezra continued travelling for three decades, reaching as far as Baghdad. During his travels, he composed secular poetry about the lands he traveled through, as well as the rationalist Torah commentaries for which he would be best remembered. He appears to have been unrelated to the contemporary scholar Moses ibn Ezra. Works In Spain, Ibn Ezra had already gained the reputation of a distinguished poet and thinker. However, apart from his poems, the vast majority of his work was composed after 1140. Written in Hebrew, as opposed to earlier thinkers' use of Judeo-Arabic, these works covering Hebrew grammar, Biblical exegesis, and scientific theory drew on the work of the Arab scholars he had studied in Spain. Beginning many of his writings in Italy, Ibn Ezra also worked extensively to translate the works of the grammarian and biblical exegete Judah ben David Hayyuj from their original Judeo-Arabic into Hebrew. Published as early as 1140, these translations became some of the first expositions of Hebrew grammar to be written in Hebrew. While publishing translations, Ibn Ezra also began to publish his own biblical commentaries. Making use of many of the techniques outlined by Hayyuj, Ibn Ezra published his first biblical commentary, a commentary on Kohelet, in 1140. He continued to publish such commentaries, mostly on works from Ketuvim and Nevi'im, throughout his journey, though he also managed to publish a short commentary on the entire Pentateuch while living in Lucca in 1145. This short commentary was later expanded into longer portions, beginning in 1155 with the publication of his expanded commentary on Genesis. Besides his commentaries on the Torah, Ibn Ezra also published a multitude of works on science in Hebrew. In doing so, he continued his mission of spreading the knowledge he had gained in Spain to the Jews throughout the areas he visited and lived in. This can be seen particularly in the works he published while living in France, many of which relate to astrology and the use of the astrolabe.
Influence on biblical criticism and philosophy of religion In his commentary, Ibn Ezra adhered to the literal sense of the texts, avoiding Rabbinic allegory and Kabbalistic interpretation. He exercised an independent criticism that, according to some writers, exhibits a marked tendency toward rationalism. In addition, he sharply criticized those who blended the plain, logical explanation of the text with Midrash, maintaining that such interpretations were never intended to supplant the plain understanding. Indeed, Ibn Ezra is claimed by proponents of higher biblical criticism of the Torah as one of its earliest pioneers. Baruch Spinoza, in concluding that Moses did not author the Torah and that the Torah and other protocanonical books were written or redacted by somebody else, cites Ibn Ezra's commentary on Deuteronomy. In his commentary, Ibn Ezra examines Deuteronomy 1:1 and expresses concern over the unusual phrasing that describes Moses as being "beyond the Jordan." This wording suggests that the writer was situated in the land of Canaan, which is located west of the Jordan River, even though Moses and the Children of Israel had not yet crossed the Jordan at that point in the Biblical narrative. Relating this inconsistency to others in the Torah, Ibn Ezra stated, "If you can grasp the mystery behind the following problematic passages: 1) The final twelve verses of this book [i.e., Deuteronomy 34:1–12, describing the death of Moses], 2) 'Moshe wrote [this song on the same day, and taught it to the children of Israel]' [Deuteronomy 31:22]; 3) 'At that time, the Canaanites dwelt in the land' [Genesis 12:6]; 4) '... In the mountain of God, He will appear' [Genesis 22:14]; 5) 'behold, his [Og king of Bashan] bed is a bed of iron [is it not in Rabbah of the children of Ammon?]' you will understand the truth." Spinoza concluded that Ibn Ezra's reference to "the truth", and other such references scattered throughout Ibn Ezra's commentary in reference to seemingly anachronistic verses, was "a clear indication that it was not Moses who wrote the Pentateuch but someone else who lived long after him, and that it was a different book that Moses wrote". Spinoza and later scholars were thus able to expand on several of Ibn Ezra's references as a means of providing stronger evidence for non-Mosaic authorship. On the other hand, Orthodox writers have stated that Ibn Ezra's commentary can be interpreted as consistent with Jewish tradition, which holds that the Torah was divinely dictated to Moses. Ibn Ezra is also among the first scholars known to have published a text about the division of the Book of Isaiah into at least two distinct parts. In his commentary on Isaiah, he remarked that chapters 1–39 deal with a different historical period (the second half of the 8th century BCE) than chapters 40–66 (later than the last third of the 6th century BCE). This division of the book into First Isaiah and Deutero-Isaiah is nowadays accepted by all but the most conservative Jews and Christians. Ibn Ezra's commentaries, especially some of the longer excursuses, contain numerous contributions to the philosophy of religion. One work in particular belongs to this province: Yesod Mora ("Foundation of Awe"), on the division of and the reasons for the Biblical commandments, which he wrote in 1158 for a London friend, Joseph ben Jacob. In his philosophical thought Neoplatonic ideas prevail, and astrology also had a place in his view of the world. He also wrote various works on mathematical and astronomical subjects.
Bibliography Biblical commentaries Sefer ha-Yashar ("Book of the Straight"), the complete commentary on the Torah, which was finished shortly before his death. Hebrew grammar Sefer Moznayim ("Book of Scales") (1140), chiefly an explanation of the terms used in Hebrew grammar; as early as 1148 it was incorporated by Judah Hadassi in his Eshkol ha-Kofer, with no mention of Ibn Ezra. Sefer ha-Yesod, or Yesod Diqduq ("Book of Language Fundamentals") (1143). Sefer Haganah 'al R. Sa'adyah Gaon (1143), a defense of Saadyah Gaon against Adonim's criticisms. Tzakhoot (1145), on linguistic correctness, his best grammatical work, which also contains a brief outline of modern Hebrew meter. Sefer Safah Berurah ("Book of Purified Language") (1146). Smaller works – partly grammatical, partly exegetical Sefat Yeter, in defense of Saadia Gaon against Dunash ben Labrat, whose criticism of Saadia, Ibn Ezra had brought with him from Egypt. Sefer ha-Shem ("Book of the Name"), a work on the names of God. Yesod Mispar, a small monograph on numerals. Iggeret Shabbat (1158), a responsum on Shabbat. Religious philosophy Yesod Mora Vesod Hatorah (1158), on the division of and reasons for the Biblical commandments. Mathematics Sefer ha-Ekhad, on the peculiarities of the numbers 1–9. Sefer ha-Mispar or Yesod Mispar, arithmetic. Luchot, astronomical tables. Sefer ha-'Ibbur, on the calendar. Keli ha-Nechoshet, on the astrolabe. Shalosh She'elot, in answer to three chronological questions of David ben Joseph Narboni. Astrology Ibn Ezra composed his first book on astrology in Italy, before his move to France: Mishpetai ha-Mazzelot ("Judgments of the Zodiacal Signs"), on the general principles of astrology. In seven books written in Béziers in 1147–1148, Ibn Ezra then composed a systematic presentation of astrology, starting with an introduction and a book on general principles, and then five books on particular branches of the subject. The presentation appears to have been planned as an integrated whole, with cross-references throughout, including references to subsequent books in the future tense. Each of the books is known in two versions, so it seems that at some point Ibn Ezra also created a revised edition of the series. Reshit Hokhma ("The Beginning of Wisdom"), an introduction to astrology, perhaps a revision of his earlier book. Sefer ha-Te'amim ("Book of Reasons"), an overview of Arabic astrology, giving explanations for the material in the previous book. Sefer ha-Moladot ("Book of Nativities"), on astrology based on the time and place of birth. Sefer ha-Me'orot ("Book of Luminaries" or "Book of Lights"), on medical astrology. Sefer ha-She'elot ("Book of Interrogations"), on questions about particular events. Sefer ha-Mivharim ("Book of Elections", also known as "Critical Days"), on optimum days for particular activities. Sefer ha-Olam ("Book of the World"), on the fates of countries and wars, and other larger-scale issues. Translations of two works by the astrologer Mashallah ibn Athari: "She'elot" and "Qadrut". Poetry There are a great many other poems by Ibn Ezra, some of them religious and some secular – about friendship or wine, didactic or satirical. Like his friend Yehuda Halevi, he used the Arabic poetic form of the muwashshah. Legacy The crater Abenezra on the Moon was named in honor of Ibn Ezra. Robert Browning's poem "Rabbi ben Ezra", beginning "Grow old along with me/The best is yet to be", is derived from a meditation on Ibn Ezra's life and work that appeared in Browning's 1864 poetry collection Dramatis Personæ.
Burial According to Jewish tradition, Abraham ibn Ezra was buried in Cabul, alongside Judah Halevi. See also Rabbinic literature List of rabbis Jewish views of astrology Jewish commentaries on the Bible Kabbalistic astrology Astrology in Judaism Hebrew astronomy Islamic astrology Abenezra (crater) References Further reading Carmi, T. (ed.), The Penguin Book of Hebrew Verse, Penguin Classics, London, 2006. Charlap, Luba. 2001. Another view of Rabbi Abraham Ibn-Ezra's contribution to medieval Hebrew grammar. Hebrew Studies 42:67–80. Epstein, Meira, "Rabbi Avraham Ibn Ezra" – an article by Meira Epstein, detailing all of Ibn Ezra's extant astrological works. Glick, Thomas F.; Livesey, Steven John; and Wallis, Faith, Medieval Science, Technology, and Medicine: An Encyclopedia, Routledge, 2005. Cf. pp. 247–250. Goodman, Mordechai S. (translator), The Sabbath Epistle of Rabbi Abraham Ibn Ezra ('iggeret hashabbat). Ktav Publishing House, Inc., New Jersey (2009). Halbronn, Jacques, Le monde juif et l'astrologie, Ed Arché, Milan, 1985. Halbronn, Jacques, Le livre des fondements astrologiques, précédé du Commencement de la Sapience des Signes, pref. G. Vajda, Paris, ed. Retz, 1977. Holden, James H., History of Horoscopic Astrology, American Federation of Astrologers, 2006. Cf. pp. 132–135. Jewish Virtual Library, Abraham Ibn Ezra. Johansson, Nadja, Religion and Science in Abraham Ibn Ezra's Sefer Ha-Olam (Including an English Translation of the Hebrew Text). Langermann, Tzvi, "Abraham Ibn Ezra", Stanford Encyclopedia of Philosophy, 2006. Accessed June 21, 2011. Levin, Elizabetha, "Various Times in Abraham Ibn Ezra's Works and their Reflection in Modern Thought", KronoScope, Brill Academic Publishers, 18(2), 2018, pp. 154–170. DOI: 10.1163/15685241-12341414. Levine, Etan (ed.), Abraham ibn Ezra's Commentary to the Pentateuch, Vatican Manuscript Vat. Ebr. 38. Jerusalem: Makor, 1974. Sánchez-Rubio García, Fernando (2016). El segundo comentario de Abraham Ibn Ezra al libro del Cantar de los Cantares. Edición crítica, traducción, notas y estudio introductorio. Tesis doctoral (UCM). Sela, Shlomo, "Abraham Ibn Ezra's Scientific Corpus: Basic Constituents and General Characterization", in Arabic Sciences and Philosophy (2001), 11:1:91–149, Cambridge University Press. Sela, Shlomo, Abraham Ibn Ezra and the Rise of Medieval Hebrew Science, Brill, 2003. Siegel, Eliezer, Rabbi Abraham Ibn Ezra's Commentary to the Torah. skyscript.co.uk, 120 Aphorisms for Astrologers by Abraham ibn Ezra. skyscript.co.uk, Skyscript: The Life and Work of Abraham Ibn Ezra. Smithuis, Renate, "Abraham Ibn Ezra's Astrological Works in Hebrew and Latin: New Discoveries and Exhaustive Listing", in Aleph (Aleph: Historical Studies in Science and Judaism), 2006, No. 6, pages 239–338. Wacks, David. "The Poet, the Rabbi, and the Song: Abraham ibn Ezra and the Song of Songs". Wine, Women, and Song: Hebrew and Arabic Literature in Medieval Iberia. Eds. Michelle M. Hamilton, Sarah J. Portnoy and David A. Wacks. Newark, Del.: Juan de la Cuesta Hispanic Monographs, 2004. 47–58. Walfish, Barry, "The Two Commentaries of Abraham Ibn Ezra on the Book of Esther", The Jewish Quarterly Review, New Series, Vol. 79, No. 4 (April 1989), pp.
323–343, University of Pennsylvania Press External links Encyclopaedia Judaica (2007) entry on "Ibn Ezra, Abraham Ben Meir" with extensive bibliography by Uriel Simon and Raphael Jospe Commentaries on the Torah at Sefaria Poems in Hebrew at Ben Yehuda Project 1080s births 1167 deaths People from Tudela, Navarre Bible commentators Jewish poets 12th-century rabbis in al-Andalus Medieval Hebraists Medieval Jewish astrologers Astrologers from Al-Andalus Medieval Navarrese Jews Philosophers of Judaism Sephardi rabbis Grammarians of Hebrew Jewish astronomers Jewish liturgical poets Medieval Jewish philosophers
Abraham ibn Ezra
Astronomy
3,410
52,803,768
https://en.wikipedia.org/wiki/D%20Velorum
The Bayer designations d Velorum and D Velorum are distinct. Due to technical limitations, both designations link here. For the star d Velorum, see HD 74772 (HR 3477). For the star D Velorum, see HD 74753 (HR 3476). See also δ Velorum (Delta Velorum) Vela (constellation)
D Velorum
Astronomy
84
66,400,165
https://en.wikipedia.org/wiki/1370%20aluminium%20alloy
1370 aluminium alloy is primarily aluminium (≥99.7%) alloyed with small amounts of boron, chromium, copper, gallium, iron, magnesium, manganese, silicon, vanadium and zinc. Properties of aluminium alloy References Aluminium alloys
1370 aluminium alloy
Chemistry
57
33,096,981
https://en.wikipedia.org/wiki/South%20Boston%20CSO%20Storage%20Tunnel
The South Boston CSO Storage Tunnel, also known as the North Dorchester Bay CSO Storage Tunnel, is a large underground facility designed to reduce untreated sewage discharges into Boston Harbor from the Massachusetts Water Resources Authority combined sewer and stormwater system. It was opened on July 23, 2011, and is part of the federally mandated Boston Harbor Cleanup project. CSO stands for Combined Sewer Overflow. The main part of the facility is a large-diameter tunnel running along the harbor front; it starts at an Odor Control Building, continues along the harbor front past its midpoint, and ends at a pump station. Combined sewers are problematic because during heavy storms, they are forced by a high volume of rainwater from storm drains to carry untreated sanitary sewer output into Boston Harbor, including dangerous amounts of human waste. In addition to the tunnel project, the MWRA is undertaking costly sewer separation in parts of South Boston near the Reserved Channel, and reconfiguring various drains and outflows. The tunnel provides a buffer that allows some combined sewers to remain in service. It has sufficient buffer capacity to hold combined sewage and rainwater during most storms, helping to eliminate the Combined Sewer Overflow events that polluted nearby beaches on average 20 times per year. After a storm is over, the tunnel is "dewatered" back into the network at a rate the Deer Island Waste Water Treatment Plant can handle. References External links Massachusetts Water Resources Authority Sewage treatment plants in Massachusetts Buildings and structures in Boston 2011 establishments in Massachusetts
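The tunnel's buffering role can be illustrated with a toy mass-balance simulation in Python. The storage volume, dewatering rate and storm inflows below are invented for illustration and are not the facility's actual specifications.

```python
# Toy sketch: a CSO storage tunnel as a buffer between storm inflow and the
# fixed rate at which the treatment plant can "dewater" it afterwards.
CAPACITY = 70_000.0     # storage volume, m^3 (illustrative, not the real spec)
DEWATER_RATE = 1_500.0  # treatment-plant draw-down, m^3 per hour (illustrative)

def simulate(inflows_m3_per_h):
    """Track stored volume hour by hour; overflow occurs only if the tunnel fills."""
    stored, overflow = 0.0, 0.0
    for inflow in inflows_m3_per_h:
        stored = max(stored + inflow - DEWATER_RATE, 0.0)
        if stored > CAPACITY:            # tunnel full -> untreated discharge
            overflow += stored - CAPACITY
            stored = CAPACITY
    return stored, overflow

# A 6-hour storm followed by dry hours during which the tunnel is dewatered.
storm = [12_000.0] * 6 + [0.0] * 48
stored, overflow = simulate(storm)
print(f"left in tunnel: {stored:.0f} m^3, untreated overflow: {overflow:.0f} m^3")
```

Under these assumed numbers the storm fills most of the tunnel but causes no untreated discharge, and the stored volume drains back to zero during the dry period, which is exactly the behavior the article describes for most storms.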
South Boston CSO Storage Tunnel
Engineering
311
161,293
https://en.wikipedia.org/wiki/Amphoterism
In chemistry, an amphoteric compound is a molecule or ion that can react both as an acid and as a base. What exactly this can mean depends on which definitions of acids and bases are being used. Etymology and terminology Amphoteric is derived from a Greek word meaning "both". Related words in acid-base chemistry are amphichromatic and amphichroic, both describing substances such as acid-base indicators which give one colour on reaction with an acid and another colour on reaction with a base. Amphiprotism Amphiprotism is exhibited by compounds with both Brønsted acidic and basic properties. A prime example is H2O. Amphiprotic molecules can either donate or accept a proton (H+). Amino acids (and proteins) are amphiprotic molecules because of their amine (-NH2) and carboxylic acid (-COOH) groups. Ampholytes Ampholytes are zwitterions: molecules or ions that contain both acidic and basic functional groups. Amino acids have both a basic amine group (-NH2) and an acidic carboxylic acid group (-COOH). Often such species exist as several structures in chemical equilibrium: H2N-RCH-CO2H + H2O <=> H2N-RCH-COO- + H3O+ <=> H3N+-RCH-COOH + OH- <=> H3N+-RCH-COO- + H2O In approximately neutral aqueous solution (pH ≅ 7), the basic amino group is mostly protonated and the carboxylic acid is mostly deprotonated, so that the predominant species is the zwitterion H3N+-RCH-COO-. The pH at which the average charge is zero is known as the molecule's isoelectric point. Ampholytes are used to establish a stable pH gradient for use in isoelectric focusing. Metal oxides which react with both acids and bases to produce salts and water are known as amphoteric oxides. Many metals (such as zinc, tin, lead, aluminium, and beryllium) form amphoteric oxides or hydroxides. Aluminium oxide (Al2O3) is an example of an amphoteric oxide. Amphoterism depends on the oxidation state of the oxide. Amphoteric oxides include lead(II) oxide and zinc oxide, among many others. Amphiprotic molecules According to the Brønsted-Lowry theory of acids and bases, acids are proton donors and bases are proton acceptors. An amphiprotic molecule (or ion) can either donate or accept a proton, thus acting either as an acid or a base. Water, amino acids, the hydrogencarbonate (bicarbonate) ion HCO3-, the dihydrogen phosphate ion H2PO4-, and the hydrogensulfate (bisulfate) ion HSO4- are common examples of amphiprotic species. Since they can donate a proton, all amphiprotic substances contain a hydrogen atom. Also, since they can act like an acid or a base, they are amphoteric. Examples The water molecule is amphoteric in aqueous solution. It can either gain a proton to form a hydronium ion H3O+, or else lose a proton to form a hydroxide ion OH-. Another possibility is the molecular autoionization reaction between two water molecules, in which one water molecule acts as an acid and another as a base. H2O + H2O <=> H3O+ + OH- The bicarbonate ion, HCO3-, is amphoteric as it can act as either an acid or a base: As an acid, losing a proton: HCO3- + OH- <=> CO3^2- + H2O As a base, accepting a proton: HCO3- + H+ <=> H2CO3 Note: in dilute aqueous solution the formation of the hydronium ion, H3O+, is effectively complete, so that hydration of the proton can be ignored in relation to the equilibria. Other examples of inorganic polyprotic acids include anions of sulfuric acid, phosphoric acid and hydrogen sulfide that have lost one or more protons. In organic chemistry and biochemistry, important examples include amino acids and derivatives of citric acid. Although an amphiprotic species must be amphoteric, the converse is not true.
For example, a metal oxide such as zinc oxide, ZnO, contains no hydrogen and so cannot donate a proton. Nevertheless, it can act as an acid by reacting with the hydroxide ion, a base: ZnO + 2 OH- + H2O -> [Zn(OH)4]^2- Zinc oxide can also act as a base, reacting with acids: ZnO + 2 H+ -> Zn^2+ + H2O Oxides Zinc oxide (ZnO) reacts both with acids and with bases: ZnO + \overset{acid}{H2SO4} -> ZnSO4 + H2O ZnO + \overset{base}{2 NaOH} + H2O -> Na2[Zn(OH)4] This reactivity can be used to separate different cations, for instance zinc(II), which dissolves in base, from manganese(II), which does not dissolve in base. Lead oxide (PbO): PbO + \overset{acid}{2 HCl} -> PbCl2 + H2O PbO + \overset{base}{2 NaOH} + H2O -> Na2[Pb(OH)4] Lead dioxide (PbO2): PbO2 + \overset{acid}{4 HCl} -> PbCl4 + 2H2O PbO2 + \overset{base}{2 NaOH} + 2H2O -> Na2[Pb(OH)6] Aluminium oxide (Al2O3): Al2O3 + \overset{acid}{6 HCl} -> 2 AlCl3 + 3 H2O Al2O3 + \overset{base}{2 NaOH} + 3 H2O -> 2 Na[Al(OH)4] (hydrated sodium aluminate) Stannous oxide (SnO): SnO + \overset{acid}{2 HCl} <=> SnCl2 + H2O SnO + \overset{base}{4 NaOH} + H2O <=> Na4[Sn(OH)6] Stannic oxide (SnO2): SnO2 + \overset{acid}{4 HCl} <=> SnCl4 + 2H2O SnO2 + \overset{base}{4 NaOH} + 2H2O <=> Na4[Sn(OH)8] Vanadium dioxide (VO2): VO2 + \overset{acid}{2 HCl} -> VOCl2 + H2O 4 VO2 + \overset{base}{2 NaOH} -> Na2V4O9 + H2O Some other elements which form amphoteric oxides are gallium, indium, scandium, titanium, zirconium, chromium, iron, cobalt, copper, silver, gold, germanium, antimony, bismuth, beryllium, and tellurium. Hydroxides Aluminium hydroxide is also amphoteric: Al(OH)3 + \overset{acid}{3 HCl} -> AlCl3 + 3 H2O Al(OH)3 + \overset{base}{NaOH} -> Na[Al(OH)4] Beryllium hydroxide: Be(OH)2 + \overset{acid}{2 HCl} -> BeCl2 + 2 H2O Be(OH)2 + \overset{base}{2 NaOH} -> Na2[Be(OH)4] Chromium hydroxide: Cr(OH)3 + \overset{acid}{3 HCl} -> CrCl3 + 3H2O Cr(OH)3 + \overset{base}{NaOH} -> Na[Cr(OH)4] See also Ate complex Isoelectric point Zwitterion References Acid–base chemistry Chemical properties General chemistry
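As a worked illustration of the ampholyte equilibria described above, this Python sketch computes the mole fraction of each protonation state of a generic diprotic amino acid as a function of pH. The pKa values are assumed, textbook-style numbers rather than values from this article.

```python
# Sketch: speciation of a diprotic ampholyte, cation H2A+ <=> zwitterion HA
# <=> anion A-, from the standard acid-dissociation equilibria.
PKA1 = 2.3  # carboxylic acid group (assumed, typical amino-acid value)
PKA2 = 9.6  # ammonium group (assumed, typical amino-acid value)

def fractions(ph):
    """Return (cation, zwitterion, anion) mole fractions at a given pH."""
    k1, k2, h = 10.0 ** -PKA1, 10.0 ** -PKA2, 10.0 ** -ph
    denom = h * h + h * k1 + k1 * k2
    return (h * h / denom, h * k1 / denom, k1 * k2 / denom)

# For this case the isoelectric point (zero average charge) is (pKa1 + pKa2)/2.
isoelectric_point = (PKA1 + PKA2) / 2
for ph in (1.0, isoelectric_point, 12.0):
    cation, zwitterion, anion = fractions(ph)
    print(f"pH {ph:5.2f}: cation {cation:.3f}, "
          f"zwitterion {zwitterion:.3f}, anion {anion:.3f}")
```

At low pH the cation dominates, at the isoelectric point the zwitterion accounts for essentially all of the species, and at high pH the anion takes over, matching the qualitative description in the article.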
Amphoterism
Chemistry
1,714
46,964,197
https://en.wikipedia.org/wiki/Taxus%20%C3%97%20media
Taxus × media, also referred to as the Hybrid yew, Anglo-Japanese yew, or Anglojap yew, is a conifer (more specifically, a yew) created by the hybridization of the English yew Taxus baccata and the Japanese yew Taxus cuspidata. This hybridization is thought to have been performed by the Massachusetts-based horticulturalist T.D. Hatfield in the early 1900s. Taxus × media is grown in a large number of shrubby, often wide-spreading, cultivars under a variety of names. Description Like most yew species, T. × media prefers well-drained and well-watered soils, but has some degree of drought tolerance and in fact may die in conditions of excessive precipitation if the soil beneath the plant is not sufficiently well-drained. Taxus × media is among the smallest extant species in the genus Taxus and (depending upon cultivar) may not even grow to the size of what one would consider a typical tree. Immature shrubs are very small and, over a span of ten to twenty years, reach only modest heights and diameters, depending on the cultivar. Furthermore, T. × media is known to grow rather slowly and is not injured by frequent pruning, making this hybrid very desirable as a hedge in low-maintenance landscaping and also a good candidate for bonsai. Toxicity Taxus × media also shares with its fellow yew trees a high level of taxine in its branches, needles, and seeds. Taxine is toxic to the mammalian heart. Varieties (cultivars) References Medicinal plants Plant nothospecies Trees of humid continental climate Ornamental trees Least concern plants Plants used in bonsai
Taxus × media
Biology
354
2,629,669
https://en.wikipedia.org/wiki/Robot-assisted%20surgery
Robot-assisted surgery or robotic surgery is any type of surgical procedure performed using robotic systems. Robotically assisted surgery was developed to try to overcome the limitations of pre-existing minimally-invasive surgical procedures and to enhance the capabilities of surgeons performing open surgery. In the case of robotically assisted minimally-invasive surgery, instead of the surgeon directly moving the instruments, the surgeon uses one of two methods to perform dissection, hemostasis and resection: a direct telemanipulator or computer control. A telemanipulator (e.g. the da Vinci Surgical System) is a system of remotely controlled manipulators that allows the surgeon to operate in real time under stereoscopic vision from a control console separate from the operating table. The robot is docked next to the patient, and robotic arms carry out endoscopy-like maneuvers via end-effectors inserted through specially designed trocars. A surgical assistant and a scrub nurse are often still needed scrubbed at the tableside to help switch effector instruments or provide additional suction or temporary tissue retraction using endoscopic grasping instruments. In computer-controlled systems, the surgeon uses a computer system to relay control data and direct the robotic arms and their end-effectors, though these systems can also still use telemanipulators for their input. One advantage of the computerized method is that the surgeon does not have to be present on campus to perform the procedure, leading to the possibility of remote surgery and even AI-assisted or automated procedures. Robotic surgery has been criticized for its expense, with average costs in 2007 ranging from $5,607 to $45,914 per patient. The technique has not been approved for cancer surgery as of 2019, as its safety and usefulness are unclear. History The concept of using standard hand grips to control manipulators and cameras of various sizes down to sub-miniature was described in the Robert Heinlein story 'Waldo' in August 1942, which also mentioned brain surgery. The first robot to assist in surgery was the Arthrobot, which was developed and used for the first time in Vancouver in 1983. This robot assisted by manipulating and positioning the patient's leg on voice command. Intimately involved were biomedical engineer James McEwen, Geof Auchinleck, a UBC engineering physics grad, and Dr. Brian Day, as well as a team of engineering students. The robot was used in an orthopaedic surgical procedure on 12 March 1983, at the UBC Hospital in Vancouver. The next great step came in 1985, in brain biopsy under CT guidance with the assistance of a robotic arm, the PUMA 560. Over 60 arthroscopic surgical procedures were performed in the first 12 months, and a 1985 National Geographic video on industrial robots, The Robotics Revolution, featured the device. Other related robotic devices developed at the same time included a surgical scrub nurse robot, which handed operative instruments on voice command, and a medical laboratory robotic arm. A YouTube video entitled Arthrobot – the world's first surgical robot illustrates some of these in operation. In 1985 a robot, the Unimation Puma 200, was used to orient a needle for a brain biopsy while under CT guidance during a neurological procedure. In the late 1980s, Imperial College in London developed PROBOT, which was then used to perform prostatic surgery. The advantages of this robot were its small size, accuracy and lack of fatigue for the surgeon.
In the 1990s, computer-controlled surgical devices began to emerge, enabling greater precision and control in surgical procedures. One of the most significant advancements in this period was the da Vinci Surgical System, which was approved by the FDA for use in surgical procedures in 2000 (Intuitive Surgical, 2021). The da Vinci system uses robotic arms to manipulate surgical instruments, allowing surgeons to perform complex procedures with greater accuracy and control. In 1992, ROBODOC was introduced, revolutionizing orthopedic surgery by being able to assist with hip replacement surgeries; it was the first such surgical robot to be approved by the FDA, in 2008. The ROBODOC from Integrated Surgical Systems (working closely with IBM) could mill out precise fittings in the femur for hip replacement. The purpose of the ROBODOC was to replace the previous method of carving out a femur for an implant, the use of a mallet and broach/rasp. Further development of robotic systems was carried out by SRI International and Intuitive Surgical with the introduction of the da Vinci Surgical System, and by Computer Motion with the AESOP and the ZEUS robotic surgical systems. The first robotic surgery took place at The Ohio State University Medical Center in Columbus, Ohio under the direction of Robert E. Michler. AESOP was a breakthrough in robotic surgery when introduced in 1994, as it was the first laparoscopic camera holder to be approved by the FDA. NASA initially funded Computer Motion, the company that produced AESOP, because of its goal of creating a robotic arm for use in space, but the project ended up producing a camera holder used in laparoscopic procedures. Voice control was added in 1996 with the AESOP 2000, and seven degrees of freedom, to mimic a human hand, were added in 1998 with the AESOP 3000. ZEUS was introduced commercially in 1998, and started the idea of telerobotics or telepresence surgery, in which the surgeon operates on the patient from a console at a distance from the robot. ZEUS was first used in 1997, during a gynecological surgery in Cleveland, Ohio, to reconnect Fallopian tubes; it was later used for a beating-heart coronary artery bypass graft in October 1999, and for the Lindbergh Operation, a cholecystectomy performed remotely in September 2001. In 2003, ZEUS made its most prominent mark in cardiac surgery after successfully harvesting the left internal mammary arteries in 19 patients, all of whom had very successful clinical outcomes. The original telesurgery robotic system that the da Vinci was based on was developed at Stanford Research Institute International in Menlo Park with grant support from DARPA and NASA. A demonstration of an open bowel anastomosis was given to the Association of Military Surgeons of the US. Although the telesurgical robot was originally intended to facilitate remotely performed surgery on the battlefield, to reduce casualties, and to be used in other remote environments, it turned out to be more useful for minimally invasive on-site surgery. The patents for the early prototype were sold to Intuitive Surgical in Mountain View, California. The da Vinci senses the surgeon's hand movements and translates them electronically into scaled-down micro-movements to manipulate the tiny proprietary instruments. It also detects and filters out any tremors in the surgeon's hand movements, so that they are not duplicated robotically. The camera used in the system provides a true stereoscopic picture transmitted to the surgeon's console.
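The motion scaling and tremor filtering just described can be sketched in a few lines of Python. This is a generic illustration of the signal-processing idea (a scale factor plus a simple low-pass filter), not Intuitive Surgical's actual, proprietary algorithm; the scaling ratio and filter constant are assumed values.

```python
import math

# Sketch: scaled-down, tremor-filtered teleoperation of one instrument axis.
# An exponential moving average acts as a first-order low-pass filter that
# suppresses high-frequency hand tremor; a scale factor then converts hand
# motion at the console into micro-movements at the instrument tip.
SCALE = 0.2   # 5:1 motion scaling: a 10 mm hand motion -> 2 mm at the tip
ALPHA = 0.1   # smoothing constant (smaller = stronger tremor rejection)

class Telemanipulator:
    def __init__(self):
        self.filtered = 0.0  # filtered hand position along one axis, mm

    def step(self, hand_position_mm: float) -> float:
        """Map one raw hand-position sample to a commanded tip position."""
        self.filtered += ALPHA * (hand_position_mm - self.filtered)
        return SCALE * self.filtered

tele = Telemanipulator()
tip = 0.0
for t in range(200):
    reach = min(10.0, 10.0 * t / 50.0)  # deliberate 10 mm reach, then hold
    tremor = 0.5 * math.sin(2.4 * t)    # small involuntary oscillation
    tip = tele.step(reach + tremor)
print(f"commanded tip position: {tip:.2f} mm")  # ~2.0 mm, tremor largely damped
```

The filter strongly attenuates the fast tremor component while passing the slow deliberate reach, so the commanded tip motion settles near the scaled-down target.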
Compared to the ZEUS, the da Vinci robot is attached by trocars to the surgical table, and can imitate the human wrist. In 2000, the da Vinci obtained FDA approval for general laparoscopic procedures and became the first operative surgical robot in the US. Examples of the da Vinci system in use include the first robotically assisted heart bypass, performed in Germany in May 1998; the first performed in the United States, in September 1999; and the first all-robotic-assisted kidney transplant, performed in January 2009. The da Vinci Si was released in April 2009 and initially sold for $1.75 million. In 2005, a surgical technique called transoral robotic surgery (TORS) was documented in canine and cadaveric models for the da Vinci robot surgical system, as it was the only FDA-approved robot able to perform head and neck surgery. In 2006, three patients underwent resection of the tongue using this technique. The results were clearer visualization of the cranial nerves, lingual nerves, and lingual artery, and the patients recovered normal swallowing faster. In May 2006, the first unassisted robotic surgery conducted by an artificial-intelligence system was performed on a 34-year-old male to correct heart arrhythmia. The results were rated as better than those of an above-average human surgeon. The machine had a database of 10,000 similar operations, and so, in the words of its designers, was "more than qualified to operate on any patient". In August 2007, Dr. Sijo Parekattil of the Robotics Institute and Center for Urology (Winter Haven Hospital and University of Florida) performed the first robot-assisted microsurgery procedure, denervation of the spermatic cord for chronic testicular pain. In February 2008, Dr. Mohan S. Gundeti of the University of Chicago Comer Children's Hospital performed the first robotic pediatric neurogenic bladder reconstruction. On 12 May 2008, the first image-guided, MR-compatible robotic neurosurgical procedure was performed at the University of Calgary by Dr. Garnette Sutherland using the NeuroArm. In June 2008, the German Aerospace Centre (DLR) presented a robotic system for minimally invasive surgery, the MiroSurge. In September 2010, the Eindhoven University of Technology announced the development of the Sofie surgical system, the first surgical robot to employ force feedback. In September 2010, the first robotic operation on the femoral vasculature was performed at the University Medical Centre Ljubljana by a team led by Borut Geršak. In 2019, the Versius Surgical Robotic System was launched as a rival to the da Vinci surgical system; it claims to be more flexible and versatile, having independent modular arms that are "quick and easy to set up". The small-scale design means that it is suitable for virtually any operating room and can be operated from either a standing or a sitting position. Uses Ophthalmology Ophthalmology is still part of the frontier for robot-assisted surgery. However, there are a couple of robotic systems capable of successfully performing surgeries. The PRECEYES Surgical System is being used for vitreoretinal surgeries. This is a single-arm robot that is telemanipulated by a surgeon. The system attaches to the head of the operating room table and provides surgeons with increased precision with the help of an intuitive motion controller. PRECEYES is the only robotic instrument to be CE certified. Other companies working in this field include Forsight Robotics, Acusurgical (France), which raised €5.75 million, and Horizon (US).
The da Vinci Surgical System, though not specifically designed for ophthalmic procedures, uses telemanipulation to perform pterygium repairs and ex-vivo corneal surgeries. Heart Some examples of heart surgery being assisted by robotic surgery systems include: Atrial septal defect repair – the repair of a hole between the two upper chambers of the heart, Mitral valve repair – the repair of the valve that prevents blood from regurgitating back into the upper heart chambers during contractions of the heart, Coronary artery bypass – rerouting of blood supply by bypassing blocked arteries that provide blood to the heart. Thoracic Robotic surgery has become more widespread in thoracic surgery for mediastinal pathologies, pulmonary pathologies and, more recently, complex esophageal surgery. The da Vinci Xi system is used for lung and mediastinal mass resection. This minimally invasive approach serves as a comparable alternative to video-assisted thoracoscopic surgery (VATS) and standard open thoracic surgery. Although VATS is the less expensive option, the robot-assisted approach offers benefits such as 3D visualization with seven degrees of freedom and improved dexterity, while having equivalent perioperative outcomes. ENT The first successful robot-assisted cochlear implantation in a person took place in Bern, Switzerland in 2017. Surgical robots have been developed for use at various stages of cochlear implantation, including drilling through the mastoid bone, accessing the inner ear and inserting the electrode into the cochlea. Advantages of robot-assisted cochlear implantation include improved accuracy, resulting in fewer mistakes during electrode insertion and better hearing outcomes for patients. The surgeon uses image-guided surgical planning to program the robot based on the patient's individual anatomy. This helps the implant team to predict where the contacts of the electrode array will be located within the cochlea, which can assist with audio processor fitting post-surgery. The surgical robots also allow surgeons to reach the inner ear in a minimally invasive way. Challenges that still need to be addressed include safety, time, efficiency and cost. Surgical robots have also been shown to be useful for electrode insertion with pediatric patients. Gastrointestinal Multiple types of procedures have been performed with either the 'Zeus' or da Vinci robot systems, including bariatric surgery and gastrectomy for cancer. Surgeons at various universities initially published case series demonstrating different techniques and the feasibility of GI surgery using the robotic devices. Specific procedures have been more fully evaluated, specifically esophageal fundoplication for the treatment of gastroesophageal reflux and Heller myotomy for the treatment of achalasia. Robot-assisted pancreatectomies have been found to be associated with "longer operating time, lower estimated blood loss, a higher spleen-preservation rate, and shorter hospital stay[s]" than laparoscopic pancreatectomies; there was "no significant difference in transfusion, conversion to open surgery, overall complications, severe complications, pancreatic fistula, severe pancreatic fistula, ICU stay, total cost, and 30-day mortality between the two groups." Gynecology The first report of robotic surgery in gynecology was published in 1999 by the Cleveland Clinic. The adoption of robotic surgery has contributed to the increase in minimally invasive surgery for gynecologic disease.
Gynecologic procedures may take longer with robot-assisted surgery, and the rate of complications may be higher, but there are not enough high-quality studies to know at the present time. In the United States, robot-assisted hysterectomy for benign conditions was shown in 2015 to be more expensive than conventional laparoscopic hysterectomy, with no difference in overall rates of complications. This includes the use of the da Vinci surgical system in benign gynecology and gynecologic oncology. Robotic surgery can be used to treat fibroids, abnormal periods, endometriosis, ovarian tumors, uterine prolapse, and female cancers. Using the robotic system, gynecologists can perform hysterectomies, myomectomies, and lymph node biopsies. The Hominis robotic system, developed by Momentis Surgical, aims to provide a robotic platform for natural orifice transluminal endoscopic surgery (NOTES), enabling myomectomy through the vagina. A 2017 review of surgical removal of the uterus and cervix for early cervical cancer found that robotic and laparoscopic surgery resulted in similar outcomes with respect to the cancer. Bone Robots are used in orthopedic surgery. ROBODOC is the first active robotic system that performs some of the surgical actions in a total hip arthroplasty (THA). It is programmed preoperatively using data from computer tomography (CT) scans, which allows the surgeon to choose the optimal size and design for the replacement hip. Acrobot and Rio are semi-active robotic systems used in THA. Each consists of a drill bit controlled by the surgeon; however, the robotic system does not allow any movement outside predetermined boundaries. Mazor X is used in spinal surgeries to assist surgeons with placing pedicle screw instrumentation. Inaccuracy when placing a pedicle screw can result in neurovascular injury or construct failure. Mazor X works by using templated imaging to locate itself relative to the target location where the pedicle screw is needed. Spine Robotic devices began to be used in minimally invasive spine surgery in the mid-2000s. As of 2014, there were too few randomized clinical trials to judge whether robotic spine surgery is more or less safe than other approaches. As of 2019, the application of robotics in spine surgery had mainly been limited to pedicle screw insertion for spinal fixation. In addition, the majority of studies on robot-assisted spine surgery have investigated lumbar or lumbosacral vertebrae only. Studies on the use of robotics for placing screws in the cervical and thoracic vertebrae are limited. Transplant surgery The first fully robotic kidney transplantations were performed in the late 2000s. Robotic surgery may allow kidney transplantation in people who are obese and could not otherwise have the procedure. Weight loss, however, is the preferred initial effort. General surgery With regard to robotic surgery, this type of procedure is currently best suited to single-quadrant procedures, in which operations can be performed in any one of the four quadrants of the abdomen. Procedures such as cholecystectomy and fundoplication carry cost disadvantages, but are suitable opportunities for surgeons to advance their robotic surgery skills. Hernia and abdominal wall surgery Over the past several decades, there have been great advances in the field of abdominal wall and hernia surgery, especially when it comes to robot-assisted surgery.
Unlike laparoscopic surgery, the robotic platform allows for the correction of large hernia defects with specialized techniques that would traditionally only be performed via an open approach. Compared to open surgery, robotic surgery for hernia repair can reduce pain and length of hospital stay and improve outcomes. As the robotic instruments have 6 degrees of articulation, freedom of movement and ergonomics are greatly improved compared to laparoscopy. The first robotic inguinal hernia repairs were done in conjunction with prostatectomies in 2007. The first ventral hernia repairs were performed robotically in 2009. Since then the field has rapidly expanded to include most types of reconstruction, including anterior as well as posterior component separation. With newer techniques such as direct access into the abdominal wall, major reconstruction of large hernias can be done without even entering the abdominal cavity. Due to its complexity, however, major reconstruction done robotically should be undertaken at advanced hernia centers such as the Columbia Hernia Center in New York City, NY, USA. The American Hernia Society and the European Hernia Society are moving towards specialty designation for hernia centers that are credentialed for complex hernia surgery, including robotic surgery. Urology Robotic surgery in the field of urology has become common, especially in the United States. There is inconsistent evidence of benefits compared to standard surgery to justify the increased costs. Some have found tentative evidence of more complete removal of cancer and fewer side effects from surgery for prostatectomy. In 2000, the first robot-assisted laparoscopic radical prostatectomy was performed. Robotic surgery has also been utilized in radical cystectomies. A 2013 review found fewer complications and better short-term outcomes compared with the open technique. Pediatrics Pediatric procedures are also benefiting from robotic surgical systems. The smaller abdominal size of pediatric patients limits the viewing field in most urology procedures. Robotic surgical systems help surgeons overcome these limitations. Robotic technology provides assistance in performing: pyeloplasty, an alternative to the conventional open dismembered pyeloplasty (Anderson-Hynes) and the most common robot-assisted procedure in children; ureteral reimplantation, an alternative to open intravesical or extravesical surgery; ureteroureterostomy, an alternative to the transperitoneal approach; and nephrectomy and heminephrectomy, traditionally done with laparoscopy, where a robotic procedure is unlikely to offer significant advantage due to its high cost. Comparison to traditional methods Major advances aided by surgical robots have been remote surgery, minimally invasive surgery and unmanned surgery. Due to robotic use, the surgery is done with precision, miniaturization and smaller incisions, with decreased blood loss, less pain, and quicker healing time. Articulation beyond normal manipulation and three-dimensional magnification help to produce improved ergonomics. Due to these techniques, there is a reduced duration of hospital stays, blood loss, transfusions, and use of pain medication. The existing open-surgery technique has many drawbacks, such as limited access to the surgical area, long recovery time, long hours of operation, blood loss, and surgical scars and marks.
The robot's costs range from $1 million to $2.5 million for each unit, and while its disposable supply cost is normally $1,500 per procedure, the cost of the procedure is higher. Additional surgical training is needed to operate the system. Numerous feasibility studies have been done to determine whether the purchase of such systems is worthwhile. As it stands, opinions differ dramatically. Surgeons report that, although the manufacturers of such systems provide training on this new technology, the learning phase is intensive and surgeons must perform 150 to 250 procedures to become adept in their use. During the training phase, minimally invasive operations can take up to twice as long as traditional surgery, leading to operating room tie-ups and surgical staffs keeping patients under anesthesia for longer periods. Patient surveys indicate they chose the procedure based on expectations of decreased morbidity, improved outcomes, reduced blood loss and less pain. Higher expectations may explain higher rates of dissatisfaction and regret. Compared with other minimally invasive surgery approaches, robot-assisted surgery gives the surgeon better control over the surgical instruments and a better view of the surgical site. In addition, surgeons no longer have to stand throughout the surgery and do not get tired as quickly. Naturally occurring hand tremors are filtered out by the robot's computer software. Finally, the surgical robot can be used continuously by rotating surgery teams. Laparoscopic camera positioning is also significantly steadier, with fewer inadvertent movements under robotic control than under human assistance. The use of mixed reality to support robot-assisted surgery was developed at the Air Force Research Laboratory in 1992 through the creation of "virtual fixtures" that overlay virtual boundaries or guides to assist the human operator; this has become a common method for increasing safety and precision. There are some issues with current robotic surgery usage in clinical applications. There is a lack of haptics in some robotic systems currently in clinical use, which means there is no force (touch) feedback: no interaction between the instrument and the patient is felt. However, the Senhance robotic system by Asensus Surgical was recently developed with haptic feedback in order to improve the interaction between the surgeon and the tissue. The robots can also be very large and have instrumentation limitations, and there may be issues with multi-quadrant surgery, as current devices are solely used for single-quadrant applications. Critics of the system, including the American Congress of Obstetricians and Gynecologists, say there is a steep learning curve for surgeons who adopt the use of the system and that there is a lack of studies indicating that long-term results are superior to results following traditional laparoscopic surgery. Articles in the newly created Journal of Robotic Surgery tend to report on one surgeon's experience. Complications related to robotic surgeries range from conversion of the surgery to open and re-operation to permanent injury, damage to viscera and nerve damage. From 2000 to 2011, out of 75 hysterectomies done with robotic surgery, 34 had permanent injury, and 49 had damage to the viscera. Prostatectomies were more prone to permanent injury, nerve damage and visceral damage as well.
Very few surgeries in a variety of specialties actually had to be converted to open or re-operated on, but most did sustain some kind of damage or injury. For example, out of seven coronary artery bypass grafts, one patient had to undergo re-operation. It is important that complications are captured, reported and evaluated to ensure the medical community is better educated on the safety of this new technology. If something were to go wrong in a robot-assisted surgery, it is difficult to identify culpability, and the safety of the practice will influence how quickly and how widely these practices are used. One drawback of the use of robotic surgery is the risk of mechanical failure of the system and instruments. A study from July 2005 to December 2008 was conducted to analyze the mechanical failures of the da Vinci Surgical System at a single institute. During this period, a total of 1797 robotic surgeries were performed using 4 da Vinci surgical systems. There were 43 cases (2.4%) of mechanical failure, including 24 (1.3%) cases of mechanical failure or malfunction and 19 (1.1%) cases of instrument malfunction. Additionally, one open and two laparoscopic conversions (0.17%) were performed. Therefore, the chance of mechanical failure or malfunction was found to be rare, with the rate of conversion to an open or laparoscopic procedure very low. There are also methods of robotic surgery currently being marketed and advertised online. Removal of a cancerous prostate has been a popular treatment promoted through internet marketing. Internet marketing of medical devices is more loosely regulated than pharmaceutical promotion. Many sites that claim the benefits of this type of procedure have failed to mention risks and have provided unsupported evidence. There is an issue with getting government and medical societies to promote the production of balanced educational material. In the US alone, many websites promoting robotic surgery fail to mention any risks associated with these types of procedures, and hospitals providing materials largely ignore risks, overestimate benefits and are strongly influenced by the manufacturer. Use in popular media Since April 2018, medical insurance coverage has been expanding in Japan, so doctors have been considering promoting the procedure for cardiac surgery, as it has the advantage of reducing the burden on the patient. The Japanese drama Black Pean takes on this challenge, showing both sides' points of view. Two university hospitals are competing to be the best in the cardiac surgery department. One, Tojo, has the best traditional surgeons, while the other, Teika, is all about researching and implementing the most recent technology. With this, Teika sends its technical specialist to Tojo to try to convince them to update their techniques, including the use of the da Vinci robot (named in the drama as Darwin). Newhart Watanabe International Hospital, a pioneer in da Vinci surgery for the heart in Japan, was used as background for the drama, with Dr. Gou Watanabe providing technical support. See also References External links Computer-assisted surgery Telemedicine Health informatics
Robot-assisted surgery
Biology
5,495
1,188,176
https://en.wikipedia.org/wiki/Rat%20Park
Rat Park was a series of studies into drug addiction conducted in the late 1970s and published between 1978 and 1981 by Canadian psychologist Bruce K. Alexander and his colleagues at Simon Fraser University in British Columbia, Canada. At the time of the studies, research exploring the self-administration of morphine in animals often used small, solitary metal cages. Alexander hypothesized that these conditions might be responsible for exacerbating self-administration. To test this hypothesis, Alexander and his colleagues built Rat Park, a large housing colony with 200 times the floor area of a standard laboratory cage. There were 16–20 rats of both sexes in residence, food, balls and wheels for play, and enough space for mating. The results of the experiment appeared to support his hypothesis that improved housing conditions reduce the consumption of morphine water. This research highlighted an important issue in the design of morphine self-administration studies of the time, namely the use of austere housing conditions, which could confound the results. Rat Park experiments In Rat Park, the rats could drink a fluid from one of two drop dispensers, which automatically recorded how much each rat drank. One dispenser contained a sweetened morphine solution and the other plain tap water. The morphine solution was sweetened to reduce the rats' aversion to the taste of morphine; as a control, prior to the introduction of morphine, rats were offered a sweetened quinine solution instead. Alexander designed a number of experiments to test the rats' willingness to consume the morphine. The Seduction Experiment involved four groups of 8 rats. Group CC was isolated in laboratory cages when they were weaned at 22 days of age, and lived there until the experiment ended at 80 days of age; Group PP was housed in Rat Park for the same period; Group CP was moved from laboratory cages to Rat Park at 65 days of age; and Group PC was moved out of Rat Park and into cages at 65 days of age. The caged rats (Groups CC and PC) took to the morphine instantly, even with relatively little sweetener, with the caged males drinking 19 times more morphine than the Rat Park males in one of the experimental conditions. The rats in Rat Park resisted the morphine water. They would try it occasionally—with the females trying it more often than the males—but they showed a statistically significant preference for the plain water. He writes that the most interesting group was Group CP, the rats that were brought up in cages but moved to Rat Park before the experiment began. These animals rejected the morphine solution when it was stronger, but as it became sweeter and more dilute, they began to drink almost as much as the rats that had lived in cages throughout the experiment. They wanted the sweet water, he concluded, so long as it did not disrupt their normal social behavior. Even more significant, he writes, was that when he added naloxone, a drug which negates the effects of opioids, to the morphine-laced water, the Rat Park rats began to drink it. In another experiment, he forced rats in ordinary lab cages to consume the morphine-laced solution for 57 days, with no other liquid available to drink. When they were moved into Rat Park, they were allowed to choose between the morphine solution and plain water. They drank the plain water. He writes that they did show some signs of dependence. There were "some minor withdrawal signs, twitching, what have you, but there were none of the mythic seizures and sweats you so often hear about ..."
The authors concluded that isolated caging, as well as female sex, caused an increased consumption of morphine. The authors advised that it is important to consider the conditions of testing, as well as the sex of the animals, when exploring self-administration of morphine. Further experiments Studies that followed up on the contribution of environmental enrichment to addiction produced mixed results. A replication study found that both caged and "park" rats showed a decreased preference for morphine compared to Alexander's original study; the author suggested a genetic reason for the difference Alexander initially observed. Another study found that while social isolation can influence levels of heroin self-administration, isolation is not a necessary condition for heroin or cocaine injections to be reinforcing. Other studies have reinforced the effect of environmental enrichment on self-administration, such as one that showed it reduced cue-induced reinstatement of cocaine-seeking behavior in mice (though not reinstatement induced by cocaine itself) and another that showed it can eliminate previously established addiction-related behaviors. Furthermore, removing mice from enriched environments has been shown to increase vulnerability to cocaine addiction, and exposure to complex environments during early stages of life produced dramatic changes in the reward system of the brain that resulted in reduced effects of cocaine. Broadly speaking, there is mounting evidence that the impoverished small-cage environments that are standard for the housing of laboratory animals have undue influence on lab animal behavior and biology. These conditions can jeopardize both a basic premise of biomedical research—that healthy control animals are healthy—and the relevance of these kinds of animal studies to human conditions. Criticisms Replication Bruce Petrie (1996), a graduate student of Alexander's, attempted to replicate and correct the original studies, using 20 rats and two different methods for measuring morphine consumption between conditions (which introduced a potential confound). The study was not able to replicate the results, and the author suggested that strain differences between the rats Alexander's research group used could be the reason for this. There has been little subsequent interest in replicating the studies due to several methodological issues present in the originals. Issues included the small number of subjects used, the use of oral morphine, which does not mimic actual conditions of use (and introduces a confound because of the bitterness of morphine), and the measurement of morphine consumption, which differed between conditions. Other problems included equipment failures, lost data and rat deaths. However, some researchers have shown an interest in "conceptual" replication to continue exploring the contribution of environmental and social enrichment to addiction. Media interpretation Journalist Johann Hari gave a popular TED Talk about the results of the study in 2015. In it, he interpreted the studies to suggest that biological underpinnings are not the cause of addiction, instead shifting the etiology to a need for healthy relationships. The YouTube channel Kurzgesagt created and published a video based on Hari's book, which garnered over 19 million views. The channel later took down the video, stating that it had improperly represented the evidence.
Researchers have reiterated that the results of Alexander's studies highlight issues with rat models kept in bare-bones lab environments, and help implicate the environment as a contributing factor in addiction. However, the media has overstated the studies' importance by suggesting that they represent a paradigm shift in research, and that the environment is the only—or the key—factor in addiction. See also Behavioral sink Effect of psychoactive drugs on animals Konrad Lorenz References Further reading Alexander, B.K. (1990) Peaceful Measures: Canada's Way Out of the War on Drugs, University of Toronto Press. Alexander, B.K. (2000) "The globalization of addiction", Addiction Research. Drucker, E. (1998) "Drug Prohibition and Public Health", U.S. Public Health Service, Vol. 114. Goldstein, A. (1990) Molecular and Cellular Aspects of the Drug Addictions, Springer-Verlag. Goldstein, A. (2001) From Biology to Drug Policy, Oxford University Press. Website of the U.S. Office of National Drug Control Policy. Peele, Stanton, A discussion about addiction, archived link from July 7, 2004. External links Much More Than A Drug Problem Bruce Alexander's lecture in Vancouver Institute 5.2.2011. Rat Park drug experiment comic – Stuart McMillen comics Simon Fraser University Substance dependence Drugs in Canada Laboratory rats Psychology experiments Morphine Animal testing Effects of psychoactive drugs
Rat Park
Chemistry
1,636
38,273,940
https://en.wikipedia.org/wiki/Crystal%20LED
Crystal LED (CLED) refers to a screen manufacturing technique. It was invented by Sony and revealed at CES 2012. Overview This technology makes use of light-emitting diodes mounted for each RGB segment of the display, such that each pixel is illuminated independently. This makes it the first "true" LED television. History 2012 Sony unveiled Crystal LED display technology at CES 2012. The following year, the company was deciding between CLED and OLED; it did not display CLED at the 2013 CES, but showed an OLED instead. 2017 At CES 2017 Sony showcased CLEDIS™ (Crystal LED Integrated Structure), a Crystal LED video wall approximately 32 by 9 feet (9.7 × 2.7 m) with a resolution of 8000 × 2000 pixels. According to Sony, it is composed of individual display modules measuring 17 7/8" (463.6 mm) by 15 7/8" (403.2 mm) each. The seams between individual modules were not visible to the human eye. Due to the modular structure of the system, theoretically any resolution and size would be possible, according to a Sony representative. See also LED MicroLED OLED References Display technology Optical diodes Light-emitting diodes Energy-saving lighting Japanese inventions
Crystal LED
Engineering
261
30,079,285
https://en.wikipedia.org/wiki/San%20Buenaventura%20Conservancy
The San Buenaventura Conservancy for Preservation is a historic preservation organization in Ventura, California, a city also known by its early name of San Buenaventura. It works to recognize and revitalize historic, archeological and cultural resources in the region. The Conservancy is a non-profit 501(c)(3) organization. The group was formed in 2004 after the demolition of the Mayfair Theater, an S. Charles Lee–designed Streamline Moderne movie theater in downtown Ventura, California, that was razed and replaced with a condominium project. Mission The San Buenaventura Conservancy's mission statement reads: "To work through advocacy and outreach to recognize, preserve and revitalize the irreplaceable historic, architectural and cultural resources of San Buenaventura and surrounding areas. To seek to increase public awareness of and participation in local preservation issues, and disseminate information useful in the preservation of structures and neighborhoods of San Buenaventura." (San Buenaventura Conservancy website) Programs & Projects The organization produces annual historic architecture tours in the historic neighborhoods and districts in midtown, downtown and the west side of Ventura, California. The Conservancy is an all-volunteer organization with a ten-member board. Some of the Conservancy's most successful projects, outside of the Ventura architectural weekend tours and trade shows, have involved the board working closely with the City of Ventura, California, and developers to find preservation solutions for historic buildings. At times the Conservancy advocates for specific historic buildings, like Willett Ranch and the Top Hat Burger Palace in Ventura, the Frank Petit House in South Oxnard, California, the Charles McCoy house in Port Hueneme, California, the Bracero Farm Worker Camp in Piru, California, and the Wagon Wheel Motel on the 101 Freeway in Oxnard, California. Additionally, the Conservancy works to strengthen preservation policies of local municipalities. It has achieved success at integrating appropriate preservation actions and policies into Ventura's General Plan, Downtown Specific Plan, and Westside Community Plan. On March 2, 2009, the San Buenaventura Conservancy – with attorney Susan Brandt-Hawley – filed suit in Ventura County Superior Court against the City of Oxnard, California, claiming that the City's approval of the Oxnard Village Specific Plan project violated the California Environmental Quality Act (CEQA). The project, as approved, requires the demolition of the Wagon Wheel Motel, restaurant, El Ranchito and bowling alley, along with everything built on the site. The Conservancy case argues that the project can be feasibly accomplished without demolition of the Wagon Wheel, and that CEQA therefore does not allow the Class 1 impact. The lawsuit requests issuance of a peremptory writ ordering the City to set aside its approval of the project pending compliance with CEQA. The original Ventura County Superior Court case was presented July 10, 2009. The judge sided with the city of Oxnard. The San Buenaventura Conservancy appealed the ruling and received a stay of demolition until the outcome of the appeal case: San Buenaventura Conservancy v. City of Oxnard et al. (CEQA) (Case: B220512, 2nd District, Division 6.)
On Wednesday, December 15, 2010, a three-judge panel at the California 2nd District Court of Appeal in Ventura, California, heard arguments from the attorneys in the case: Susan Brandt-Hawley for the San Buenaventura Conservancy, and Rachel Cook representing the developer (Oxnard Village Investments, LLC) and the city of Oxnard, California. The Court of Appeal sided with the City of Oxnard on March 17, 2011, agreeing with the Superior Court that the CEQA review was sufficient. The Wagon Wheel was demolished a week later. References Shepherd, Dirk, Save the Wagon Wheel, VC Reporter Newspaper article, Jan 11, 2007. Levin, Charles, Ventura County Star Newspaper, Old motel might be declared landmark, January 23, 2007. Singer, Matthew, Looking for a landmark, VC Reporter Newspaper article, January 25, 2007. Clerici, Kevin, Group sues Ventura to halt razing of ranch, Ventura County Star Newspaper article, June 28, 2007. Lascher, Bill, VC Reporter Newspaper article, Hotel could occupy Chumash Village site, February 7, 2008. Clerici, Kevin, Historical Willett buildings to remain on site, Ventura County Star Newspaper article, February 27, 2008. Lascher, Bill, Endangered Heritage and San Buenaventura Conservancy's 11 most endangered list, VC Reporter Newspaper article, June 12, 2008. Cason, Coleen, Ventura County Star Newspaper article, Changing days, landmarks in photo calendar, December 28, 2008. VC Reporter Newspaper article, The San Buenaventura Conservancy hosts an architectural, archaeological tour, January 15, 2009. Hadley, Scott, Ventura County Star Newspaper, Oxnard Wagon Wheel Development to be taken up by council, January 27, 2009. Hadley, Scott, Ventura County Star Newspaper, Wagon Wheel redevelopment approved, January 29, 2009. Sullivan, Michael, VC Reporter Newspaper article, Historical homes in Oxnard meet a fiery grave this week, February 26, 2009. Foster, Margaret, Lawsuit Stalls Loss of 1947 Motel, Preservation Magazine (Online), March 26, 2009. Foster, Margaret, Calif.
City Burns Down 1883 Farmhouses, Preservation Magazine (Online), March 31, 2009. Ventura County Star Newspaper, Memorial Boulder Keeps on Rollin', March 29, 2009. Chawkins, Steve, Trying to keep Oxnard's Wagon Wheel in place, Los Angeles Times, April 10, 2009. Hadley, Scott, Ventura County Star Newspaper, Judge Blocks Demolition of Wagon Wheel Buildings, October 31, 2009. Sisolak, Paul, Wagon Wheel Headed Back To Court, VC Reporter Newspaper article, November 12, 2009. Hadley, Scott, Ventura County Star Newspaper, Judge Clears Way for Wagon Wheel Demolition, November 17, 2009. Sisolak, Paul, Court Appeal Possible in Wagon Wheel Preservation Case, VC Reporter Newspaper article, November 19, 2009. McKinnon, Lisa, Ventura County Star Newspaper, Ventura's Top Hat Burger Palace given 30 days to vacate site, January 8, 2010. Cohn, Shane, VC Reporter Newspaper article, Up for debate: The future of Ventura's Westside may rest in Rancho Cañada Larga, December 9, 2010. Hadley, Scott, Ventura County Star Newspaper, Final arguments presented in Wagon Wheel case, December 15, 2010. External links National Trust for Historic Preservation Wagon Wheel Photos on Flickr San Buenaventura Conservancy website Historic preservation organizations in the United States Heritage organizations Architectural history Urban planning in California
San Buenaventura Conservancy
Engineering
1,426
3,261,205
https://en.wikipedia.org/wiki/Landspout
Landspout is a term created by atmospheric scientist Howard B. Bluestein in 1985 for a tornado not associated with a mesocyclone. The Glossary of Meteorology defines a landspout as a "colloquial expression describing tornadoes occurring with a parent cloud in its growth stage and with its vorticity originating in the boundary layer", where the parent cloud does not contain a preexisting mid-level mesocyclone. The landspout was so named because it looks like "a weak Florida Keys waterspout over land." Landspouts are typically weaker than mesocyclone-associated tornadoes spawned within supercell thunderstorms, in which the strongest tornadoes form. Characteristics Landspouts are a type of tornado that forms during the growth stage of a cumulus congestus or occasionally a cumulonimbus cloud, when an updraft stretches boundary-layer vorticity upward into a vertical axis and tightens it into a strong vortex. Landspouts can also occur due to interactions with outflow boundaries, as these can occasionally cause enhanced convergence and vorticity at the surface. They are generally smaller and weaker than supercell tornadoes and do not form from a mesocyclone or pre-existing rotation in the cloud. Because of their shallower depth, smaller size, and weaker intensity, landspouts are rarely detected by Doppler weather radar (NWS). Landspouts share a strong resemblance and development process with waterspouts, usually taking the form of a translucent and highly laminar helical tube. "They are typically narrow, rope-like condensation funnels that form while the thunderstorm cloud is still growing and there is no rotating updraft", according to the National Weather Service. Landspouts are considered tornadoes since a rapidly rotating column of air is in contact with both the surface and a cumuliform cloud. Not all landspouts are visible, and many are first sighted as debris swirling at the surface before eventually filling in with condensation and dust. Orography can influence landspout (and even mesocyclone tornado) formation. A notable example is the propensity for landspout occurrence in the Denver Convergence Vorticity Zone (DCVZ). Life cycle Forming under growing updrafts rather than from mesocyclones, a landspout generally lasts for less than 15 minutes; however, they can persist substantially longer and produce significant damage. Landspouts tend to progress through recognizable stages of formation, maturation, and dissipation, and usually decay when a downdraft or significant precipitation (outflow) occurs nearby. They may form in lines or groups of multiple landspouts. Damage Landspouts are usually rated at the EF0 level, with relatively weak winds. However, winds inside a landspout can reach 100 miles per hour (mph). See also Dust devil Fire whirl Funnel cloud Tornadogenesis Vortex engine Waterspout Whirlwind References External links Advanced Spotters' Field Guide Online Tornado FAQ Severe weather and convection Tornado Vortices
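The stretching mechanism described above can be stated compactly (a standard result from inviscid vorticity dynamics, included here as an illustrative sketch rather than a formula from this article). For vertical vorticity \(\zeta\) beneath an updraft with vertical velocity \(w\), the dominant amplification term is

\[
\frac{D\zeta}{Dt} \approx \zeta \, \frac{\partial w}{\partial z},
\]

so pre-existing boundary-layer rotation grows roughly exponentially for as long as it remains beneath a strengthening updraft, with no mesocyclone required.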
Landspout
Chemistry,Mathematics
659
7,274,018
https://en.wikipedia.org/wiki/Debye%E2%80%93Falkenhagen%20effect
The increase in the conductivity of an electrolyte solution when the applied voltage has a very high frequency is known as the Debye–Falkenhagen effect. Impedance measurements on the water–p-dioxane and methanol–toluene systems have confirmed Falkenhagen's predictions made in 1929. See also Peter Debye Debye length Hans Falkenhagen Wien effect References Electrochemical concepts Peter Debye
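As a rough guide (an order-of-magnitude sketch based on the standard Debye–Hückel picture of the ionic atmosphere; the symbols below are not taken from this article), the conductivity enhancement appears when the applied field oscillates faster than the ionic atmosphere can relax:

\[
\omega \tau \gtrsim 1, \qquad \tau \sim \frac{1}{\kappa^{2} D},
\]

where \(\omega\) is the angular frequency of the applied voltage, \(\kappa^{-1}\) is the Debye screening length, and \(D\) is a typical ionic diffusion coefficient. For dilute aqueous electrolytes this places the onset of the effect roughly in the megahertz-to-gigahertz range.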
Debye–Falkenhagen effect
Chemistry
98
5,145,183
https://en.wikipedia.org/wiki/List%20of%20art%20media
Media, or mediums, are the core types of material (or related tools) used by an artist, composer, designer, etc. to create a work of art. For example, a visual artist may broadly use the media of painting or sculpting, which themselves have more specific media within them, such as watercolor paints or marble. The following is a list of artistic categories and the media used within each category: Architecture Cement, concrete, mortar Cob Glass Metal Stone, brick Wood Carpentry Adhesives Wood (timber) Ceramics Bone china Clay Glaze Porcelain Pottery Terracotta Tile Drawing Common drawing materials Acrylic paint Chalk Charcoal Colored pencil Conté Crayon Encaustic Fresco Gouache Graphite Ink Intaglio Oil paint Glass paint Pastel Pixel Printmaking Sketch Tempera Watercolor Glitter Common supports (surfaces) for drawing Canvas Card stock Concrete Fabric Glass Human body Metal Paper Papyrus Parchment Plaster Scratchboard Stone Vellum Wood Common drawing tools and methods Brush Finger Pen Ballpoint pen Eraser Erasing shield Fountain pen Gel pen Kneaded eraser Technical pen Marker Pencil Mechanical pencil (clutch, screw, and ratchet) Colored pencil Stylus Charcoal Electronic Graphic art software and 3D computer graphics Word processors and desktop publishing software Digital photography and digital cinematography Specialized input devices (e.g. variable pressure sensing tablets and touchscreens) Digital printing Programming languages Film Animation Cel animation Computer animation Cutout animation Drawn-on-film animation Stop motion Live action Puppet film Video art Single-channel video Video installation Film, as a form of mass communication, is itself also considered a medium in the sense used by fields such as sociology and communication theory (see also mass media). These two definitions of medium, while they often overlap, are different from one another: television, for example, utilizes the same types of artistic media as film, but may be considered a different medium from film within communication theory. Food A chef's tools and equipment, including ovens, stoves, grills, and griddles. Specialty equipment may be used, including salamanders, French tops, woks, tandoors, and induction burners. Glass Glassblowing, glass fusing, colouring and marking methods. Installation Installation art is a site-specific form of sculpture that can be created with any material. An installation can occupy a large amount of space, create an ambience, transform/disrupt the space, exist in the space. One way to distinguish an installation from a sculpture (this may not apply to every installation) is to try to imagine it in a different space. If the objects present difficulties in a different space than the original, it is probably an installation.
Literature Traditional writing media Digital word processor Internet websites Letterpress printing Computer printers Marker Pen and ink or quill Pencil Common bases for writing Card stock Paper, perhaps ruled Vellum Natural world Floral design Rock Soil Vegetation Water Painting Common paint media Acrylic paint Blacklight paint Encaustic paint Fresco Gesso Glaze Gouache Ink Latex paint Oil paint Primer Ink wash (sumi-e) Tempera or poster paint Vinyl paint (toxic/poisonous) Vitreous enamel Watercolor Uncommon paint media Various bodily fluids and excrement including elephant dung Solar energy Garlic Rust Coffee Onion Coconut juice Mud Black palm Tomato Soy sauce Staple wire Ochre (yellow, red, white or charcoal) Supports for painting Architectural structures Canvas Ceramics Cloth Glass Human body (typically for tattoos) Metal Paper Paperboard Vellum Wall Wood Common tools and methods Action painting Aerosol paint Airbrush Batik Brush Cloth Paint roller or paint pad Palette knife Sponge Pencil Finger Mural techniques Muralists use many of the same media as panel painters, but due to the scale of their works, use different techniques. Some such techniques include: Aerosol paint Digital painting Fresco Image projector Marouflage Mosaic Pouncing Graphic narrative media Comics creators use many of the same media as traditional painters. Performing arts The performing arts is a form of entertainment that is created by the artist's own body, face and presence as a medium. There are many skills and genres of performance; dance, theatre and re-enactment being examples. Performance art is a performance that may not present a conventional formal linear narrative. Photography In photography, a photosensitive surface is used to capture an optical still image, usually utilizing a lens to focus light. Some media include: Digital image sensor Photographic film Potassium dichromate Potassium ferricyanide and ferric ammonium citrate Silver nitrate Printmaking In the art of printmaking, "media" tends to refer to the technique used to create a print. Common media include: Aquatint Collotype Computer printing Dye-sublimation printer Inkjet printer (sometimes called giclée printing) Laser printer Solid ink printer Thermal printer Embossing Engraving Etching Intaglio (printmaking) Letterpress (literature) Linocut Lithography Mezzotint Moku hanga Monotype Offset printing Photographic printing Planographic printing Printing press Relief printing Linocut Metalcut Relief etching Wood engraving Woodcut Screen-printing Woodblock printing Sculpture In sculpting, a solid structure and textured surface is shaped or combined using substances and components, to form a three-dimensional object. A sculpted work can be built very large, to the point of being considered architecture, though more commonly it is a statue or bust; it can also be crafted very small and intricate, as jewellery, ornaments and decorative reliefs.
Materials Carving media Bone carving Bronze Gemstones Glass Granite Ice Ivory Marble Plaster Stone Wax Wood Casting media Cement Ceramics Metal Plaster Plastic Synthetic resin Wax Modeling media Clay Papier-mâché Plaster Polystyrene Sand Styrofoam Assembled media Beads Corrugated fiberboard (cardboard) Edible material Foil Found objects Glue and other adhesives Paperboard Textile Wire Wood Finishing materials Acids to create a patina (corrosive) Glaze Polychrome Wax Tools Bristle brush Chisel and hammer (modern pneumatic) Clamp or vise Hammer or mallet (modern pneumatic) Kiln for heating ceramics and metals Knife Pliers Potter's wheel Power tools Sandpaper Saw Scraper Snips Welding and cutting torch Wirecutter Sound The art of sound can be singular or a combination of speech or objects and crafted instruments, to create sounds, rhythms and music for a range of sonic hearing purposes. See also music and sound art. Technical products The use of technical products as an art medium is a merging of applied art and science that may involve aesthetics, efficiency and ergonomics using various materials. Textiles In the art of textiles, a soft and flexible material of fibers or yarn is formed by spinning wool, flax, cotton, or other material on a spinning wheel and crocheting, knitting, macramé (knotting), weaving, or pressing fibres together (felt) to create a work. See also Collage Conceptual art Decorative arts Design tool Fashion design Fine art Fire performance Fresco Graffiti Graphic arts Liberal arts List of pen types, brands and companies Medium specificity Mixed media Multimedia New materials in 20th-century art Plastic arts Publishing Pyrotechnics Recording medium Stationery Video game art References External links Media (artists' materials) — definition from the Getty Art & Architecture Thesaurus. Artistic Medium, Internet Encyclopedia of Philosophy Media Ceramic materials Painting materials Sculpture materials
List of art media
Engineering
1,504
5,656,649
https://en.wikipedia.org/wiki/Spillage
In industrial production, spillage is the loss of production output due to production of a series of defective or unacceptable products which must be rejected. Spillage is an often costly event in manufacturing: a process degradation or failure goes undetected and uncorrected, and defective or reject product therefore continues to be produced for some extended period of time. Spillage results in costs due to lost production volume, excessive scrap, delayed delivery of product, and wastage of human and capital equipment resources. Minimizing the occurrence and duration of manufacturing spillage requires that closed-loop control and associated process monitoring and metrology functions be integrated into critical steps of the overall manufacturing process. The more complete the process control, and the more comprehensive and higher-resolution the metrology, the more effectively spillages are prevented. Waste
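As a toy illustration of the closed-loop idea (a hypothetical sketch: the rule, the specification limits, and the data are illustrative assumptions, not drawn from this article), a simple in-line check can halt production once measurements drift out of specification, bounding the window during which reject product is produced:

    # Halt the line after `run_length` consecutive out-of-spec samples,
    # limiting how long defective product can be produced undetected.
    def should_halt(measurements, lower, upper, run_length=3):
        consecutive = 0
        for x in measurements:
            if x < lower or x > upper:
                consecutive += 1
                if consecutive >= run_length:
                    return True
            else:
                consecutive = 0
        return False

    # A drifting process: the check trips on the third sample above spec.
    samples = [10.1, 9.9, 10.0, 10.6, 10.7, 10.8, 10.9]
    print(should_halt(samples, lower=9.5, upper=10.5))  # -> True

The design choice here mirrors the text: detection latency (here, three samples) directly determines how much spillage accumulates before the process is corrected.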
Spillage
Physics
172
18,470,082
https://en.wikipedia.org/wiki/Essence%20%28Electronic%20Surveillance%20System%20for%20the%20Early%20Notification%20of%20Community-based%20Epidemics%29
Essence is an abbreviation/acronym for the United States Department of Defense's Electronic Surveillance System for the Early Notification of Community-based Epidemics. Essence's goal is to monitor health data as it becomes available and discover epidemics and similar health concerns before they get out of control. The program was created and developed in 1999 by Michael Lewis, when he was a resident in the Preventive Medicine residency training program at the Walter Reed Army Institute of Research in Silver Spring, Maryland. Though the program was originally intended for early detection of bioterrorism attacks in the Washington, D.C., area in the wake of the September 11 attacks, the U.S. Army Surgeon General, James Peake, ordered Jay Mansfield, the information technology specialist responsible for the IT development of ESSENCE, to expand ESSENCE, as designed, to look globally at the entire DoD Military Healthcare System. Subsequently, ESSENCE has been adopted and adapted by the Centers for Disease Control and Prevention, Johns Hopkins University, and numerous health departments around the United States and other countries. References Epidemiology
Essence (Electronic Surveillance System for the Early Notification of Community-based Epidemics)
Environmental_science
215
1,364,686
https://en.wikipedia.org/wiki/Cognitive%20liberty
Cognitive liberty, or the "right to mental self-determination", is the freedom of an individual to control their own mental processes, cognition, and consciousness. It has been argued to be both an extension of, and the principle underlying, the right to freedom of thought. Though the concept has been defined only relatively recently, many theorists see cognitive liberty as being of increasing importance as technological advances in neuroscience allow for an ever-expanding ability to directly influence consciousness. Cognitive liberty is not a recognized right in any international human rights treaties, but has gained a limited level of recognition in the United States, and is argued to be the principle underlying a number of recognized rights. Overview The term "cognitive liberty" was coined by neuroethicist Wrye Sententia and legal theorist and lawyer Richard Glen Boire, the founders and directors of the non-profit Center for Cognitive Liberty and Ethics (CCLE). Sententia and Boire define cognitive liberty as "the right of each individual to think independently and autonomously, to use the full power of his or her mind, and to engage in multiple modes of thought." The CCLE is a network of scholars dedicated to protecting freedom of thought in the modern world of accelerating neurotechnologies. They seek to develop public policies that will preserve and enhance freedom of thought, and offer guidance with regard to relevant developments in neurotechnology, psychopharmacology, cognitive sciences and law. Sententia and Boire conceived of the concept of cognitive liberty as a response to the increasing ability of technology to monitor and manipulate cognitive function, and the corresponding increase in the need to ensure individual cognitive autonomy and privacy. Sententia divides the practical application of cognitive liberty into two principles: As long as their behavior does not endanger others, individuals should not be compelled against their will to use technologies that directly interact with the brain or be forced to take certain psychoactive drugs. As long as they do not subsequently engage in behavior that harms others, individuals should not be prohibited from, or criminalized for, using new mind-enhancing drugs and technologies. These two facets of cognitive liberty are reminiscent of Timothy Leary's "Two Commandments for the Molecular Age" from his 1968 book The Politics of Ecstasy. Supporters of cognitive liberty therefore seek to impose both a negative and a positive obligation on states: to refrain from non-consensually interfering with an individual's cognitive processes, and to allow individuals to self-determine their own "inner realm" and control their own mental functions. Freedom from interference This first obligation, to refrain from non-consensually interfering with an individual's cognitive processes, seeks to protect individuals from having their mental processes altered or monitored without their consent or knowledge, "setting up a defensive wall against unwanted intrusions". Ongoing improvements to neurotechnologies, such as transcranial magnetic stimulation and electroencephalography (or "brain fingerprinting"), and to pharmacology, in the form of selective serotonin reuptake inhibitors (SSRIs), nootropics, modafinil and other psychoactive drugs, are continuing to increase the ability to both monitor and directly influence human cognition.
As a result, many theorists have emphasized the importance of recognizing cognitive liberty in order to protect individuals from the state using such technologies to alter those individuals' mental processes: "states must be barred from invading the inner sphere of persons, from accessing their thoughts, modulating their emotions or manipulating their personal preferences." These specific ethical concerns regarding the use of neuroscience technologies to interfere or invade the brain form the fields of neuroethics and neuroprivacy. This element of cognitive liberty has been raised in relation to a number of state-sanctioned interventions in individual cognition, from the mandatory psychiatric 'treatment' of homosexuals in the US before the 1970s, to the non-consensual administration of psychoactive drugs to unwitting US citizens during CIA Project MKUltra, to the forcible administration of mind-altering drugs on individuals to make them competent to stand trial. Futurist and bioethicist George Dvorsky, chair of the Board of the Institute for Ethics and Emerging Technologies has identified this element of cognitive liberty as being of relevance to the debate around the curing of autism spectrum conditions. Duke University School of Law Professor Nita A. Farahany has also proposed legislative protection of cognitive liberty as a way of safeguarding the protection from self-incrimination found in the Fifth Amendment to the US Constitution, in the light of the increasing ability to access human memory. Her book 'The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology' discusses the matter in great detail. Though this element of cognitive liberty is often defined as an individual's freedom from state interference with human cognition, Jan Christoph Bublitz and Reinhard Merkel among others suggest that cognitive liberty should also prevent other, non-state entities from interfering with an individual's mental "inner realm". Bublitz and Merkel propose the introduction of a new criminal offense punishing "interventions severely interfering with another's mental integrity by undermining mental control or exploiting pre-existing mental weakness." Direct interventions that reduce or impair cognitive capacities such as memory, concentration, and willpower; alter preferences, beliefs, or behavioral dispositions; elicit inappropriate emotions; or inflict clinically identifiable mental injuries would all be prima facie impermissible and subject to criminal prosecution. Sententia and Boire have also expressed concern that corporations and other non-state entities might utilize emerging neurotechnologies to alter individuals' mental processes without their consent. Freedom to self-determine Where the first obligation seeks to protect individuals from interference with cognitive processes by the state, corporations or other individuals, this second obligation seeks to ensure that individuals have the freedom to alter or enhance their own consciousness. An individual who enjoys this aspect of cognitive liberty has the freedom to alter their mental processes in any way they wish to, whether through indirect methods such as meditation, yoga or prayer, or through direct cognitive intervention through psychoactive drugs or neurotechnology. As psychotropic drugs are a powerful method of altering cognitive function, many advocates of cognitive liberty are also advocates of drug law reform, claiming that the "war on drugs" is in fact a "war on mental states". 
The CCLE, as well as other cognitive liberty advocacy groups such as Cognitive Liberty UK, have lobbied for the re-examination and reform of prohibited drug law; one of the CCLE's key guiding principles is that "governments should not criminally prohibit cognitive enhancement or the experience of any mental state". Calls for reform of restrictions on the use of prescription cognitive-enhancement drugs (also called smart drugs or nootropics) such as Prozac, Ritalin and Adderall have also been made on the grounds of cognitive liberty. This element of cognitive liberty is also of great importance to proponents of the transhumanist movement, a key tenet of which is the enhancement of human mental function. Wrye Sententia has emphasized the importance of cognitive liberty in ensuring the freedom to pursue human mental enhancement, as well as the freedom to choose against enhancement. Sententia argues that the recognition of a "right to (and not to) direct, modify, or enhance one's thought processes" is vital to the free application of emerging neurotechnology to enhance human cognition and that something beyond the current conception of freedom of thought is needed. Sententia claims that "cognitive liberty's strength is that it protects those who do want to alter their brains, but also those who do not". Relationship with recognized human rights Cognitive liberty is not currently recognized as a human right by any international human rights treaty. While freedom of thought is recognized by Article 18 of the Universal Declaration of Human Rights (UDHR), freedom of thought can be distinguished from cognitive liberty in that the former is concerned with protecting an individual's freedom to think whatever they want, whereas cognitive liberty is concerned with protecting an individual's freedom to think however they want. Cognitive liberty seeks to protect an individual's right to determine their own state of mind and be free from external control over their state of mind, rather than just protecting the content of an individual's thoughts. It has been suggested that the lack of protection of cognitive liberty in previous human rights instruments was due to the relative lack of technology capable of directly interfering with mental autonomy at the time the core human rights treaties were created. As the human mind was considered invulnerable to direct manipulation, control or alteration, it was deemed unnecessary to expressly protect individuals from unwanted mental interference. With modern advances in neuroscience and in anticipation of its future development however, it is argued that such express protection is becoming increasingly necessary. Cognitive liberty then can be seen as an extension of or an "update" to the right to freedom of thought as it has been traditionally understood. Freedom of thought should now be understood to include the right to determine one's own mental state as well as the content of one's thoughts. However, some have instead argued that cognitive liberty is already an inherent part of the international human rights framework as the principle underlying the rights to freedom of thought, expression and religion. The freedom to think in whatever manner one chooses is a "necessary precondition to those guaranteed freedoms." 
Daniel Waterman and Casey William Hardison have argued that cognitive liberty is fundamental to Freedom of Thought because it encompasses the ability to have certain types of experiences, including the right to experience altered or non-ordinary states of consciousness. It has also been suggested that cognitive liberty can be seen to be a part of the inherent dignity of human beings as recognized by Article 1 of the UDHR. Most proponents of cognitive liberty agree, however, that cognitive liberty should be expressly recognized as a human right in order to properly provide protection for individual cognitive autonomy. At least one scholar and proponent of cognitive liberty, Christoph Bublitz, has used the term 'freedom of mind' to describe cognitive liberty: "mind altering interventions primary affect another sense of freedom, freedom of mind, a concept that has not received much attention although it should rank among the most important legal and political freedoms…This freedom is not often regarded in its own right but should be recognized and more fully developed in face of emerging mind-altering technologies…Freedom of mind is the freedom of a person to use her mental capacities as she pleases, free from external interferences and internal impediments". Legal recognition In the United States Richard Glen Boire of the Center for Cognitive Liberty and Ethics filed an amicus brief with the US Supreme Court in the case of Sell v. United States, in which the Supreme Court examined whether the court had the power to make an order to forcibly administer antipsychotic medication to an individual who had refused such treatment, for the sole purpose of making them competent to stand trial. In the United Kingdom In the case of R v Hardison, the defendant, charged with eight counts under the Misuse of Drugs Act 1971 (MDA), including the production of DMT and LSD, claimed that cognitive liberty was safeguarded by Article 9 of the European Convention on Human Rights. Hardison argued that "individual sovereignty over one's interior environment constitutes the very core of what it means to be free", and that as psychotropic drugs are a potent method of altering an individual's mental process, prohibition of them under the MDA was in opposition to Article 9. The court however disagreed, calling Hardison's arguments a "portmanteau defense" and relying upon the UN Drug Conventions and the earlier case of R v Taylor to deny Hardison's right to appeal to a superior court. Hardison was convicted and given a 20-year prison sentence, though he was released on 29 May 2013 after nine years in prison. Criticism The recent development of neurosciences is increasing the possibility of controlling and influencing specific mental functions. The risks inherent in removing restrictions on controlled cognitive-enhancing drugs, including that of widening the gap between those able to afford such treatments and those unable to do so, have caused many to remain skeptical about the wisdom of recognizing cognitive liberty as a right. Political philosopher and Harvard University professor Michael J. Sandel, when examining the prospect of memory enhancement, wrote that "some who worry about the ethics of cognitive enhancement point to the danger of creating two classes of human beings – those with access to enhancement technologies, and those who must make do with an unaltered memory that fades with age."
See also Cognitive ergonomics Cosmetic pharmacology Drug liberalization Morphological freedom Neuroenhancement Neuroethics Neurolaw Personalized medicine Psychonautics Responsible drug use The Rhetoric of Drugs, by Jacques Derrida Self-ownership Techno-progressivism Thomas Szasz References Civil rights and liberties Drug culture Human rights Identity politics Medical ethics Psychedelics, dissociatives and deliriants Transhumanism
Cognitive liberty
Technology,Engineering,Biology
2,661
1,093,124
https://en.wikipedia.org/wiki/Oil%20of%20guaiac
Oil of guaiac is a fragrance ingredient used in soap and perfumery. Despite its name, it does not come from the Guaiacum tree, but from the palo santo tree (Bulnesia sarmientoi). Oil of guaiac is produced through steam distillation of a mixture of wood and sawdust from palo santo. It is sometimes incorrectly called guaiac wood concrete. It is a yellow to greenish-yellow semi-solid mass which melts at around 40–50 °C. Once melted, it can be cooled back to room temperature yet remain liquid for a long time. Oil of guaiac has a soft rose-like odour, similar to the odour of hybrid tea roses or violets. Because of this similarity, it has sometimes been used as an adulterant of rose oil. Oil of guaiac is primarily composed of 42–72% guaiol, along with bulnesol, δ-bulnesene, β-bulnesene, α-guaiene, guaioxide and β-patchoulene. It is considered non-irritating, non-sensitizing, and non-phototoxic to human skin. Oil of guaiac was also a pre-Renaissance remedy for syphilis. See also Guaiacum References Further reading D.L.J. Opdyke, 1974, Food Cosmet. Toxicol., 12 (Suppl.), 905 Essential oils Soaps History of pharmacy
Oil of guaiac
Chemistry
308
302,506
https://en.wikipedia.org/wiki/Jelly%20fungus
Jelly fungi are a paraphyletic group of several heterobasidiomycete fungal orders from different classes of the subphylum Agaricomycotina: Tremellales, Dacrymycetales, Auriculariales and Sebacinales. These fungi are so named because their foliose, irregularly branched fruiting body has, or appears to have, the consistency of jelly; in fact, many are somewhat rubbery and gelatinous. When dried, jelly fungi become hard and shriveled; when exposed to water, they return to their original form. Many species of jelly fungi can be eaten raw; poisonous jelly fungi are rare [needs source] and may not even exist. However, many species have an unpalatable texture or taste, often described as similar to that of soil, so they may or may not be sought in mushroom hunting. Some species, Tremella fuciformis for example, are not only edible but prized for use in soup and vegetable dishes. Notable jelly fungi Ascocoryne sarcoides – jelly drops, purple jellydisc (often mistaken for a basidiomycete but is not) Auricularia auricula-judae – wood ear, Judas' ear, black fungus, jelly ear Auricularia polytricha – cloud ear Calocera cornea Calocera viscosa – yellow tuning fork, yellow stagshorn fungus Dacrymyces palmatus – orange jelly Dacryopinax spathularia Exidia glandulosa – black jelly roll, witches' butter Exidia recisa – amber jelly roll, willow brain Guepiniopsis alpina – golden jelly cone Phlogiotis helvelloides – apricot jelly Myxarium nucleatum – crystal brain, granular jelly roll Pseudohydnum gelatinosum – jelly tooth, jelly tongue Tremella foliacea – jelly leaf Tremella fuciformis – snow fungus Tremella mesenterica – witches' butter, yellow brain fungus Tremellodendron and Sebacina spp. – jellied false corals See also Cloud ear fungus Notes References External links AmericanMushrooms.com: Jelly Fungi Agaricomycotina Fungus common names Mushroom types Basidiomycota
Jelly fungus
Biology
485
321,962
https://en.wikipedia.org/wiki/Woodall%20number
In number theory, a Woodall number (Wn) is any natural number of the form Wn = n × 2^n − 1 for some natural number n. The first few Woodall numbers are: 1, 7, 23, 63, 159, 383, 895, … . History Woodall numbers were first studied by Allan J. C. Cunningham and H. J. Woodall in 1917, inspired by James Cullen's earlier study of the similarly defined Cullen numbers. Woodall primes Woodall numbers that are also prime numbers are called Woodall primes; the first few exponents n for which the corresponding Woodall numbers Wn are prime are 2, 3, 6, 30, 75, 81, 115, 123, 249, 362, 384, ... ; the Woodall primes themselves begin with 7, 23, 383, 32212254719, ... . In 1976 Christopher Hooley showed that almost all Cullen numbers are composite. In October 1995, Wilfred Keller published a paper discussing several new Cullen primes and the efforts made to factorise other Cullen and Woodall numbers. Included in that paper is a personal communication to Keller from Hiromi Suyama, asserting that Hooley's method can be reformulated to show that it works for any sequence of numbers n × 2^(n + a) + b, where a and b are integers, and in particular, that almost all Woodall numbers are composite. It is an open problem whether there are infinitely many Woodall primes. As of 2018, the largest known Woodall prime is 17016602 × 2^17016602 − 1. It has 5,122,515 digits and was found by Diego Bertolotti in March 2018 in the distributed computing project PrimeGrid. Restrictions Starting with W4 = 63 and W5 = 159, every sixth Woodall number is divisible by 3; thus, in order for Wn to be prime, the index n cannot be congruent to 4 or 5 (modulo 6). Also, for a positive integer m, the Woodall number W(2^m) may be prime only if 2^m + m is prime. As of January 2019, the only known primes that are both Woodall primes and Mersenne primes are W2 = M3 = 7, and W512 = M521. Divisibility properties Like Cullen numbers, Woodall numbers have many divisibility properties. For example, if p is a prime number, then p divides W((p + 1)/2) if the Jacobi symbol (2 | p) is +1, and W((3p − 1)/2) if the Jacobi symbol (2 | p) is −1. Generalization A generalized Woodall number base b is defined to be a number of the form n × b^n − 1, where n + 2 > b; if a prime can be written in this form, it is then called a generalized Woodall prime. The smallest values of n such that n × b^n − 1 is prime for b = 1, 2, 3, ... are 3, 2, 1, 1, 8, 1, 2, 1, 10, 2, 2, 1, 2, 1, 2, 167, 2, 1, 12, 1, 2, 2, 29028, 1, 2, 3, 10, 2, 26850, 1, 8, 1, 42, 2, 6, 2, 24, 1, 2, 3, 2, 1, 2, 1, 2, 2, 140, 1, 2, 2, 22, 2, 8, 1, 2064, 2, 468, 6, 2, 1, 362, 1, 2, 2, 6, 3, 26, 1, 2, 3, 20, 1, 2, 1, 28, 2, 38, 5, 3024, 1, 2, 81, 858, 1, 2, 3, 2, 8, 60, 1, 2, 2, 10, 5, 2, 7, 182, 1, 17782, 3, ... The largest known generalized Woodall prime with base greater than 2 is 2740879 × 32^2740879 − 1. See also Mersenne prime – prime numbers of the form 2^n − 1. References Further reading External links Chris Caldwell, The Prime Glossary: Woodall number, and The Top Twenty: Woodall, and The Top Twenty: Generalized Woodall, at The Prime Pages. Steven Harvey, List of Generalized Woodall primes. Paul Leyland, Generalized Cullen and Woodall Numbers Integer sequences Unsolved problems in number theory Classes of prime numbers
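A minimal computational sketch (not part of the original article; it assumes Python with the sympy library for primality testing) reproducing the first few Woodall numbers and the small Woodall prime indices listed above:

    # Generate Woodall numbers W_n = n * 2**n - 1 and locate the small
    # Woodall primes. sympy.isprime is fine at this size; the record
    # multi-million-digit primes require specialized software such as
    # that used by the PrimeGrid project.
    from sympy import isprime

    def woodall(n: int) -> int:
        """Return the n-th Woodall number, n * 2**n - 1."""
        return n * 2**n - 1

    print([woodall(n) for n in range(1, 8)])
    # -> [1, 7, 23, 63, 159, 383, 895]

    print([n for n in range(1, 400) if isprime(woodall(n))])
    # -> [2, 3, 6, 30, 75, 81, 115, 123, 249, 362, 384]

Note that, consistent with the restriction above, none of the recovered prime indices is congruent to 4 or 5 modulo 6.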
Woodall number
Mathematics
950
44,871,096
https://en.wikipedia.org/wiki/Steve%20Punter
Steve Punter (born 1958 in Toronto, Ontario) is a Toronto-based programmer and media personality. Punter is noted for his work with Commodore microcomputers. He created WordPro, the first major word processor for the Commodore PET and Commodore 64 computers. He is also the designer of the Punter binary file transfer protocols which bear his name. He wrote the PunterNet networked BBS program in the late 1980s. In the 1980s Punter designed and operated the bulletin board system (BBS) for the Toronto PET Users Group. He was an occasional speaker at the World of Commodore expos, and is featured in the film BBS: The Documentary. He is an expert on cell phones and cell phone network coverage, in which capacity he has made occasional network TV appearances since the early 2000s. See also Punter protocol References External links Steve Punter at the Personal Computer Museum 1958 births Commodore people Computer programmers Living people People from Toronto
Steve Punter
Technology
193
8,407,466
https://en.wikipedia.org/wiki/Imbert%E2%80%93Fedorov%20effect
The Imbert–Fiodaraŭ effect (named after Fiodar Ivanavič Fiodaraŭ (1911–1994) and Christian Imbert (1937–1998)), also rendered as the Imbert–Fedorov effect, is an optical phenomenon in which a beam of circularly or elliptically polarized light undergoes a small sideways shift when refracted or totally internally reflected. The sideways shift is perpendicular to the plane containing the incident and reflected beams. This effect is the circular-polarization analog of the Goos–Hänchen effect. References Optical phenomena
Imbert–Fedorov effect
Physics
104
69,901,594
https://en.wikipedia.org/wiki/Anixia%20berkeleyi
Anixia berkeleyi is a species of fungus belonging to the genus Anixia. It was described in 1927 by Russian mycologist Nikolai Aleksandrovich Naumov. References Agaricomycetes Fungi described in 1927 Fungus species
Anixia berkeleyi
Biology
48
15,467,255
https://en.wikipedia.org/wiki/Health%202.0
"Health 2.0" is a term introduced in the mid-2000s, as the subset of health care technologies mirroring the wider Web 2.0 movement. It has been defined variously as including social media, user-generated content, and cloud-based and mobile technologies. Some Health 2.0 proponents see these technologies as empowering patients to have greater control over their own health care and diminishing medical paternalism. Critics of the technologies have expressed concerns about possible misinformation and violations of patient privacy. History Health 2.0 built on the possibilities for changing health care, which started with the introduction of eHealth in the mid-1990s following the emergence of the World Wide Web. In the mid-2000s, following the widespread adoption both of the Internet and of easy to use tools for communication, social networking, and self-publishing, there was spate of media attention to and increasing interest from patients, clinicians, and medical librarians in using these tools for health care and medical purposes. Early examples of Health 2.0 were the use of a specific set of Web tools (blogs, email list-servs, online communities, podcasts, search, tagging, Twitter, videos, wikis, and more) by actors in health care including doctors, patients, and scientists, using principles of open source and user-generated content, and the power of networks and social networks in order to personalize health care, to collaborate, and to promote health education. Possible explanations why health care has generated its own "2.0" term are the availability and proliferation of Health 2.0 applications across health care in general, and the potential for improving public health in particular. Current use While the "2.0" moniker was originally associated with concepts like collaboration, openness, participation, and social networking, in recent years the term "Health 2.0" has evolved to mean the role of Saas and cloud-based technologies, and their associated applications on multiple devices. Health 2.0 describes the integration of these into much of general clinical and administrative workflow in health care. As of 2014, approximately 3,000 companies were offering products and services matching this definition, with venture capital funding in the sector exceeding $2.3 billion in 2013. Public Health 2.0 Public Health 2.0 is a movement within public health that aims to make the field more accessible to the general public and more user-driven. The term is used in three senses. In the first sense, "Public Health 2.0" is similar to "Health 2.0" and describes the ways in which traditional public health practitioners and institutions are reaching out (or could reach out) to the public through social media and health blogs. In the second sense, "Public Health 2.0" describes public health research that uses data gathered from social networking sites, search engine queries, cell phones, or other technologies. A recent example is the proposal of statistical framework that utilizes online user-generated content (from social media or search engine queries) to estimate the impact of an influenza vaccination campaign in the UK. In the third sense, "Public Health 2.0" is used to describe public health activities that are completely user-driven. An example is the collection and sharing of information about environmental radiation levels after the March 2011 tsunami in Japan. In all cases, Public Health 2.0 draws on ideas from Web 2.0, such as crowdsourcing, information sharing, and user-centered design. 
While many individual healthcare providers have started making their own personal contributions to "Public Health 2.0" through personal blogs, social profiles, and websites, other larger organizations, such as the American Heart Association (AHA) and United Medical Education (UME), have larger teams of employees centered around online-driven health education, research, and training. These private organizations recognize the need for free and easy-to-access health materials, often building libraries of educational articles. Definitions The "traditional" definition of "Health 2.0" focused on technology as an enabler for care collaboration: "The use of social software and light-weight tools to promote collaboration between patients, their caregivers, medical professionals, and other stakeholders in health." In 2011, Indu Subaiya redefined Health 2.0 as the use in health care of new cloud, SaaS, mobile, and device technologies that are: Adaptable technologies which easily allow other tools and applications to link and integrate with them, primarily through use of accessible APIs Focused on the user experience, bringing in the principles of user-centered design Data driven, in that they both create data and present data to the user in order to help improve decision making This wider definition allows recognition of what is or what isn't a Health 2.0 technology. Typically, enterprise-based, customized client-server systems are not, while more open, cloud-based systems fit the definition. However, this line was blurring by 2011–12 as more enterprise vendors started to introduce cloud-based systems and native applications for new devices like smartphones and tablets. In addition, Health 2.0 has several competing terms, each with its own followers—if not exact definitions—including Connected Health, Digital Health, Medicine 2.0, and mHealth. All of these support a goal of wider change to the health care system, using technology-enabled system reform—usually changing the relationship between patient and professional. One earlier formulation characterized Health 2.0 as: Personalized search that looks into the long tail but cares about the user experience Communities that capture the accumulated knowledge of patients, caregivers, and clinicians, and explain it to the world Intelligent tools for content delivery—and transactions Better integration of data with content Wider health system definitions In the late 2000s, several commentators used Health 2.0 as a moniker for a wider concept of system reform, seeking a participatory process between patient and clinician: "New concept of health care wherein all the constituents (patients, physicians, providers, and payers) focus on health care value (outcomes/price) and use competition at the medical condition level over the full cycle of care as the catalyst for improving the safety, efficiency, and quality of health care". Health 2.0 defines the combination of health data and health information with (patient) experience, through the use of ICT, enabling the citizen to become an active and responsible partner in his/her own health and care pathway. Health 2.0 is participatory healthcare. Enabled by information, software, and communities that we collect or create, we the patients can be effective partners in our own healthcare, and we the people can participate in reshaping the health system itself. 
Definitions of Medicine 2.0 appear to be very similar but typically include more scientific and research aspects—Medicine 2.0: "Medicine 2.0 applications, services and tools are Web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers, that use Web 2.0 technologies as well as semantic web and virtual reality tools, to enable and facilitate specifically social networking, participation, apomediation, collaboration, and openness within and between these user groups." A systematic review by Tom Van de Belt, Lucien Engelen et al., published in JMIR, found 46 unique definitions of Health 2.0. Overview Health 2.0 refers to the use of a diverse set of technologies including Connected Health, electronic medical records, mHealth, telemedicine, and the use of the Internet by patients themselves such as through blogs, Internet forums, online communities, patient-to-physician communication systems, and other more advanced systems. A key concept is that patients themselves should have greater insight into and control over information generated about them. Additionally, Health 2.0 relies on the use of modern cloud- and mobile-based technologies. Much of the potential for change from Health 2.0 is facilitated by combining technology-driven trends such as Personal Health Records with social networking—"[which] may lead to a powerful new generation of health applications, where people share parts of their electronic health records with other consumers and 'crowdsource' the collective wisdom of other patients and professionals." Traditional models of medicine had patient records (held on paper or a proprietary computer system) that could only be accessed by a physician or other medical professional. Physicians acted as gatekeepers to this information, telling patients test results when and if they deemed it necessary. Such a model operates relatively well in situations such as acute care, where information about specific blood results would be of little use to a lay person, or in general practice where results were generally benign. However, in the case of complex chronic diseases, psychiatric disorders, or diseases of unknown etiology, patients were at risk of being left without well-coordinated care, because data about them was stored in a variety of disparate places and in some cases might contain the opinions of healthcare professionals which were not to be shared with the patient. Increasingly, medical ethics deems such actions to be medical paternalism, and they are discouraged in modern medicine. A hypothetical example demonstrates the increased engagement of a patient operating in a Health 2.0 setting: a patient goes to see their primary care physician with a presenting complaint, having first ensured their own medical record was up to date via the Internet. The treating physician might make a diagnosis or send for tests, the results of which could be transmitted directly to the patient's electronic medical record. If a second appointment is needed, the patient will have had time to research what the results might mean for them, what diagnoses may be likely, and may have communicated with other patients who have had a similar set of results in the past. On a second visit a referral might be made to a specialist. The patient might have the opportunity to search for the views of other patients on the best specialist to go to, and, in combination with their primary care physician, decide whom to see. 
The specialist gives a diagnosis along with a prognosis and potential options for treatment. The patient has the opportunity to research these treatment options and take a more proactive role in coming to a joint decision with their healthcare provider. They can also choose to submit more data about themselves, such as through a personalized genomics service, to identify any risk factors that might improve or worsen their prognosis. As treatment commences, the patient can track their health outcomes through a data-sharing patient community to determine whether the treatment is having an effect for them, and they can stay up to date on research opportunities and clinical trials for their condition. They also have the social support of communicating with other patients diagnosed with the same condition throughout the world. Level of use of Web 2.0 in health care Partly due to weak definitions, the novelty of the endeavor, and its nature as an entrepreneurial (rather than academic) movement, little empirical evidence exists to explain how much Web 2.0 is being used in general. While it has been estimated that nearly one-third of the 100 million Americans who have looked for health information online say that they or people they know have been significantly helped by what they found, this study considers only the broader use of the Internet for health management. A study examining physician practices has suggested that a segment of 245,000 physicians in the U.S. are using Web 2.0 for their practice, indicating that use is beyond the early-adopter stage with regard to physicians and Web 2.0. Types of Web 2.0 technology in health care Web 2.0 is commonly associated with technologies such as podcasts, RSS feeds, social bookmarking, weblogs (health blogs), wikis, and other forms of many-to-many publishing; social software; and web application programming interfaces (APIs). The following are examples of uses that have been documented in academic literature. Criticism of the use of Web 2.0 in health care Hughes et al. (2009) argue there are four major tensions represented in the literature on Health/Medicine 2.0. These concern: the lack of clear definitions issues around the loss of control over information that doctors perceive safety and the dangers of inaccurate information issues of ownership and privacy Several criticisms have been raised about the use of Web 2.0 in health care. Firstly, Google has limitations as a diagnostic tool for Medical Doctors (MDs), as it may be effective only for conditions with unique symptoms and signs that can easily be used as a search term. Studies of its accuracy have returned varying results, and this remains in dispute. Secondly, long-held concerns exist about the effects of patients obtaining information online, such as the idea that patients may delay seeking medical advice or accidentally reveal private medical data. Finally, concerns exist about the quality of user-generated content leading to misinformation, such as perpetuating the discredited claim that the MMR vaccine may cause autism. In contrast, a 2004 study of a British epilepsy online support group suggested that only 6% of information was factually wrong. In a 2007 Pew Research Center survey of Americans, only 3% reported that online advice had caused them serious harm, while nearly one-third reported that they or their acquaintances had been helped by online health advice. 
See also e-Patient Health 3.0 Patient opinion leader Digital health References External links "Web Site Harnesses Power of Social Networks", The Washington Post, October 19, 2009 Medicine in society Telehealth Web 2.0 Health informatics
Health 2.0
Biology
2,735
63,989,962
https://en.wikipedia.org/wiki/Topological%20vector%20lattice
In mathematics, specifically in functional analysis and order theory, a topological vector lattice is a Hausdorff topological vector space (TVS) $X$ that has a partial order $\leq$ making it into a vector lattice that possesses a neighborhood base at the origin consisting of solid sets. Ordered vector lattices have important applications in spectral theory. Definition If $X$ is a vector lattice then by the vector lattice operations we mean the following maps: the three maps of $X$ to itself defined by $x \mapsto |x|$, $x \mapsto x^{+}$, $x \mapsto x^{-}$, and the two maps from $X \times X$ into $X$ defined by $(x, y) \mapsto \sup\{x, y\}$ and $(x, y) \mapsto \inf\{x, y\}$. If $X$ is a TVS over the reals and a vector lattice, then $X$ is locally solid if and only if (1) its positive cone is a normal cone, and (2) the vector lattice operations are continuous. If $X$ is a vector lattice and an ordered topological vector space that is a Fréchet space in which the positive cone is a normal cone, then the lattice operations are continuous. If $X$ is a topological vector space (TVS) and an ordered vector space then $X$ is called locally solid if $X$ possesses a neighborhood base at the origin consisting of solid sets. A topological vector lattice is a Hausdorff TVS $X$ that has a partial order $\leq$ making it into a vector lattice that is locally solid. Properties Every topological vector lattice has a closed positive cone and is thus an ordered topological vector space. Let $\mathcal{B}$ denote the set of all bounded subsets of a topological vector lattice $X$ with positive cone $C$, and for any subset $S$, let $[S]$ be the $C$-saturated hull of $S$. Then the topological vector lattice's positive cone $C$ is a strict $\mathcal{B}$-cone, where $C$ being a strict $\mathcal{B}$-cone means that $\{(B \cap C) - (B \cap C) : B \in \mathcal{B}\}$ is a fundamental subfamily of $\mathcal{B}$ (that is, every $B \in \mathcal{B}$ is contained as a subset of some element of this family). If a topological vector lattice $X$ is order complete then every band is closed in $X$. Examples The $L^{p}$ spaces ($1 \leq p \leq \infty$) are Banach lattices under their canonical orderings. These spaces are order complete for $p < \infty$. See also References Bibliography Functional analysis
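Since the definition above turns on solid sets, the following LaTeX fragment restates solidity and local solidity compactly. It is an illustrative formalization consistent with standard references, not text from the article itself.

```latex
% A subset S of a vector lattice X is solid when membership is
% inherited downward in absolute value:
\[
  S \text{ is solid} \quad\iff\quad
  \bigl( x \in S \ \text{and}\ |y| \leq |x| \bigr) \implies y \in S.
\]
% X is locally solid when the origin has a neighborhood base
% \mathcal{N} consisting of solid sets:
\[
  \forall U \in \mathcal{N} : \qquad
  x \in U \ \text{and}\ |y| \leq |x| \ \implies\ y \in U.
\]
```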
Topological vector lattice
Mathematics
376
5,181,859
https://en.wikipedia.org/wiki/Nassib%20Lahoud
Nassib Lahoud (23 November 1944 – 1 February 2012) was a Lebanese political figure. He held various posts including Member of Parliament, Ambassador to the United States of America, and Minister of State (without portfolio). He was also head of the Democratic Renewal Movement and a leading figure in the March 14 coalition, which nominated him as their presidential candidate when they held the parliamentary majority in 2008. His election was vetoed by Hezbollah and its allies, who refused to attend parliament and threatened not to recognise any president who was not the product of a consensus agreement between Lebanese political forces. President Michel Suleiman was elected to the post on 25 May 2008. Upon his death, Nassib Lahoud was referred to as the "President of our dreams" by Hezbollah's opponents as well as journalists and prominent members of civil society. Early life and education Nassib Lahoud was born in Baabdat, Matn, Lebanon on 23 November 1944. He was a member of the Lahoud family, one of the most prominent Lebanese-Christian political families (see List of political families). He was the son of Salim Lahoud, an ex-Member of Parliament and former minister of foreign affairs and minister of defense under President Camille Chamoun in the 1960s. Lahoud earned a BS degree in electrical engineering from Loughborough University, United Kingdom, in 1968. Career After finishing his engineering studies, Lahoud founded Lahoud Engineering Co. Ltd. London (1972); the company's activities are mostly the construction of large-scale power and heavy industry plants. Lahoud had a career as a businessman, both within Lahoud Engineering and in the field of real estate, and was one of the largest landowners in the Metn. Lahoud was also a prolific art collector, and was particularly interested in European impressionist masters as well as Lebanese contemporary artists. Throughout his years as a businessman, Lahoud was active in Lebanese politics and was close to President Camille Chamoun until 1972, when Chamoun supported another candidate at the parliamentary elections. In the 1980s, still living in Mayfair, London, Lahoud became less interested in business and shifted his focus to Lebanese politics. Politics After his backstage participation in the Taif conference in Saudi Arabia, which ended the destructive Lebanese civil war, he became ambassador to the United States of America, where he had high-level relationships with leading U.S. politicians. In 1991 he returned to Lebanon and was appointed Maronite Christian parliamentarian of the Metn region. He was re-elected in the legislative elections organised in 1992, 1996 and 2000. From the beginning of his political life, he situated himself in opposition to the pro-Syrian governments that ruled the country. He opposed the economic policies of the late prime minister Rafiq Hariri, and Syrian interference in the Lebanese political scene. He voted against the constitutional amendments imposed by Syria to extend the mandates of presidents Elias Hrawi in 1995, and Emile Lahoud in 2004. In 2001 he joined the Qornet Shehwan Gathering, regrouping prominent Christian opposition figures under the patronage of Maronite Patriarch Nasrallah Boutros Sfeir. The same year, along with about two hundred and fifty Lebanese intellectuals (including Misbah Ahdab, Camille Ziadé, Nadim Salem, Antoine Haddad, Ziad Baroud, Wafic Zantout, Mona Fayad, Hareth Sleiman, Malek Mroueh, and Melhem Chaoul), he founded the Democratic Renewal Movement. 
The movement presents itself as a reformist, secular (laic) opposition political movement. Lahoud was a prominent figure of the anti-Syrian opposition movement that started to organise after the extension of the pro-Syrian president Emile Lahoud's term and which gained more political weight following the assassination of Rafiq Hariri on 14 February 2005. The Democratic Renewal Movement was part of the March 14 Alliance, which started as an anti-Syrian coalition. Nassib Lahoud took part in the 2005 legislative elections in his Metn district, at the head of the opposition list. He lost the elections against Michel Aoun's Free Patriotic Movement and allied pro-Syrian figures. Presidential elections Nassib Lahoud had been considered since 1995 to be one of the most serious candidates for the presidency. Despite his defeat in the parliamentary elections in 2005, after four terms in office, he announced that he was prepared to run for president if the March 14 Alliance - with a majority in parliament - decided to back him. His willingness to run for the presidency was part of a wider campaign to remove the pro-Syrian president Emile Lahoud (Nassib's cousin), who was widely considered to be the last bastion of Syrian hegemony over the country. In 2008 the parliamentary majority ultimately designated Lahoud as its presidential nominee, but the Hezbollah-led opposition refused to attend a legislative session to elect a new president, thus ensuring there was no quorum. President Michel Suleiman was later elected by consensus after Hezbollah made clear it would not recognise a president chosen unilaterally by the parliamentary majority, whether Nassib Lahoud or any other candidate. In a testament to the political tension prevalent during those times, Hezbollah supporters eventually attacked March 14 majority-affiliated buildings and newspapers in Beirut, in a show of force reminiscent of the days of the Lebanese civil war. This was due to a refusal by the government to rescind two controversial executive decrees that aimed to dismantle Hezbollah's private telecommunications system as well as sack the military general responsible for airport security. Nassib Lahoud was later appointed Minister of State, as the representative of the ex-Qornet Shehwan Gathering, in the first government under President Michel Suleiman's mandate. Lahoud refused to run in the next parliamentary elections alongside old symbols of Syrian hegemony over Lebanon, even though they were now new independent allies of the March 14 coalition. He thus withdrew from the 2009 parliamentary elections. Personal life Lahoud was married to Abla Fustuq, with whom he had two children, Salim and Joumana. He also had a step-daughter, Roula. Abla is part of the Fustuq business family, based in London. She is the sister of Mahmoud Fustuq and Aida Fustuq, who was once married to Saudi ruler King Abdullah. Lahoud was very fond of his hometown Baabdat in the Metn. He was known for his many high-profile friendships within politics, the arts, business, academia and society around the world. An avid chess and backgammon player, as well as art collector, Lahoud had close family and business ties with the Saudis. Death Lahoud died at the age of 67 at Hotel Dieu Hospital in Ashrafieh, Beirut on 1 February 2012 after a long illness. He received a state funeral which was attended by the President and the Prime Minister, and posthumous homages were published in many newspapers and websites. 
References External links 1944 births 2012 deaths Alumni of Loughborough University Ambassadors of Lebanon to the United States Candidates for President of Lebanon Democratic Renewal (Lebanon) politicians Ministers without portfolio of Lebanon Lahoud family Lebanese Maronites Members of the Parliament of Lebanon Lebanese engineers Electrical engineers People of the Lebanese Civil War 20th-century Lebanese diplomats
Nassib Lahoud
Engineering
1,508
40,014,290
https://en.wikipedia.org/wiki/Greedy%20embedding
In distributed computing and geometric graph theory, greedy embedding is a process of assigning coordinates to the nodes of a telecommunications network in order to allow greedy geographic routing to be used to route messages within the network. Although greedy embedding has been proposed for use in wireless sensor networks, in which the nodes already have positions in physical space, these existing positions may differ from the positions given to them by greedy embedding, which may in some cases be points in a virtual space of a higher dimension, or in a non-Euclidean geometry. In this sense, greedy embedding may be viewed as a form of graph drawing, in which an abstract graph (the communications network) is embedded into a geometric space. The idea of performing geographic routing using coordinates in a virtual space, instead of using physical coordinates, is due to Rao et al. Subsequent developments have shown that every network has a greedy embedding with succinct vertex coordinates in the hyperbolic plane, that certain graphs including the polyhedral graphs have greedy embeddings in the Euclidean plane, and that unit disk graphs have greedy embeddings in Euclidean spaces of moderate dimensions with low stretch factors. Definitions In greedy routing, a message from a source node s to a destination node t travels to its destination by a sequence of steps through intermediate nodes, each of which passes the message on to a neighboring node that is closer to t. If the message reaches an intermediate node x that does not have a neighbor closer to t, then it cannot make progress and the greedy routing process fails. A greedy embedding is an embedding of the given graph with the property that a failure of this type is impossible. Thus, it can be characterized as an embedding of the graph with the property that for every two nodes x and t, there exists a neighbor y of x such that d(x,t) > d(y,t), where d denotes the distance in the embedded space. Graphs with no greedy embedding Not every graph has a greedy embedding into the Euclidean plane; a simple counterexample is given by the star K1,6, a tree with one internal node and six leaves. Whenever this graph is embedded into the plane, some two of its leaves must form an angle of 60 degrees or less, from which it follows that at least one of these two leaves does not have a neighbor that is closer to the other leaf. In Euclidean spaces of higher dimensions, more graphs may have greedy embeddings; for instance, K1,6 has a greedy embedding into three-dimensional Euclidean space, in which the internal node of the star is at the origin and the leaves are a unit distance away along each coordinate axis. However, for every Euclidean space of fixed dimension, there are graphs that cannot be embedded greedily: whenever the number n is greater than the kissing number of the space, the graph K1,n has no greedy embedding. Hyperbolic and succinct embeddings Unlike the case for the Euclidean plane, every network has a greedy embedding into the hyperbolic plane. The original proof of this result, by Robert Kleinberg, required the node positions to be specified with high precision, but subsequently it was shown that, by using a heavy path decomposition of a spanning tree of the network, it is possible to represent each node succinctly, using only a logarithmic number of bits per point. 
In contrast, there exist graphs that have greedy embeddings in the Euclidean plane, but for which any such embedding requires a polynomial number of bits for the Cartesian coordinates of each point. Special classes of graphs Trees The class of trees that admit greedy embeddings into the Euclidean plane has been completely characterized, and a greedy embedding of a tree can be found in linear time when it exists. For more general graphs, some greedy embedding algorithms such as the one by Kleinberg start by finding a spanning tree of the given graph, and then construct a greedy embedding of the spanning tree. The result is necessarily also a greedy embedding of the whole graph. However, there exist graphs that have a greedy embedding in the Euclidean plane but for which no spanning tree has a greedy embedding. Planar graphs Papadimitriou and Ratajczak conjectured that every polyhedral graph (a 3-vertex-connected planar graph, or equivalently by Steinitz's theorem the graph of a convex polyhedron) has a greedy embedding into the Euclidean plane. By exploiting the properties of cactus graphs, Leighton and Moitra proved the conjecture; the greedy embeddings of these graphs can be defined succinctly, with logarithmically many bits per coordinate. However, the greedy embeddings constructed according to this proof are not necessarily planar embeddings, as they may include crossings between pairs of edges. For maximal planar graphs, in which every face is a triangle, a greedy planar embedding can be found by applying the Knaster–Kuratowski–Mazurkiewicz lemma to a weighted version of a straight-line embedding algorithm of Schnyder. The strong Papadimitriou–Ratajczak conjecture, that every polyhedral graph has a planar greedy embedding in which all faces are convex, remains unproven. Unit disk graphs The wireless sensor networks that are the target of greedy embedding algorithms are frequently modeled as unit disk graphs, graphs in which each node is represented as a unit disk and each edge corresponds to a pair of disks with nonempty intersection. For this special class of graphs, it is possible to find succinct greedy embeddings into a Euclidean space of polylogarithmic dimension, with the additional property that distances in the graph are accurately approximated by distances in the embedding, so that the paths followed by greedy routing are short. References Geometric graph theory Routing algorithms Distributed computing Telecommunications
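To make the routing rule from the Definitions section concrete, here is a minimal Python sketch of greedy routing over a given embedding, plus a brute-force test of whether an embedding is greedy. The dictionary-based graph representation and the Euclidean metric are illustrative assumptions; for the hyperbolic embeddings discussed above, the hyperbolic distance would be substituted.

```python
import math

def greedy_route(adjacency, coords, source, target):
    """Greedy geographic routing: repeatedly forward to the neighbor
    closest to the target, failing if no neighbor is strictly closer.
    adjacency: dict mapping node -> iterable of neighboring nodes
    coords:    dict mapping node -> (x, y) position from the embedding
    Returns the path as a list of nodes, or None if routing gets stuck."""
    def dist(a, b):
        return math.dist(coords[a], coords[b])

    path = [source]
    current = source
    while current != target:
        best = min(adjacency[current], key=lambda n: dist(n, target))
        if dist(best, target) >= dist(current, target):
            return None  # local minimum: no neighbor is closer to the target
        path.append(best)
        current = best
    return path  # terminates: distance to the target strictly decreases

def is_greedy_embedding(adjacency, coords):
    """An embedding is greedy exactly when greedy routing succeeds
    for every ordered pair of distinct nodes."""
    nodes = list(adjacency)
    return all(greedy_route(adjacency, coords, s, t) is not None
               for s in nodes for t in nodes if s != t)
```

For instance, with the star K1,6 discussed above, any assignment of planar coordinates leaves some leaf-to-leaf route stuck at its first step (the hub is no closer to the target than the source leaf), so is_greedy_embedding returns False.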
Greedy embedding
Mathematics,Technology
1,227
9,729,735
https://en.wikipedia.org/wiki/Stonesetting
Stonesetting is the art of securely setting or attaching gemstones into jewelry. Cuts There are two general types of gemstone cutting: cabochon and facet. Cabochons are smooth, often domed, with flat backs. Agates and turquoise are usually cut this way, but precious stones such as rubies, emeralds and sapphires may also be. Many stones like star sapphires and moonstones must be cut this way in order to properly display their unusual appearance. A faceted shape resembles that of the modern diamond. It has flat, polished surfaces (facets) on a typically transparent stone, which refract light inside the gemstone and reflect light on the outside. In the case of a cabochon stone, the side of the stone is usually cut at a shallow angle, so that when the bezel is pushed over the stone, the angle permits it to hold the stone tightly in place. In the case of faceted stones, a shallow groove is cut into the side of the bezel into which the girdle of the stone is placed, with metal prongs then pushed over the face of the stone, holding it in place; cabochons may also be set into prong settings. In both cases, the pressure and the angle of the prongs hold the stone in place. Just as the angle of the sides of a cabochon creates the pressure to hold the stone in place, an analogous principle underlies the setting of faceted stones. If one looks at a side view of a round diamond, for example, one will see that there is an outer edge, called the girdle, and the top angles up from there, and the bottom angles down from there. Faceted stones are set by "pinching" that angle with metal. All of the styles of faceted stone setting use this concept in one way or another. Settings There are thousands of variations of setting styles, but there are several fundamental types: Bezel The earliest known technique of attaching stones to jewelry was bezel setting. A bezel is a strip of metal bent into the shape and size of the stone and then soldered to the piece of jewelry. The stone is then inserted into the bezel, and the metal edge of the bezel pressed over the edge of the stone, holding it in place. This method works well for either cabochons or faceted stones. Prong A prong setting is the simplest and most common type of setting, largely because it uses the least amount of metal to hold the stone in place, displaying most of the stone and forming a secure setting. Generally, a prong setting is formed of a number of short, thin strips of metal, called prongs, which are arranged in a shape and size to hold the given stone, and are fixed at the base. Then a burr of the proper size is used to cut what is known as a "bearing", which is a notch that corresponds to the angles of the stone. The burr most often used is called a "hart bur", and is angled and sized for the job of setting diamonds. The bearing is cut equally into all of the prongs and at the same height above the base. The stone is then inserted, and pliers or a pusher are used to bend the prongs gently over the crown of the stone; the tops of the prongs are then clipped off with snips, filed to an even height above the stone, and finished. Usually a "cup burr" is used to give the prong a nice round tip. A cup burr is in the shape of a hemisphere with teeth on the inside, for making rounded tips on wires and prongs. 
There are many variations of prong settings, ranging from just two prongs, through the more common four-prong setting, to 24 prongs or more, with many variations involving decoration, size and shapes of the prongs themselves, and how they are fixed or used in jewelry. The method of setting is generally the same for all, no matter how many prongs are present. Channel A channel setting is a method whereby stones are suspended between two bars or strips of metal, called channels. Typically, a line of small stones set between two bars is called a channel setting, and a design where the bars cross the stones is called a bar set. The channel is a variation of a "U" shape, with two sides and a bottom. The sides are spaced slightly narrower than the width of the stone or stones to be set, and then, using the same burs as in prong setting, a small notch, called a bearing, is cut into each wall. The stone is put in place in those notches, and the metal on top is pushed down, tightening the stone in place. The proper way to set a channel is to cut a notch for each stone, but for cheaper production work sometimes a groove is cut along each channel. Since the metal can be very stiff and strong, a reciprocating hammer, similar to a jackhammer but jewelry sized, may be used to hammer down the metal, as it can be difficult to do by hand. The metal is then filed down and finished, and the inner edge near the stones cleaned up and straightened as necessary. As with all jewelry, there can be many variations of channel work. At times the walls will be raised—sometimes a center stone will be set between two bars that rise high from the base ring—or the channel may be cut directly into the surface, making the stones flush with the metal. Bead "Bead setting" is a generic term for setting a stone directly into metal using gravers, also called burins, which are essentially tiny chisels. A hole is drilled directly into the surface of the metal, before a ball burr is used to make a concave depression the size of the stone. Some setters will set the stone into the concave depression, and some will use a hart burr to cut a bearing around the edge. The stone is then inserted into the space, and gravers or burins are used to lift and push a tiny bit of the metal into and over the edge of the stone. Then a beading tool – a simple steel shaft with a concave dimple cut into the tip – is pushed onto the bit of metal, rounding and smoothing it, pushing it firmly onto the stone, and creating a "bead". There are many types of setting that use the bead setting technique. When many stones are set closely together in this fashion, covering a surface, this is known as a "pavé" setting, from the French for "paved" or "cobblestoned". When a long line is engraved into the metal going up to each of the beads, this is known as a "star setting". The other common usage of this setting is known as "bead and bright", "grain setting" or "threading" in Europe, alongside many other names. This is when, after the stone is set as described above, the background metal around the stone is cut away, usually in geometric shapes, resulting in the stone being left with four beads in a lowered box shape with an edge around it. Often it is a row of stones, so it will be in a long shape with a raised edge and a row of stones and beads down the center. This type of setting is still used often, but it was very common in the early- to mid-20th century. 
Burnish Burnish setting, also sometimes referred to as flush setting, shot setting, or gypsy setting, is similar to bead setting, but after the stone is inserted into the space, instead of using a graver to lift beads, a burnishing tool is used to push the metal around the stone. The stone will be roughly flush with the surface, with a burnished or rubbed edge around it. This type of setting has a long history, and has seen a resurgence in contemporary jewelry. Sometimes the metal is finished using sandblasting. References Jewellery making Gemstones
Stonesetting
Physics
1,670
18,327,191
https://en.wikipedia.org/wiki/MEDUSA%20%28weapon%29
MEDUSA (Mob Excess Deterrent Using Silent Audio) is a directed-energy non-lethal weapon designed by WaveBand Corporation in 2003–2004 for temporary personnel incapacitation. The weapon is based on the microwave auditory effect, which results in a strong sound sensation in the human head when it is subject to certain kinds of pulsed/modulated microwave radiation. The developers claimed that through the combination of pulse parameters and pulse power, it is possible to raise the auditory sensation to a “discomfort” level, deterring personnel from entering a protected perimeter or, if necessary, temporarily incapacitating particular individuals. In 2005, Sierra Nevada Corporation acquired WaveBand Corporation. Description According to the U.S. Navy in 2004, the system would be "portable, low power, have a controllable radius of coverage, be able to switch from crowd to individual coverage, cause a temporarily incapacitating effect, have a low probability of fatality or permanent injury, cause no damage to property, and have a low probability of affecting friendly personnel". In addition to perimeter protection and crowd control, a proposed application of MEDUSA was "for use in systems to assist communication with hearing impaired persons". Fate The project received a positive initial evaluation from the Navy. However, Sierra Nevada Corporation had discontinued the project as of 2008, "possibly because it may have [been] shown to permanently damage human brain tissue". See also Sonic weaponry References Neuroscience Non-lethal weapons
MEDUSA (weapon)
Biology
292
24,716,821
https://en.wikipedia.org/wiki/The%20Non-GMO%20Project
The Non-GMO Project is a 501(c)(3) non-profit organization focusing on genetically modified organisms. The organization began as an initiative of independent natural foods retailers in the U.S. and Canada, with the stated aim to label products produced in compliance with their Non-GMO Project Standard, which aims to prevent genetically modified foodstuffs from being present in retail food products. The organization is headquartered in Bellingham, Washington. The Non-GMO label began use in 2012 with Numi Organic Tea products. History The Non-GMO Project was incorporated in California on December 14, 2006. Two natural food retailers formed the project, with a goal of creating a standardized definition for non-genetically modified organisms. The project worked with FoodChain Global Advisors, a part of Global ID Group, which provided the scientific and technical expertise. In spring 2007, the project's board of directors was expanded to include representatives from additional groups, and formed advisory boards for technical and policy issues. Mission The Non-GMO Project's stated mission is "to preserve and build sources of non-GMO products, educate consumers, and provide verified non-GMO choices". It provides third-party verification and labeling for non-genetically modified food and products. The project also works with food manufacturers, distributors, growers, and seed suppliers to develop standards for detection of genetically modified organisms and for the reduction of contamination risk of the non-genetically modified food supply with genetically modified organisms. The Non-GMO Project claims to "educate consumers and the food industry to help build awareness about GMOs and their impact on our health". It asserts that everyone deserves an informed choice about whether or not to consume genetically modified organisms. Standard and label The Non-GMO Project maintains a consensus-based standard, which outlines their system for ensuring best practices for avoiding genetically modified organisms. Methods such as segregation, traceability, risk assessment, sampling techniques, and quality control management are emphasized in the Standard. The project's Product Verification Program assesses ingredients, products, and manufacturing facilities to establish compliance with the standard. All ingredients with major risk must be tested for compliance with the Non-GMO Project Standard prior to their use in a Non-GMO Project Verified Product. The process is managed through a web-based application and evaluation program developed for the project. The project's label indicates compliance with the standards. Sales According to the Non-GMO Project, as of September 2013, sales of Non-GMO Project Verified products exceeded $3.5 billion. This would be approximately 0.4% of the total food sales in the United States ($1.3 trillion in 2012). The Non-GMO Project reports 797 verification program enrollment inquiries in the second quarter of 2013 compared to 194 inquiries during the same period in 2012, representing more than a 300% increase. Controversy The Non-GMO Project often puts its labels on products containing inputs it considers "low-risk", including foods with inputs that could not be derived from GMOs. The Project maintains this is because many products that appear to be inherently non-GMO (such as orange juice, oats, tea and table salt) often contain GMO-derived additives (such as citric acid). 
Some critics say the Project may be using a business model that is based on fear and lack of information. There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. See also Genetically modified food controversies Health food Biological patent TRIPS Agreement References External links "Non-GMO Project Standard 03.31.2023". Non-GMO Project. March 31, 2023. 501(c)(3) organizations Biotechnology organizations Biological engineering Genetic engineering by country Non-profit organizations based in Bellingham, Washington 2007 establishments in the United States Organizations established in 2007 Pseudoscience Pseudoscientific diet advocates
The Non-GMO Project
Engineering,Biology
875
12,565,998
https://en.wikipedia.org/wiki/NGC%203603-A1
NGC 3603-A1 (HD 97950A1) is a double-eclipsing binary star system located at the centre of the HD 97950 cluster in the NGC 3603 star-forming region, about 25,000 light years from Earth. Both stars are of spectral type WN6h and among the most luminous and most massive known. HD 97950 was catalogued as a star, but was known to be a dense cluster or close multiple star. In 1926, the six brightest members were given letters from A to F, although several of them have since been resolved into more than one star. Star A was first resolved into three components using speckle interferometry, although they can now be directly imaged using space-based or adaptive optics. Component A1 was finally determined to be a spectroscopic binary. The two component stars of NGC 3603-A1 circle each other every 3.77 days, and show brightness variations of about 0.3 magnitudes due to eclipses. The stars orbit very close to each other, separated by barely their own diameters and at or near to filling their Roche lobes. The masses of A1a and A1b determined from the orbital parameters are approximately 116 and 89 solar masses, respectively. This makes them the two most massive stars directly measured, i.e. with their masses determined (using Keplerian orbits), and not estimated from models. The masses estimated from analysis of the physical properties are slightly higher, at about 120 and 92 solar masses. Each component is a Wolf-Rayet (WR) star, with spectra dominated by strong broadened emission lines. Type WN6 indicates that ionised nitrogen lines are strong in comparison to ionised carbon lines, and the suffix h indicates that hydrogen is also seen in the spectrum. This type of WR star is not the classical stripped helium-burning aged star, but a young highly luminous object with CNO cycle fusion products showing at the surface due to strong conventional and rotational mixing, and high mass loss rates from the atmosphere. The emission lines are generated in the stellar wind and the photosphere is completely hidden. The surface fraction of hydrogen is still estimated to be 60–70%. Although the stars are very young, around 1.5 million years old, they have already lost a considerable fraction of their initial masses. The initial masses are estimated to have been about 148 and 106 solar masses, meaning they have lost roughly 28 and 14 solar masses respectively. References External links NASA Image of the day Space Daily Carina (constellation) Wolf–Rayet stars NGC 3603 097950A1 054948A1 Durchmusterung objects Eclipsing binaries
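As a rough illustration of how a total system mass follows from Keplerian orbital parameters of the kind described above, here is a minimal Python sketch of Kepler's third law. The 3.77-day period comes from the article; the separation value is an illustrative assumption chosen only to produce a total mass of the right order, not a measured quantity.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg

def total_mass_solar(period_days, separation_m):
    """Total binary mass from Kepler's third law:
    M1 + M2 = 4 * pi**2 * a**3 / (G * P**2), returned in solar masses."""
    p_seconds = period_days * 86400.0
    return 4 * math.pi**2 * separation_m**3 / (G * p_seconds**2) / M_SUN

# 3.77-day period from the article; 4.2e10 m (~0.28 AU) is a placeholder
# separation, giving a total mass of roughly 200 solar masses.
print(f"{total_mass_solar(3.77, 4.2e10):.0f} solar masses")
```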
NGC 3603-A1
Astronomy
523
9,059,911
https://en.wikipedia.org/wiki/Tempest%20prognosticator
The tempest prognosticator, also known as the leech barometer, is a 19th-century invention by George Merryweather in which leeches are used in a barometer. The twelve leeches are kept in small bottles inside the device; when they become agitated by an approaching storm, they attempt to climb out of the bottles and trigger a small hammer which strikes a bell. The likelihood of a storm is indicated by the number of times the bell is struck. Invention and development Dr. Merryweather, honorary curator of the Whitby Literary and Philosophical Society's Museum, detailed the sensitivity that medicinal leeches displayed in reaction to electrical conditions in the atmosphere. He was inspired by two lines from Edward Jenner's poem Signs of Rain: "The leech disturbed is newly risen; Quite to the summit of his prison." Merryweather spent much of 1850 developing his ideas and came up with six designs for what he originally referred to as "An Atmospheric Electromagnetic Telegraph, conducted by Animal Instinct." These ranged from a cheap version, which he envisaged would be used by the government and the shipping industries, to a more expensive design. The expensive design, which took inspiration from the architecture of Indian temples, was made by local craftsmen and shown in the 1851 Great Exhibition at The Crystal Palace in London. On 27 February 1851, he gave a nearly three-hour address to members of the Philosophical Society entitled "Essay explanatory of the Tempest Prognosticator in the building of the Great Exhibition for the Works of Industry of All Nations." Method The tempest prognosticator comprises twelve pint bottles in a circle around and beneath a large bell. Atop the bottles are small metal tubes which contain a piece of whalebone and a wire connecting them to small hammers positioned to strike the bell. In his essay Merryweather described the workings of the device: a leech would have difficulty entering the metal tube, but would endeavour to do so if sufficiently motivated by the likelihood of bad weather; by ringing the bell, it would signify that that individual leech was indicating an approaching storm. Merryweather referred to the leeches as his "jury of philosophical councilors" and said that the more of them that rang the bell, the more likely it was that a storm would occur. In his essay Merryweather also noted other features of the design, including the fact that the leeches were placed in glass bottles arranged in a circle to prevent them from feeling "the affliction of solitary confinement". Accuracy and success Merryweather spent all of 1850 testing the device, sending letters to the president of the Philosophical Society and the Whitby Institute, Henry Belcher, to warn him of impending storms. The results of 28 of these predictions are kept in the library of Whitby Museum. Merryweather stated in his essay the great success that he had had with the device. Merryweather lobbied for the government to make use of his design around the British coastline but they instead opted for Robert FitzRoy's storm glass. Replicas The original device has been lost, but at least three replicas have been made. The hundredth anniversary of the invention brought renewed interest as a replica was made for the 1951 Festival of Britain. This non-working version was made from the description in a printed copy of Merryweather's essay and a copperplate drawing of the original. The device was shown in the Dome of Discovery and given to the Whitby Philosophical Society when the festival ended. 
Plans and photographs of this replica were then used to create faithful working models, one at Barometer World near Okehampton, Devon, and another at the Great Dickens Christmas Fair in San Francisco. See also Miner's canary Frog battery Pasilalinic-sympathetic compass Project Pigeon References External links Caring for your leech Meteorological instrumentation and equipment
Tempest prognosticator
Technology,Engineering
773
3,257,382
https://en.wikipedia.org/wiki/VIATRA
VIATRA is an open-source model transformation framework based on the Eclipse Modeling Framework (EMF) and hosted by the Eclipse Foundation. VIATRA supports the development of model transformations with specific focus on event-driven, reactive transformations, i.e., rule-based scenarios where transformations occur as reactions to certain external changes in the model. Building upon an incremental query support for locating patterns and changes in the model, VIATRA offers a language (the VIATRA Query Language, VQL) to define transformations and a reactive transformation engine to execute certain transformations upon changes in the underlying model. Application domains VIATRA, as an open-source framework offering, serves as a central integration point and enabler engine in various applications, both in an industrial and in an academic context. Earlier versions of the framework have been intensively used for providing tool support for developing and verifying critical embedded systems in numerous European research projects such as DECOS, MOGENTES, INDEXYS and SecureChange. As a major industrial application of VIATRA, it is utilized as the underlying model querying and transformation engine of the IncQuery Suite. Thus, VIATRA is a key technical component in several industrial collaborations around model-based systems engineering (MBSE), fostering innovative systems engineering practices in domains like aerospace, manufacturing, industrial automation and automotive. Furthermore, via the applications of the IncQuery Suite, VIATRA serves as the foundation for model-based endeavors of ongoing, large-scale European industrial digitalization endeavors, such as the Arrowhead Tools and the Embrace projects. VIATRA is well integrated with Eclipse Modeling tools. However, VIATRA works outside the Eclipse environment as well, as demonstrated by the IncA project using the JetBrains MPS platform. Functionality VIATRA provides the following main services: An incremental query engine together with a graph pattern based language to specify and execute model queries efficiently. An internal DSL over the Xtend language to specify both batch and event-driven, reactive transformations. A model obfuscator to remove sensitive information from a confidential model (e.g., to create bug reports). Origins and history The current VIATRA project is a full rewrite of the previous VIATRA2 framework, coming with full compatibility and support for EMF models. The project features a History wiki page that describes the main differences between the different versions. As for applications of the earlier VIATRA2 framework, it served as the underlying model transformation engine of the DECOS European IP in the field of dependable embedded systems. Furthermore, a traditional application area for VIATRA2 – starting as early as 1998 – was to support the analysis of system models taken from various application areas (safety-critical and/or embedded systems, robust e-business applications, middleware, service-oriented architecture) described using various modeling languages (SysML, UML, BPMN, etc.) during a model-driven systems engineering process. Such a model analysis typically also includes the verification and validation, the testing, the safety and security analysis as well as the early assessment of non-functional characteristics (such as reliability, availability, responsiveness, throughput, etc.) of the system under design. 
These use-cases and application fields still constitute focal areas for VIATRA, mostly addressed via the IncQuery Suite as an interface on the user's end. Approach Since precise model-based systems development is the primary application area of VIATRA, it necessitates that (i) the model transformations are specified in a mathematically precise way, and (ii) these transformations are automated so that the target mathematical models can be derived fully automatically. To achieve this, VIATRA relies upon a mathematically precise rule-based specification formalism, namely, graph transformation (GT). VIATRA aims at invisible formal methods: here, formal details are hidden by automated model transformations projecting system models into various mathematical domains (and, preferably, vice versa). The basic concept in defining model transformations within VIATRA is the (graph) pattern. A pattern is a collection of model elements arranged into a certain structure fulfilling additional constraints (as defined by attribute conditions or other patterns). Patterns can be matched on certain model instances, and upon successful pattern matching, elementary model manipulation is specified by graph transformation rules. Like OCL, graph transformation rules describe pre- and postconditions to the transformations, but graph transformation rules are guaranteed to be executable, which is a main conceptual difference. In particular, as reactive, event-driven transformations are the current focus of VIATRA, VIATRA includes a rule execution engine which monitors changes (interpreted as events) in the model, and fires a rule whenever a change led to the fulfillment of the precondition for that rule (and, potentially, if some further control conditions are also met). See also Model Driven Engineering (MDE) Model Driven Architecture (MDA) Domain Specific Language (DSL) Model Transformation Language (MTL) MOF Queries/Views/Transformations (MOF QVT) ATLAS Transformation Language (ATL) XML transformation language (XML TL) References External links VIATRA Eclipse project page Unified Modeling Language Graph rewriting
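The precondition/action structure of the reactive rules described above can be sketched in a few lines of Python. This is a language-agnostic illustration of the concept only: every class and function name here is invented for the example, and VIATRA's real APIs (Java and Xtend, with patterns written in VQL) differ in detail.

```python
class ReactiveEngine:
    """Toy event-driven rule engine: each rule pairs a precondition
    (a pattern query over the model) with an action (a model
    manipulation); rules fire after every external model change."""
    def __init__(self, model):
        self.model = model
        self.rules = []

    def add_rule(self, precondition, action):
        self.rules.append((precondition, action))

    def apply_change(self, change):
        change(self.model)  # an external event mutates the model
        for precondition, action in self.rules:
            for match in precondition(self.model):  # pattern matching
                action(self.model, match)           # rule execution

# Example: keep a host index consistent whenever an app is added.
model = {"apps": {"a1": None}, "hosts": {"h1": set()}}

def unassigned_apps(m):            # the "graph pattern" (precondition)
    return [a for a, host in m["apps"].items() if host is None]

def assign_to_first_host(m, app):  # the rule's action
    host = next(iter(m["hosts"]))
    m["apps"][app] = host
    m["hosts"][host].add(app)

engine = ReactiveEngine(model)
engine.add_rule(unassigned_apps, assign_to_first_host)
engine.apply_change(lambda m: m["apps"].update({"a2": None}))
print(model)  # the change event triggered assignment of a1 and a2 to h1
```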
VIATRA
Mathematics
1,058
36,443,298
https://en.wikipedia.org/wiki/Clavulina%20ingrata
Clavulina ingrata is a species of coral fungus in the family Clavulinaceae. Found in Malaysia, it was described by E.J.H. Corner in 1950. References External links Fungi described in 1950 Fungi of Asia ingrata Fungus species
Clavulina ingrata
Biology
54
9,632,792
https://en.wikipedia.org/wiki/Pedophile%20Group
The Pedophile Group, Pedophile Group Association or Danish Pedophile Association was a Danish organisation which was disbanded on 21 March 2004. A website is still running, operated by a group of active members of the former association. It was founded in 1985. On 23 July 1996, the group had eighty registered members and participated in an international congress in Denmark. It was also connected with the pedophile advocacy organisation Ipce (formerly the International Pedophile and Child Emancipation). A 2004 newspaper article identified DeFillip as the organization's spokesman. In 2000, a Danish TV documentary team went undercover to investigate the group. Members were shown exchanging child porn and giving advice on how to contact children in internet chatrooms. A man was arrested by police in connection with the investigation. In 2000, the group asked its members to provide misleading information to authorities to help Eric Franklin Rosser evade prosecution. Rosser was a former member of John Mellencamp's band who had been charged with producing and distributing child pornography. He was convicted in 2001, however, and was added to the U.S. Federal Bureau of Investigation's most wanted list. In 2004, the Danish newspaper Dagbladet Information ran a front-page article by the journalist Kristian Ditlev Jensen calling for the organisation's home page to be taken down. Similar criticism of the group came from papers such as Berlingske, Jyllands-Posten and Politiken. In 2004, the Danish Parliament (as elected at the 2001 Danish general election) voted against the dissolution of the association. Notes 1985 establishments in Denmark 2004 disestablishments in Denmark Organizations established in 1985 Organizations disestablished in 2004 Pedophile advocacy Clubs and societies in Denmark
Pedophile Group
Biology
353
75,604,717
https://en.wikipedia.org/wiki/Solar%20coordinate%20systems
In solar observation and imaging, coordinate systems are used to identify and communicate locations on and around the Sun. The Sun is made of plasma, so there are no permanent demarcated points that can be referenced. Background The Sun is a rotating sphere of plasma at the center of the Solar System. It lacks a solid or liquid surface, so the interface separating its interior and its exterior is usually defined as the boundary where plasma becomes opaque to visible light, the photosphere. Since plasma is gaseous in nature, this surface has no permanent demarcated points that can be used for reference. Furthermore, its rate of rotation varies with latitude, rotating faster at the equator than at the poles. Cardinal directions In observations of the solar disk, cardinal directions are typically defined so that the Sun's northern and southern hemispheres point toward Earth's northern and southern celestial poles, respectively, and the Sun's eastern and western hemispheres point toward Earth's eastern and western horizons, respectively. In this scheme, clockwise from north at 90° intervals one encounters west, south, and east, and the direction of solar rotation is from east to west. Heliographic Heliographic coordinate systems are used to identify locations on the Sun's surface. The two most commonly used systems are the Stonyhurst and Carrington systems. They both define latitude as the angular distance from the solar equator, but differ in how they define longitude. In Stonyhurst coordinates, the longitude is fixed for an observer on Earth, and, in Carrington coordinates, the longitude is fixed for the Sun's rotation. Stonyhurst system The Stonyhurst heliographic coordinate system, developed at Stonyhurst College in the 1800s, has its origin (where longitude and latitude are both 0°) at the point where the solar equator intersects the central solar meridian as seen from Earth. Longitude in this system is therefore fixed for observers on Earth. Carrington system The Carrington heliographic coordinate system, established by Richard C. Carrington in 1863, rotates with the Sun at a fixed rate based on the observed rotation of low-latitude sunspots. It rotates with a sidereal period of exactly 25.38 days, which corresponds to a mean synodic period of 27.2753 days. Whenever the Carrington prime meridian (the line of 0° Carrington longitude) passes the Sun's central meridian as seen from Earth, a new Carrington rotation begins. These rotations are numbered sequentially, with Carrington rotation number 1 starting on 9 November 1853. Heliocentric Heliocentric coordinate systems measure spatial positions relative to an origin at the Sun's center. There are four systems in use: the heliocentric inertial (HCI) system, the heliocentric Aries ecliptic (HAE) system, the heliocentric Earth ecliptic (HEE) system, and the heliocentric Earth equatorial (HEEQ) system. They can be summarized by their principal axes, with the remaining axis in each system completing a right-handed Cartesian triad: in HCI, the Z-axis is the Sun's rotation axis and the X-axis is the solar ascending node on the ecliptic; in HAE, the Z-axis is the north ecliptic pole and the X-axis points toward the vernal equinox (the first point of Aries); in HEE, the Z-axis is the north ecliptic pole and the X-axis points from the Sun toward the Earth; and in HEEQ, the Z-axis is the Sun's rotation axis and the X-axis points toward the intersection of the solar equator and the solar central meridian as seen from Earth. See also Astronomical coordinate system Ecliptic coordinate system References External links sunpy.coordinates, a sub-package of SunPy used to handle solar coordinates MTOF/PM Data by Carrington Rotation, a table of Carrington rotation start and stop times Sun Astronomical coordinate systems
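As a worked example of the Carrington numbering described above, the sketch below estimates a rotation number from a calendar date using only the two facts given in the text: rotation 1 beginning on 9 November 1853, and a mean synodic period of 27.2753 days. Exact rotation boundaries depend on the Earth-Sun geometry and require an ephemeris, so this is an approximation that can be off by a rotation near boundaries.

```python
from datetime import datetime

SYNODIC_PERIOD_DAYS = 27.2753             # mean synodic Carrington period
ROTATION_1_START = datetime(1853, 11, 9)  # start of Carrington rotation 1

def approx_carrington_rotation(date):
    """Approximate Carrington rotation number for a given datetime,
    assuming a constant mean synodic period."""
    elapsed_days = (date - ROTATION_1_START).total_seconds() / 86400.0
    return 1 + int(elapsed_days // SYNODIC_PERIOD_DAYS)

print(approx_carrington_rotation(datetime(2008, 1, 1)))  # about 2065
```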
Solar coordinate systems
Astronomy,Mathematics
672
52,190,906
https://en.wikipedia.org/wiki/Eukaryogenesis
Eukaryogenesis, the process which created the eukaryotic cell and lineage, is a milestone in the evolution of life, since eukaryotes include all complex cells and almost all multicellular organisms. The process is widely agreed to have involved symbiogenesis, in which an archaeon and a bacterium came together to create the first eukaryotic common ancestor (FECA). This cell had a new level of complexity and capability, with a nucleus, at least one centriole and cilium, facultatively aerobic mitochondria, sex (meiosis and syngamy), a dormant cyst with a cell wall of chitin and/or cellulose, and peroxisomes. It evolved into a population of single-celled organisms that included the last eukaryotic common ancestor (LECA), gaining capabilities along the way, though the sequence of the steps involved has been disputed, and may not have started with symbiogenesis. In turn, the LECA gave rise to the eukaryotes' crown group, containing the ancestors of animals, fungi, plants, and a diverse range of single-celled organisms. Context Life arose on Earth once it had cooled enough for oceans to form. The last universal common ancestor (LUCA) was an organism which had ribosomes and the genetic code; it lived some 4 billion years ago. It gave rise to two main branches of prokaryotic life, the bacteria and the archaea. From among these small-celled, rapidly-dividing ancestors arose the eukaryotes, with much larger cells, nuclei, and distinctive biochemistry. The eukaryotes form a domain that contains all complex cells and most types of multicellular organism, including the animals, plants, and fungi. Symbiogenesis According to the theory of symbiogenesis (also known as the endosymbiotic theory) championed by Lynn Margulis, a member of the archaea gained a bacterial cell as a component. The archaeal cell was a member of the Asgard group. The bacterium was one of the Alphaproteobacteria, which had the ability to use oxygen in its respiration. This enabled it – and the archaeal cells that included it – to survive in the presence of oxygen, which was poisonous to other organisms adapted to reducing conditions. The endosymbiotic bacteria became the eukaryotic cell's mitochondria, providing most of the energy of the cell. Lynn Margulis and colleagues have suggested that the cell also acquired a Spirochaete bacterium as a symbiont, providing the cell skeleton of microtubules and the ability to move, including the ability to pull chromosomes into two sets during mitosis (cell division). More recently, the Asgard archaean has been identified as belonging to the Heimdallarchaeota. Last eukaryotic common ancestor (LECA) The last eukaryotic common ancestor (LECA) is the hypothetical last common ancestor of all living eukaryotes, around 2 billion years ago, and was most likely a biological population. It is believed to have been a protist with a nucleus, at least one centriole and cilium, facultatively aerobic mitochondria, sex (meiosis and syngamy), a dormant cyst with a cell wall of chitin and/or cellulose, and peroxisomes. It had been proposed that the LECA fed by phagocytosis, engulfing other organisms. However, in 2022, Nico Bremer and colleagues confirmed that the LECA had mitochondria, and stated that it had multiple nuclei, but disputed that it was phagotrophic. This would mean that the ability found in many eukaryotes to engulf materials developed later, rather than being acquired first and then used to engulf the alphaproteobacteria that became mitochondria. 
The LECA has been described as having "spectacular cellular complexity". Its cell was divided into compartments. It appears to have inherited a set of endosomal sorting complex proteins that enable membranes to be remodelled, including pinching off vesicles to form endosomes. Its apparatuses for transcribing DNA into RNA, and then for translating the RNA into proteins, were separated, permitting extensive RNA processing and allowing the expression of genes to become more complex. It had mechanisms for reshuffling its genetic material, and possibly for manipulating its own evolvability. All of these gave the LECA "a compelling cohort of selective advantages". Eukaryotic sex Sex in eukaryotes is a composite process, consisting of meiosis and fertilisation, which can be coupled to reproduction. Dacks and Roger proposed on the basis of a phylogenetic analysis that facultative sex was likely present in the common ancestor of all eukaryotes. Early in eukaryotic evolution, about 2 billion years ago, organisms needed a solution to the major problem that oxidative metabolism releases reactive oxygen species that damage the genetic material, DNA. Eukaryotic sex provides a process, homologous recombination during meiosis, for using informational redundancy to repair such DNA damage. Scenarios Biologists have proposed multiple scenarios for the creation of the eukaryotes. While there is broad agreement that the LECA must have had a nucleus, mitochondria, and internal membranes, the order in which these were acquired has been disputed. In the syntrophic model, the first eukaryotic common ancestor (FECA, around 2.2 gya) gained mitochondria, then membranes, then a nucleus. In the phagotrophic model, it gained a nucleus, then membranes, then mitochondria. In a more complex process, it gained all three in short order, then other capabilities. Other models have been proposed. Whatever happened, many lineages must have been created, but the LECA either out-competed or came together with the other lineages to form a single point of origin for the eukaryotes. Nick Lane and William Martin have argued that mitochondria came first, on the grounds that energy had been the limiting factor on the size of the prokaryotic cell. The phagotrophic model presupposes the ability to engulf food, enabling the cell to engulf the aerobic bacterium that became the mitochondrion. Eugene Koonin and others, noting that the archaea share many features with eukaryotes, argue that rudimentary eukaryotic traits such as membrane-lined compartments were acquired before endosymbiosis added mitochondria to the early eukaryotic cell, while the cell wall was lost. Similarly, mitochondrial acquisition should not be regarded as the end of the process, since new complex gene families still had to develop during or after the endosymbiotic exchange. The organisms along the lineage from FECA to LECA can thus be thought of as protoeukaryotes. By the end of the process, the LECA was already a complex organism, possessing protein families involved in cellular compartmentalization. Diversification: crown eukaryotes In turn, the LECA gave rise to the eukaryotes' crown group, containing the ancestors of animals, fungi, plants, and a diverse range of single-celled organisms with the new capabilities and complexity of the eukaryotic cell. Single cells without cell walls are fragile and have a low probability of being fossilised. 
If fossilised, they have few features to distinguish them clearly from prokaryotes: size, morphological complexity, and (eventually) multicellularity. Early eukaryote fossils, from the late Paleoproterozoic, include acritarch microfossils with relatively robust ornate carbonaceous vesicles of Tappania from 1.63 gya and Shuiyousphaeridium from 1.8 gya. References External links Attraction and sex among our microbial Last Eukaryotic Common Ancestors, The Atlantic, November 11, 2020 Eukaryote biology Evolution by taxon Most recent common ancestors
Eukaryogenesis
Biology
1,706
19,865,904
https://en.wikipedia.org/wiki/MullenLowe%20Profero
MullenLowe Profero is a digital marketing agency operating across twelve offices, with over 600 employees globally. Its services typically include customer experience, digital marketing, creative, media, technology, user experience (UX) and strategy. It is part of the MullenLowe Group global network, which is owned by global agency network Interpublic Group (IPG). History Profero was launched on 25 March 1998 by brothers Daryl and Wayne Arnold. During the same year Profero acquired Sky (UK and Ireland) as an early client; by October 1998 it had delivered a fully integrated campaign and was reported to be the first and only organisation to "own" the internet in the UK for an entire day. Since 1998 the Profero Group has produced over 4,000 campaigns for clients including M&S, COI and Western Union, and the company has expanded into Europe, Asia and Australia. In 2000 Profero became the first digital agency to join the Institute of Practitioners in Advertising, a move that led to further development of the company and the opening of additional regional offices. From 2000 to 2010 Profero opened offices across the world to develop its global offering (London, Sydney, Shanghai, Hong Kong, Singapore, Tokyo, Beijing, New York City, and Seoul). Profero was responsible for the concept behind Factory Shanghai in Shanghai, China. In July 2012, Profero acquired New York City-based Hispanic agency Vox Collective, its second expansion in America, seen as a move to increase the Profero Group's influence across Latin American markets. On 21 January 2014, Profero was acquired by Lowe & Partners, and began operating globally as MullenLowe Profero. On 7 March 2017, MullenLowe appointed Razorfish's Vincent Digonnet as APAC CEO. In December, Profero co-founder and group head Wayne Arnold departed the company. With the announcement, Aaron Reitkopf was appointed Global Chairman. In April 2018, MullenLowe Group UK CEO Dale Gall left the company, and was replaced by Jeremy Hine. In April 2020, MullenLowe Profero parent MullenLowe Group merged its MullenLowe Open agency into MullenLowe Profero. In June, the company announced that APAC CEO Digonnet was retiring, and that the division would be jointly led by previous MullenLowe Group Southeast Asia CEO Paul Soon and the CEO of MullenLowe Profero in Tokyo, James Hollow. References External links Alchemy Leads LLC Information technology management Advertising agencies of the United Kingdom Marketing companies established in 1998 1998 establishments in the United Kingdom
MullenLowe Profero
Technology
518
49,557,682
https://en.wikipedia.org/wiki/Fire%20rock
Fire rock is manufactured lava rock that is sold in various shapes and sizes, and is used as a medium for retaining direct heat. Fire rocks are used in natural gas fireplaces or in natural gas or propane burning fire pits. It may be used as the main fuel distributor or as padding for fire glass to go on top. Fire rocks help increase combustion efficiency and maintain a desirable appearance. They also disperse the flame of the fireplace or fire pit well, since the gaps between them act as channels through which air and gas feed the flame. Fire rocks come in many different colors and range in size from a half-inch to an inch. Use Fire rock is noted for its ability to withstand high temperatures of direct heat for prolonged periods of time. Unlike river rock, which is non-porous and can burst explosively when heated, fire rock is a porous substance that is capable of releasing heat through tiny holes in its outer layer. Damp fire rocks have been known to "pop" and throw pea-sized pieces of rock as they heat. The moisture that is caught in the fire rock creates air pockets that can easily split the rock when heated. Fire rocks must be stored away from high moisture areas and kept dry to keep this problem from occurring. Eco Friendly Because fire rock does not emit any carbon dioxide when heated, it is considered an eco-friendly burning solution. Fire rock can be used multiple times and when cared for can be burned for years on end. It produces no ash, which cuts back on the number of toxins in the home. References Volcanic rocks Refractory materials
Fire rock
Physics
330
309,392
https://en.wikipedia.org/wiki/Wigner%27s%20classification
In mathematics and theoretical physics, Wigner's classification is a classification of the nonnegative energy irreducible unitary representations of the Poincaré group which have either finite or zero mass eigenvalues. (These unitary representations are infinite-dimensional; the group is not semisimple and it does not satisfy Weyl's theorem on complete reducibility.) It was introduced by Eugene Wigner, to classify particles and fields in physics—see the article particle physics and representation theory. It relies on the stabilizer subgroups of that group, dubbed the Wigner little groups of various mass states. The Casimir invariants of the Poincaré group are (Einstein notation) C₁ = P^μP_μ, where P is the 4-momentum operator, and C₂ = W^μW_μ, where W is the Pauli–Lubanski pseudovector. The eigenvalues of these operators serve to label the representations. The first is associated with mass-squared and the second with helicity or spin. The physically relevant representations may thus be classified according to whether m² > 0; whether m² = 0 but the 4-momentum P is nonzero; or whether P = 0. Wigner found that massless particles are fundamentally different from massive particles. For the first case (m² > 0): Note that the eigenspace (see generalized eigenspaces of unbounded operators) associated with the rest-frame momentum eigenvalue P = (m, 0, 0, 0) is a representation of SO(3). In the ray interpretation, one can go over to Spin(3) instead. So, massive states are classified by an irreducible Spin(3) unitary representation that characterizes their spin, and a positive mass, m. For the second case (m² = 0, P ≠ 0): Look at the stabilizer of a null momentum such as P = (k, 0, 0, k). This is the double cover of SE(2) (see projective representation). We have two cases, one where irreps are described by an integral multiple of one-half, called the helicity, and the other called the "continuous spin" representation. For the third case (P = 0): The only finite-dimensional unitary solution is the trivial representation called the vacuum. Massive scalar fields As an example, let us visualize the irreducible unitary representation with m² > 0 and spin zero. It corresponds to the space of massive scalar fields. Let H_m be the hyperboloid sheet defined by P·P = m², P₀ > 0. The Minkowski metric restricts to a Riemannian metric on H_m, giving it the metric structure of a hyperbolic space, in particular it is the hyperboloid model of hyperbolic space, see geometry of Minkowski space for proof. The Poincare group acts on H_m because (forgetting the action of the translation subgroup with addition inside R⁴) it preserves the Minkowski inner product, and an element x of the translation subgroup of the Poincare group acts on L²(H_m) by multiplication by suitable phase multipliers exp(−i p·x), where p ∈ H_m. These two actions can be combined in a clever way using induced representations to obtain an action of the Poincare group acting on L²(H_m) that combines motions of H_m and phase multiplication. This yields an action of the Poincare group on the space of square-integrable functions defined on the hypersurface H_m in Minkowski space. These may be viewed as measures defined on Minkowski space that are concentrated on the set H_m defined by E² − |p|² = m², E > 0. The Fourier transform (in all four variables) of such measures yields positive-energy, finite-energy solutions of the Klein–Gordon equation defined on Minkowski space, namely ∂²ψ/∂t² − ∇²ψ + m²ψ = 0 without physical units. In this way, the irreducible representation of the Poincare group is realized by its action on a suitable space of solutions of a linear wave equation. Theory of projective representations Physically, one is interested in irreducible projective unitary representations of the Poincaré group. After all, two vectors in the quantum Hilbert space that differ by multiplication by a constant represent the same physical state. 
Thus, two unitary operators that differ by a multiple of the identity have the same action on physical states. Therefore, the unitary operators that represent Poincaré symmetry are only defined up to a constant—and therefore the group composition law need only hold up to a constant. According to Bargmann's theorem, every projective unitary representation of the Poincaré group comes from an ordinary unitary representation of its universal cover, which is a double cover. (Bargmann's theorem applies because the double cover of the Poincaré group admits no non-trivial one-dimensional central extensions.) Passing to the double cover is important because it allows for half-odd-integer spin cases. In the positive mass case, for example, the little group is SU(2) rather than SO(3); the representations of SU(2) then include both integer and half-odd-integer spin cases. Since the general criterion in Bargmann's theorem was not known when Wigner did his classification, he needed to show by hand (§5 of the paper) that the phases can be chosen in the operators to reflect the composition law in the group, up to a sign, which is then accounted for by passing to the double cover of the Poincaré group. Further classification Left out from this classification are tachyonic solutions, solutions with no fixed mass, infraparticles with no fixed mass, etc. Such solutions are of physical importance, when considering virtual states. A celebrated example is the case of deep inelastic scattering, in which a virtual space-like photon is exchanged between the incoming lepton and the incoming hadron. This justifies the introduction of transversely and longitudinally-polarized photons, and of the related concept of transverse and longitudinal structure functions, when considering these virtual states as effective probes of the internal quark and gluon contents of the hadrons. From a mathematical point of view, one considers the SO(2,1) group instead of the usual SO(3) group encountered in the usual massive case discussed above. This explains the occurrence of two transverse polarization vectors ε₁ and ε₂, which satisfy ε·k = 0 and ε² = −1, to be compared with the usual case of a free massive boson, which has three polarization vectors ε_λ, each of them satisfying ε_λ·k = 0 and ε_λ² = −1. See also Induced representation Representation theory of the diffeomorphism group Representation theory of the Galilean group Representation theory of the Poincaré group System of imprimitivity Pauli–Lubanski pseudovector References Representation theory of Lie groups Quantum field theory Mathematical physics
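For reference, the formulas that the plain-text extraction dropped above can be collected in standard notation (metric signature +−−−). This is a reconstructed summary in textbook conventions, not the article's original markup:

```latex
% Casimir invariants of the Poincaré group and the three Wigner classes.
\[
  C_1 = P^\mu P_\mu = m^2, \qquad
  C_2 = W^\mu W_\mu, \qquad
  W_\mu = \tfrac{1}{2}\,\varepsilon_{\mu\nu\rho\sigma}\, M^{\nu\rho} P^\sigma .
\]
\[
  \begin{array}{lll}
    \text{massive:}  & m^2 > 0, & \text{little group } \mathrm{SU}(2),\;
                        C_2 = -m^2\, s(s+1); \\[2pt]
    \text{massless:} & m^2 = 0,\; P^\mu \neq 0, & \text{little group }
                        \widetilde{\mathrm{SE}}(2),\;
                        \text{helicity } h \in \tfrac{1}{2}\mathbb{Z}; \\[2pt]
    \text{vacuum:}   & P^\mu = 0, & \text{trivial representation.}
  \end{array}
\]
```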
Wigner's classification
Physics,Mathematics
1,230
33,639,328
https://en.wikipedia.org/wiki/Comparison%20of%20orbital%20launcher%20families
This article compares different orbital launcher families (launchers which are significantly different from other members of the same 'family' have separate entries). The article is organized into two tables: the first contains a list of currently active and under-development launcher families, while the second contains a list of retired launcher families. The related article "Comparison of orbital launch systems" lists each individual launcher system within any given launcher family, categorized by its current operational status. This article does not include suborbital launches (i.e. flights which were not intended to reach LEO or VLEO). Description Family: Name of the family/model of launcher Country: Origin country of launcher Manufac.: Main manufacturer Payload: Maximum mass of payload, for three reference orbits LEO, low Earth orbit GTO, geostationary transfer orbit TLI, trans-Lunar injection Cost: Current price of a launch, in millions of US$ Launches reaching... Total: Flights which lift-off, or where the vehicle is destroyed during the terminal count. Note: only includes orbital launches (flights launched with the intention of reaching orbit). Suborbital test launches are not included in this listing. Space (regardless of outcome): Flights which reach approximately 100 km or more above Earth's surface. Any orbit (regardless of outcome): Flights which achieve at least one complete orbit even if the orbit differs from the targeted orbit. Target orbit (without damage to the payload) Status: Current status of the launcher (retired, development, active) Date of flight First: Year of first flight of first family member Last (if applicable): Year of last flight (for vehicles retired from service) Refs: citations Same cores are grouped together (like Ariane 1, 2 & 3, but not V). List of active and under-development launcher families List of retired launcher families See also Comparison of orbital launch systems Comparison of orbital rocket engines Comparison of space station cargo vehicles List of orbital launch systems Notes References Technological comparisons
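The column scheme described above translates naturally into a small record type. The following Python sketch is purely illustrative — the field names and the derived cost-per-kilogram metric are assumptions for demonstration, not part of any published dataset:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LauncherFamily:
    """One row of the comparison table, with hypothetical field names."""
    family: str
    country: str
    manufacturer: str
    payload_leo_kg: Optional[float]    # max payload to low Earth orbit
    payload_gto_kg: Optional[float]    # max payload to geostationary transfer orbit
    payload_tli_kg: Optional[float]    # max payload to trans-lunar injection
    cost_musd: Optional[float]         # launch price, millions of US$
    launches_total: int                # lift-offs, incl. terminal-count losses
    launches_orbit: int                # flights achieving at least one orbit
    status: str                        # "retired", "development", or "active"
    first_flight: int
    last_flight: Optional[int] = None  # None while the family is still flying

    def cost_per_kg_leo(self) -> Optional[float]:
        """Derived metric: US$ per kg to LEO, when both figures are known."""
        if self.cost_musd is None or not self.payload_leo_kg:
            return None
        return self.cost_musd * 1e6 / self.payload_leo_kg
```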
Comparison of orbital launcher families
Technology
407
60,073,107
https://en.wikipedia.org/wiki/LOC101928193
LOC101928193 is a protein which in humans is encoded by the LOC101928193 gene. There are no known aliases for this gene or protein. Similar copies of this gene, called orthologs, are known to exist in several different species across mammals, amphibians, fish, mollusks, cnidarians, fungi, and bacteria. The human LOC101928193 gene is located on the long (q) arm of chromosome 9 with a cytogenetic location at 9q34.2. The molecular location of the gene is from base pair 133,189,767 to base pair 133,192,979 on chromosome 9 for an mRNA length of 3213 nucleotides. The gene and protein are not yet well understood by the scientific community, but there is data on its genetic makeup and expression. The LOC101928193 protein is targeted for the cytoplasm and has the highest level of expression in the thyroid, ovary, skin, and testes in humans. Gene Locus The cytogenetic location of LOC101928193 in humans is located on the positive strand at 9q34.2. The molecular location of the protein-encoding region of LOC101928193 is from base pairs 133,189,767 to 133,192,979. Within this region, there is 1 intron and 2 exons. Gene neighborhood LOC101928193 is flanked by GBGT1 and OBP2B on chromosome 9. GBGT1 encodes a member of the ABO gene family and also plays a role in synthesizing glycolipids that are involved in tropism and binding pathogens. OBP2B is a gene that associates with E-selectin levels in the ABO gene region. mRNA In humans, the LOC101928193 gene produces 3 transcript variants, which produce 3 isoforms of the protein. The LOC101928193 isoform X1 is the longest one at 406 amino acids in length. LOC101928193 isoform X2 is 388 amino acids long and LOC101928193 isoform X3 is 399 amino acids long. All isoforms have 2 exons and their coding mRNA is 3213 nucleotides long. Protein The molecular weight of LOC101928193 is 43.5 kilodaltons. The isoelectric point is approximately 9. Composition Compared to most human proteins, there are more valine, glycine, serine, histidine, and phenylalanine residues in LOC101928193. LOC101928193 is an alanine, methionine, asparagine, aspartic acid, glutamic acid, and lysine poor protein. The enrichment of all other amino acids is normal compared to other human proteins. LOC101928193 composition is highly conserved between mammals. LOC101928193 has an amino acid charge distribution of 0.7% negative, 4.9% positive, and 94.4% neutral. There are no charge runs, hydrophobic segments, or transmembrane domains. Domains and motifs There are two different motifs present in LOC101928193. Myristoylation sites are found in the protein sequence 17 times, and a zinc finger domain motif occurs once. The presence of myristoylation sites indicates that LOC101928193 may function in membrane targeting, protein-protein interactions, and signal transduction pathways. Zinc finger domain motifs aid in gene transcription, cell adhesion, protein folding, and chromatin remodeling. Primary sequence The LOC101928193 primary coding sequence mRNA is 3213 nucleotides long. There are no upstream open-reading frames, Kozak consensus sequences, or transmembrane regions. Secondary structure LOC101928193 has a predicted secondary structure of 56.40% random coils and 43.60% beta sheets. No alpha helices are predicted to occur. Due to the lack of alpha helices in the protein, no coiled coils are predicted to occur in the LOC101928193 secondary structure. Tertiary structure The predicted tertiary structure of LOC101928193 is that of an all-beta-sheet protein. 
Both the N-terminus and the C-terminus lack beta-sheets. Post-translational modifications O-GlcNAc There are 13 predicted O-GlcNAc sites within the LOC101928193 protein. O-GlcNAc is a unique form of protein glycosylation that occurs exclusively in the nuclear and cytoplasmic compartments of the cell. O-GlcNAcylated residues are abundant on proteins involved in signaling pathways, stress responses, cytoskeletal assembly, and energy metabolism. N-linked glycosylation There are no N-linked glycosylation sites due to the absence of asparagine residues. Phosphorylation LOC101928193 has many sites of phosphorylation at several serines, threonines, and tyrosines throughout its structure, which result in conformational changes and aid in signaling pathways and regulation. There are 33 predicted phosphorylation sites. The relative number of phosphorylation sites is highly conserved throughout orthologs of LOC101928193. Subcellular localization LOC101928193 is targeted to the cytoplasm for Homo sapiens, rodents, amphibians, fish, and mollusks. It is predicted to localize in the nucleus for cnidarians, fungi, and bacteria. Expression LOC101928193 is not expressed ubiquitously, but is instead tissue specific, with low levels of mRNA abundance compared to other human genes. LOC101928193 has the highest level of expression in the thyroid and has high levels of expression in the ovaries, skin, and testes. Additionally, the gene is expressed in 23 other tissues at levels lower than 0.1 RPKM (Reads Per Kilobase of transcript per Million mapped reads) in humans. Other studies have also found that tissue-specific circular RNA induction of LOC101928193 during human fetal development has the highest levels in the heart, kidney, and stomach at 10 weeks of gestation. Regulation of Expression Epigenetic Epigenetic processes such as DNA methylation and histone modification that control expression have not been characterized for LOC101928193. Transcriptional Promoter There is one promoter for the LOC101928193 gene (GXP_6058323), and it is 1101 nucleotides long on the positive strand from base pairs 133,188,767 to 133,189,867 on chromosome 9. The transcription start site can be found at the 1001 base pair position. Transcription factor binding sites Several transcription factors are predicted to bind to the promoter sequence. Some examples include: X-box binding factors Nuclear factor kappa B/c-rel MAF and AP1 related factors Interferon regulatory factors RBPJ-kappa MEF3 binding sites Cart-1 homeoprotein Fork head domain factors Based on the functions of these transcription factors, LOC101928193 may be involved in gene repression, hematopoiesis regulation, fetal development, inhibition, DNA-binding, or limb development. Translational and mRNA stability Under conditions consistent with the temperature in the human body, multiple stem loops are predicted to occur in the 5' UTR, the coding region of the protein, and in the 3' UTR. The stem loops direct RNA folding, protect the structural stability of the mRNA, provide recognition sites for RNA binding proteins, and serve as a substrate for enzymatic reactions. There is an interior loop and a stem loop in the mRNA near the AUG start codon at the end of the 5' UTR. These structures are often bound by proteins or cause the attenuation of a transcript in order to regulate translation. Furthermore, these stem-loops aid in mRNA stability, and the predicted 5' UTR conformation has a free energy of -124.30 kcal/mol. 
In the 3' UTR, 6 stem loops are predicted to occur, with a combined free energy of -310.70 kcal/mol, indicating that the structure forms spontaneously. There are no known microRNA targets in the 3' UTR. Homology and Evolution Paralogs There are no known paralogs of LOC101928193. Orthologs LOC101928193 has over 20 orthologs that are present in mammals, amphibians, fish, mollusks, cnidarians, fungi, and bacteria. The most distant orthologs are found in bacteria that diverged from humans more than 4.29 billion years ago. No orthologs for LOC101928193 have been discovered in close mammalian relatives of humans, including primates. Below is a table of a range of organisms with orthologs related to the human LOC101928193 protein. Distant homologs The most distant detectable homologs are in several viral and bacterial species that diverged from humans over 4.29 billion years ago. Homologous domains There is a conserved coding region of 28 amino acids that is repeated six times in the protein-encoding region within LOC101928193 and across its orthologs. This domain begins with a glycine at amino acid positions 194, 222, 250, 278, 306, and 334 within LOC101928193. The domain is conserved across mammals, cnidarians, fish, bacteria, and amphibians, and even in some species within these taxonomic groups that are not orthologs but share the same domain. The sequence always begins with a polar glycine and a hydrophobic valine. There is also a conserved basic arginine within the middle of the sequence. Phylogeny No other species has LOC101928193 in the same form as in humans. Several species within mammals, amphibians, fish, mollusks, cnidarians, fungi, and bacteria have LOC101928193 in a slightly different form, with a similarity usually between 30 and 50%. Several taxonomic groups do not express any proteins or genes similar to LOC101928193, including archaeans, plants, and several animal species. Inheritance LOC101928193 may not follow a normal inheritance pattern or occur regularly in the genome, as it has a scattered occurrence throughout evolutionarily related species. Furthermore, the similarity between orthologs of LOC101928193 is constant over time and is not higher in closely related taxonomic groups or lower in distantly related taxonomic groups. It is possible that LOC101928193 incorporates into the genome of different species through viral pathways, as LOC101928193 has been found to have ligand binding sites for cyanobacterial molecules such as chlorophyll a. Orthologs of LOC101928193 have been found to contain UL36, which is a large tegument protein that functions in the viral cycle and is commonly found in human herpes simplex virus 1. References Suggested Reading Genes on human chromosome 9 Proteins
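Figures like the molecular weight, isoelectric point, and residue composition quoted above are routinely computed from the primary sequence. The following sketch uses Biopython's ProtParam module on a placeholder sequence; the placeholder is purely hypothetical, and the real LOC101928193 sequence would need to be fetched from NCBI:

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# NOT the real LOC101928193 sequence -- a hypothetical stand-in for illustration.
sequence = "MGVSHGFFVGSHHGVSFVGSHH"

analysis = ProteinAnalysis(sequence)

# Molecular weight in kilodaltons and isoelectric point, as in the Protein section.
print(f"molecular weight: {analysis.molecular_weight() / 1000:.1f} kDa")
print(f"isoelectric point: {analysis.isoelectric_point():.2f}")

# Fraction of each residue, comparable to the enrichment/depletion statements
# in the Composition section.
for residue, fraction in sorted(analysis.get_amino_acids_percent().items()):
    print(f"{residue}: {fraction:.1%}")
```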
LOC101928193
Chemistry
2,400
58,350,677
https://en.wikipedia.org/wiki/NGC%206056
NGC 6056 is a barred lenticular galaxy located about 525 million light-years away in the constellation Hercules. It was discovered by astronomer Lewis Swift on June 8, 1886. It was then rediscovered by Swift on June 8, 1888 and was later listed as IC 1176. NGC 6056 is a member of the Hercules Cluster. See also List of NGC objects (6001–7000) References External links 6056 57075 Hercules (constellation) Hercules Cluster Astronomical objects discovered in 1886 Barred lenticular galaxies IC objects
NGC 6056
Astronomy
105
39,682,014
https://en.wikipedia.org/wiki/Islamabad%E2%80%93New%20Delhi%20hotline
The Islamabad–New Delhi hotline is a system that allows direct communication between the leaders of India and Pakistan. The hotline, according to media sources, was established in 1971, shortly after the end of the 1971 war. The hotline linked the Prime Minister's Office in Islamabad, via the Directorate-General of Military Operations (DGMO), to the Secretariat Building in New Delhi. The hotline has seldom been used by the military leadership of India and Pakistan, even at times of escalating tension. It is also called the Hotline Linkage. Modelled on the Moscow–Washington hotline, it serves both a technological and a strategic purpose in maintaining a direct link between the two countries. The Islamabad–Delhi hotline is a secure communication link over which procedural communications are exchanged in several formats. History According to Indian media sources, the hotline was established by the governments of India and Pakistan shortly after the end of the 1971 war. The foreign ministries of India and Pakistan signed a mutual agreement for the implementation of the hotline. The hotline was modelled directly on the Moscow–Washington hotline, which was established in 1963. The hotline became operational in the 1970s after both countries' foreign ministries transmitted the first messages. The first usage of the hotline was in 1991, between the militaries of India and Pakistan, to work on confidence-building measures. The second usage of the hotline was in 1997, when both countries informed each other about trade issues. In 1998, when both countries had publicly conducted nuclear tests (Pokhran-II, Chagai-I & Chagai-II), the hotline was extensively used between the leaders of both countries. Since 2005, the hotline has been used by each country to inform the other of nuclear missile tests in the region. Other hotlines There are other hotlines for issues involving terrorism (established in 2011) and cyber warfare, and for recording communications on the prevention of nuclear risk. The nuclear hotline was set up on 20 June 2004; it was initiated by President Pervez Musharraf with the assistance of United States military officers acting as advisors during his regime. See also Moscow–Washington hotline Beijing–Washington hotline Seoul–Pyongyang hotline References Communication circuits India–Pakistan relations 1971 in politics History of the foreign relations of Pakistan History of Islamabad History of Delhi (1947–present) Military communications of India Hotlines between countries
Islamabad–New Delhi hotline
Engineering
496
41,734,688
https://en.wikipedia.org/wiki/Malware%20analysis
Malware analysis is the study or process of determining the functionality, origin and potential impact of a given malware sample such as a virus, worm, trojan horse, rootkit, or backdoor. Malware or malicious software is any computer software intended to harm the host operating system or to steal sensitive data from users, organizations or companies. Malware may include software that gathers user information without permission. Use cases There are three typical use cases that drive the need for malware analysis: Computer security incident management: If an organization discovers or suspects that some malware may have gotten into its systems, a response team may wish to perform malware analysis on any potential samples that are discovered during the investigation process to determine if they are malware and, if so, what impact that malware might have on the systems within the target organizations' environment. Malware research: Academic or industry malware researchers may perform malware analysis simply to understand how malware behaves and the latest techniques used in its construction. Indicator of compromise extraction: Vendors of software products and solutions may perform bulk malware analysis in order to determine potential new indicators of compromise; this information may then feed the security product or solution to help organizations better defend themselves against attack by malware. Types The method by which malware analysis is performed typically falls under one of two types: Static malware analysis: Static or Code Analysis is usually performed by dissecting the different resources of the binary file without executing it and studying each component. The binary file can also be disassembled (or reverse engineered) using a disassembler such as IDA or Ghidra. The machine code can sometimes be translated into assembly code which can be read and understood by humans: the malware analyst can then read the assembly, correlate it with the specific functions and actions inside the program, and build a clearer picture of what the program does and how it was originally designed. Viewing the assembly allows the malware analyst/reverse engineer to get a better understanding of what is supposed to happen versus what is really happening and start to map out hidden actions or unintended functionality. Some modern malware is authored using evasive techniques to defeat this type of analysis, for example by embedding syntactic code errors that will confuse disassemblers but that will still function during actual execution. Dynamic malware analysis: Dynamic or Behavioral analysis is performed by observing the behavior of the malware while it is actually running on a host system. This form of analysis is often performed in a sandbox environment to prevent the malware from actually infecting production systems; many such sandboxes are virtual systems that can easily be rolled back to a clean state after the analysis is complete. The malware may also be debugged while running using a debugger such as GDB or WinDbg to watch the behavior and effects on the host system of the malware step by step while its instructions are being processed. Modern malware can exhibit a wide variety of evasive techniques designed to defeat dynamic analysis including testing for virtual environments or active debuggers, delaying execution of malicious payloads, or requiring some form of interactive user input. 
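As a minimal illustration of the static side of this workflow, the sketch below hashes a suspect file (so the digests can be compared against threat-intelligence databases) and extracts printable strings, two of the first steps an analyst typically takes before any disassembly. It is a toy triage script under those assumptions, not a substitute for tools like IDA or Ghidra, and samples should only ever be handled in an isolated environment:

```python
import hashlib
import re
import sys

def file_hashes(path: str) -> dict:
    """Return MD5/SHA-1/SHA-256 digests of a file for indicator lookups."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            for digest in digests.values():
                digest.update(chunk)
    return {name: digest.hexdigest() for name, digest in digests.items()}

def printable_strings(path: str, min_len: int = 6):
    """Yield ASCII strings of at least min_len characters, like strings(1)."""
    with open(path, "rb") as handle:
        data = handle.read()
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

if __name__ == "__main__":
    sample = sys.argv[1]
    print(file_hashes(sample))
    for s in printable_strings(sample):
        if "http" in s.lower():  # crude filter for network indicators
            print(s)
```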
Stages Examining malicious software involves several stages, including, but not limited to, the following: Manual Code Reversing Interactive Behavior Analysis Static Properties Analysis Fully-Automated Analysis References Analysis Computer forensics
Malware analysis
Technology,Engineering
691
459,471
https://en.wikipedia.org/wiki/Breathing%20gas
A breathing gas is a mixture of gaseous chemical elements and compounds used for respiration. Air is the most common and only natural breathing gas, but other mixtures of gases, or pure oxygen, are also used in breathing equipment and enclosed habitats. Oxygen is the essential component for any breathing gas. Breathing gases for hyperbaric use have been developed to improve on the performance of ordinary air by reducing the risk of decompression sickness, reducing the duration of decompression, reducing nitrogen narcosis or allowing safer deep diving. Description A breathing gas is a mixture of gaseous chemical elements and compounds used for respiration. Air is the most common and only natural breathing gas. Other mixtures of gases, or pure oxygen, are also used in breathing equipment and enclosed habitats such as scuba equipment, surface supplied diving equipment, recompression chambers, high-altitude mountaineering, high-flying aircraft, submarines, space suits, spacecraft, medical life support and first aid equipment, and anaesthetic machines. Contents Oxygen is the essential component for any breathing gas, at a partial pressure of between roughly 0.16 and 1.60 bar at the ambient pressure, occasionally lower for high altitude mountaineering, or higher for hyperbaric oxygen treatment. The oxygen is usually the only metabolically active component unless the gas is an anaesthetic mixture. Some of the oxygen in the breathing gas is consumed by the metabolic processes, and the inert components are unchanged, and serve mainly to dilute the oxygen to an appropriate concentration, and are therefore also known as diluent gases. Most breathing gases therefore are a mixture of oxygen and one or more metabolically inert gases. Breathing gases for hyperbaric use have been developed to improve on the performance of ordinary air by reducing the risk of decompression sickness, reducing the duration of decompression, reducing nitrogen narcosis or allowing safer deep diving. The techniques used to fill diving cylinders with gases other than air are called gas blending. Breathing gases for use at ambient pressures below normal atmospheric pressure are usually pure oxygen or air enriched with oxygen to provide sufficient oxygen to maintain life and consciousness, or to allow higher levels of exertion than would be possible using air. It is common to provide the additional oxygen as a pure gas added to the breathing air at inhalation, or through a life-support system. For diving and other hyperbaric use A safe breathing gas for hyperbaric use has four essential features: It must contain sufficient oxygen to support life, consciousness and work rate of the breather. It must not contain harmful contaminants. Carbon monoxide and carbon dioxide are common poisons which may contaminate breathing gases. There are many other possibilities. It must not become toxic when being breathed at high pressure such as when underwater. Oxygen and nitrogen are examples of gases that become toxic under pressure. It must not be too dense to breathe. Work of breathing increases with density and viscosity. Maximum ventilation drops by about 50% when density is equivalent to air at 30 msw, and carbon dioxide levels rise unacceptably for moderate exercise with a gas density exceeding 6 g/litre. Breathing gas density of 10 g/litre or more may cause runaway hypercapnia even at very low work levels, with potentially fatal effects. 
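The density limits just quoted (6 g/litre for moderate exercise, 10 g/litre for runaway hypercapnia) can be checked for a planned mix with the volumetric-fraction formula given in the Density section later in this article. The Python sketch below uses approximate component densities at 1 bar and about 15 °C, and simple ideal-gas scaling with ambient pressure; the numeric constants are assumptions for illustration only:

```python
# Approximate densities at 1 bar, ~15 C, in g/L (assumed illustrative values).
DENSITY_1BAR = {"O2": 1.33, "N2": 1.17, "He": 0.167}

def mix_density(fractions: dict, pressure_bar: float) -> float:
    """Density (g/L) of a gas mix at a given absolute pressure.

    Implements rho_m = rho_1*F1 + rho_2*F2 + ... from the Density section,
    with ideal-gas scaling of density proportional to absolute pressure.
    """
    rho_1bar = sum(DENSITY_1BAR[gas] * f for gas, f in fractions.items())
    return rho_1bar * pressure_bar

# Trimix 21/35 at 40 msw (~5 bar absolute) stays under the 6 g/L guideline:
trimix = {"O2": 0.21, "N2": 0.44, "He": 0.35}
print(mix_density(trimix, 5.0))  # ~4.3 g/L

# Air at the same depth is right at the recommended limit:
air = {"O2": 0.21, "N2": 0.79}
print(mix_density(air, 5.0))     # ~6.0 g/L
```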
These common diving breathing gases are used: Air is a mixture of 21% oxygen, 78% nitrogen, and approximately 1% other trace gases, primarily argon; to simplify calculations this last 1% is usually treated as if it were nitrogen. Being freely available and simple to use, it is the most common diving gas. As its nitrogen component causes nitrogen narcosis, it is considered to have a safe depth limit of about 40 metres for most divers, although the maximum operating depth (MOD) of air, taking an allowable oxygen partial pressure of 1.6 bar, is about 66 metres. Breathing air is air meeting specified standards for contaminants. Pure oxygen is mainly used to speed the shallow decompression stops at the end of a military, commercial, or technical dive. Risk of acute oxygen toxicity increases rapidly at depths greater than 6 metres of sea water. It was much used in frogmen's rebreathers, and is still used by attack swimmers. Mixed gases are mixtures of two or more gases specifically blended for use as breathing gas for divers. They may be mixed to the composition required before the dive, for open circuit use, or produced in the breathing circuit of a rebreather during the dive. Diving using mixed gases is referred to as mixed gas diving. Nitrox is a mixture of oxygen and nitrogen, made either by mixing oxygen with air or by removing nitrogen from air, and generally refers to mixtures which are more than 21% oxygen. It can be used as a tool to accelerate in-water decompression stops or to decrease the risk of decompression sickness and thus prolong a dive (a common misconception is that the diver can go deeper; this is not true owing to a shallower maximum operating depth than on conventional air). Trimix is a mixture of oxygen, nitrogen and helium and is often used at depth in technical diving and commercial diving instead of air to decrease density, reduce nitrogen narcosis and to avoid the dangers of oxygen toxicity. Heliox is a mixture of oxygen and helium and is often used in the deep phase of a commercial deep dive to eliminate nitrogen narcosis and reduce density, to limit work of breathing. Heliox is the standard mixture type for deep offshore saturation diving. Heliair is a form of trimix that is easily blended from helium and air without using pure oxygen. It always has a 21:79 ratio of oxygen to nitrogen; the balance of the mix is helium. Hydreliox is a mixture of oxygen, helium, and hydrogen and is used for dives below 130 metres in commercial diving. Experimental work using hydreliox is also done in deep technical diving, where the hydrogen is used to reduce HPNS. Hydrox, a gas mixture of hydrogen and oxygen, is used as a breathing gas in very deep diving. Neox (also called neonox) is a mixture of oxygen and neon sometimes employed in deep commercial diving. It is rarely used due to its cost. Also, DCS symptoms produced by neon ("neox bends") have a poor reputation, being widely reported to be more severe than those produced by an exactly equivalent dive-table and mix with helium. Breathing air Breathing air is atmospheric air with a standard of purity suitable for human breathing in the specified application. For hyperbaric use, the partial pressure of contaminants is increased in proportion to the absolute pressure, and must be limited to a safe composition for the depth or pressure range in which it is to be used. 
The boundaries set by authorities may differ slightly, as the effects vary gradually with concentration and between people, and are not accurately predictable. Normoxic, where the oxygen content does not differ greatly from that of air and allows continuous safe use at atmospheric pressure. Hyperoxic, or oxygen enriched, where the oxygen content exceeds atmospheric levels, generally to a level where there is some measurable physiological effect over long term use, and sometimes requiring special procedures for handling due to increased fire hazard. The associated risks are oxygen toxicity at depth and fire, particularly in the breathing apparatus. Hypoxic, where the oxygen content is less than that of air, generally to the extent that there is a significant risk of measurable physiological effect over the short term. The immediate risk is usually hypoxic incapacitation at or near the surface. Individual component gases Breathing gases for diving are mixed from a small number of component gases which provide special characteristics to the mixture which are not available from atmospheric air. Oxygen Oxygen (O2) must be present in every breathing gas. This is because it is essential to the human body's metabolic process, which sustains life. The human body cannot store oxygen for later use as it does with food. If the body is deprived of oxygen for more than a few minutes, unconsciousness and death result. The tissues and organs within the body (notably the heart and brain) are damaged if deprived of oxygen for much longer than four minutes. Filling a diving cylinder with pure oxygen costs around five times more than filling it with compressed air. As oxygen supports combustion and causes rust in diving cylinders, it should be handled with caution when gas blending. Oxygen has historically been obtained by fractional distillation of liquid air, but is increasingly obtained by non-cryogenic technologies such as pressure swing adsorption (PSA) and vacuum swing adsorption (VSA) technologies. The fraction of the oxygen component of a breathing gas mixture is sometimes used when naming the mix: hypoxic mixes, strictly, contain less than 21% oxygen, although often a boundary of 16% is used, and are designed only to be breathed at depth as a "bottom gas" where the higher pressure increases the partial pressure of oxygen to a safe level. Trimix, Heliox and Heliair are gas blends commonly used for hypoxic mixes and are used in professional and technical diving as deep breathing gases. A minimum operating depth may be assigned to a hypoxic gas mixture, based on the depth at which the partial pressure is equal to the minimum oxygen partial pressure acceptable to the person or organisation using the gas. normoxic mixes have the same proportion of oxygen as air, 21%. The maximum operating depth of a normoxic mix is essentially the same as that of air, since the oxygen fraction is the same. Trimix with between 17% and 21% oxygen is often described as normoxic because it contains a high enough proportion of oxygen to be safe to breathe at the surface. hyperoxic mixes have more than 21% oxygen. Enriched Air Nitrox (EANx) is a typical hyperoxic breathing gas. Hyperoxic mixtures, when compared to air, cause oxygen toxicity at shallower depths but can be used to shorten decompression stops by drawing dissolved inert gases out of the body more quickly. The fraction of the oxygen determines the greatest depth at which the mixture can safely be used to avoid oxygen toxicity. This depth is called the maximum operating depth. The concentration of oxygen in a gas mix depends on the fraction and the pressure of the mixture. 
It is expressed by the partial pressure of oxygen (PO2). The partial pressure of any component gas in a mixture is calculated as: partial pressure = total absolute pressure × volume fraction of gas component For the oxygen component, PO2 = P × FO2 where: PO2 = partial pressure of oxygen P = total pressure FO2 = volume fraction of oxygen content The minimum safe partial pressure of oxygen in a breathing gas is commonly held to be 16 kPa (0.16 bar). Below this partial pressure the diver may be at risk of unconsciousness and death due to hypoxia, depending on factors including individual physiology and level of exertion. When a hypoxic mix is breathed in shallow water it may not have a high enough PO2 to keep the diver conscious. For this reason normoxic or hyperoxic "travel gases" are used at medium depth between the "bottom" and "decompression" phases of the dive. The maximum safe PO2 in a breathing gas depends on exposure time, the level of exercise and the security of the breathing equipment being used. It is typically between 100 kPa (1 bar) and 160 kPa (1.6 bar); for dives of less than three hours it is commonly considered to be 140 kPa (1.4 bar), although the U.S. Navy has been known to authorize dives with a PO2 of as much as 180 kPa (1.8 bar). At high PO2 or longer exposures, the diver risks oxygen toxicity, which may result in a seizure. Each breathing gas has a maximum operating depth that is determined by its oxygen content. For therapeutic recompression and hyperbaric oxygen therapy partial pressures of 2.8 bar are commonly used in the chamber, but there is no risk of drowning if the occupant loses consciousness. For longer periods such as in saturation diving, 0.4 bar can be tolerated over several weeks. Oxygen analysers are used to measure the oxygen partial pressure in the gas mix. Divox is breathing grade oxygen labelled for diving use. In the Netherlands, pure oxygen for breathing purposes is regarded as medicinal as opposed to industrial oxygen, such as that used in welding, and is only available on medical prescription. The diving industry registered Divox as a trademark for breathing grade oxygen to circumvent the strict rules concerning medicinal oxygen, thus making it easier for (recreational) scuba divers to obtain oxygen for blending their breathing gas. In most countries, there is no difference in purity between medical oxygen and industrial oxygen, as they are produced by exactly the same methods and manufacturers, but labeled and filled differently. The chief difference between them is that the record-keeping trail is much more extensive for medical oxygen, to more easily identify the exact manufacturing trail of a "lot" or batch of oxygen, in case problems with its purity are discovered. Aviation grade oxygen is similar to medical oxygen, but may have a lower moisture content. Diluent gases Gases which have no metabolic function in the breathing gas are used to dilute the gas, and are therefore classed as diluent gases. Some of them have a reversible narcotic effect at high partial pressure, and must therefore be limited to avoid excessive narcotic effects at the maximum pressure at which they are intended to be breathed. Diluent gases also affect the density of the gas mixture and thereby the work of breathing. Nitrogen Nitrogen (N2) is a diatomic gas and the main component of air, which is the cheapest and most common breathing gas used for diving. It causes nitrogen narcosis in the diver, so its use is limited to shallower dives. Nitrogen can cause decompression sickness. 
Equivalent air depth is used to estimate the decompression requirements of a nitrox (oxygen/nitrogen) mixture. Equivalent narcotic depth is used to estimate the narcotic potency of trimix (oxygen/helium/nitrogen mixture). Many divers find that the level of narcosis caused by a dive, whilst breathing air, is a comfortable maximum. Nitrogen in a gas mix is almost always obtained by adding air to the mix. Helium Helium (He) is an inert gas that is less narcotic than nitrogen at equivalent pressure (in fact there is no evidence for any narcosis from helium at all), and it has a much lower density, so it is more suitable for deeper dives than nitrogen. Helium is equally able to cause decompression sickness. At high pressures, helium also causes high-pressure nervous syndrome, which is a central nervous system irritation syndrome which is in some ways opposite to narcosis. Helium mixture fills are considerably more expensive than air fills due to the cost of helium and the cost of mixing and compressing the mix. Helium is not suitable for dry suit inflation owing to its poor thermal insulation properties – compared to air, which is regarded as a reasonable insulator, helium has six times the thermal conductivity. Helium's low molecular weight (monatomic MW=4, compared with diatomic nitrogen MW=28) raises the pitch of the breather's voice, which may impede communication. This is because the speed of sound is faster in a lower molecular weight gas, which raises the resonant frequencies of the vocal tract. Helium leaks from damaged or faulty valves more readily than other gases because atoms of helium are smaller, allowing them to pass through smaller gaps in seals. Helium is found in significant amounts only in natural gas, from which it is extracted at low temperatures by fractional distillation. Neon Neon (Ne) is an inert gas sometimes used in deep commercial diving but is very expensive. Like helium, it is less narcotic than nitrogen, but unlike helium, it does not distort the diver's voice. Compared to helium, neon has superior thermal insulating properties. Hydrogen Hydrogen (H2) has been used in deep diving gas mixes but is very explosive when mixed with more than about 4 to 5% oxygen (such as the oxygen found in breathing gas). This limits use of hydrogen to deep dives and imposes complicated protocols to ensure that excess oxygen is cleared from the breathing equipment before breathing hydrogen starts. Like helium, it raises the pitch of the diver's voice. The hydrogen-oxygen mix when used as a diving gas is sometimes referred to as Hydrox. Mixtures containing both hydrogen and helium as diluents are termed Hydreliox. Unwelcome components of breathing gases for diving Many gases are not suitable for use in diving breathing gases. Here is an incomplete list of gases commonly present in a diving environment: Argon Argon (Ar) is an inert gas that is more narcotic than nitrogen, so is not generally suitable as a diving breathing gas. Argox is used for decompression research. It is sometimes used for dry suit inflation by divers whose primary breathing gas is helium-based, because of argon's good thermal insulation properties. Argon is more expensive than air or oxygen, but considerably less expensive than helium. Argon is a component of natural air, and constitutes 0.934% by volume of the Earth's atmosphere. Carbon dioxide Carbon dioxide (CO2) is produced by metabolism in the human body and can cause carbon dioxide poisoning. 
When breathing gas is recycled in a rebreather or life support system, the carbon dioxide is removed by scrubbers before the gas is re-used. Carbon monoxide Carbon monoxide (CO) is a highly toxic gas that competes with dioxygen for binding to hemoglobin, thereby preventing the blood from carrying oxygen (see carbon monoxide poisoning). It is typically produced by incomplete combustion. Four common sources are: Internal combustion engine exhaust gas containing CO in the air being drawn into a diving air compressor. CO in the intake air cannot be removed by ordinary particulate filters. The exhausts of all internal combustion engines running on petroleum fuels contain some CO, and this is a particular problem on boats, where the intake of the compressor cannot be arbitrarily moved as far as desired from the engine and compressor exhausts. Heating of lubricants inside the compressor may vaporize them sufficiently to be available to a compressor intake or intake system line. In some cases hydrocarbon lubricating oil may be drawn into the compressor's cylinder directly through damaged or worn seals, and the oil may (and usually will) then undergo combustion, being ignited by the immense compression ratio and subsequent temperature rise. Since heavy oils do not burn well – especially when not atomized properly – incomplete combustion will result in carbon monoxide production. A similar process is thought to potentially happen to any particulate material, which contains "organic" (carbon-containing) matter, especially in cylinders which are used for hyperoxic gas mixtures. If the compressor air filter(s) fail, ordinary dust will be introduced to the cylinder, which contains organic matter (since it usually contains humus). A more severe danger is that air particulates on boats and industrial areas, where cylinders are filled, often contain carbon-particulate combustion products (these are what make a dirty rag black), and these represent a more severe CO danger when introduced into a cylinder. Carbon monoxide is generally avoided as far as is reasonably practicable by positioning of the air intake in uncontaminated air, filtration of particulates from the intake air, use of suitable compressor design and appropriate lubricants, and ensuring that running temperatures are not excessive. Where the residual risk is excessive, a hopcalite catalyst can be used in the high pressure filter to convert carbon monoxide into carbon dioxide, which is far less toxic. Hydrocarbons Hydrocarbons (CxHy) are present in compressor lubricants and fuels. They can enter diving cylinders as a result of contamination, leaks, or due to incomplete combustion near the air intake. They can act as a fuel in combustion, increasing the risk of explosion, especially in high-oxygen gas mixtures. Inhaling oil mist can damage the lungs and ultimately cause the lungs to degenerate with severe lipid pneumonia or emphysema. Moisture content The process of compressing gas into a diving cylinder removes moisture from the gas. This is good for corrosion prevention in the cylinder but means that the diver inhales very dry gas. The dry gas extracts moisture from the diver's lungs while underwater, contributing to dehydration, which is also thought to be a predisposing risk factor of decompression sickness. It is also uncomfortable, causing a dry mouth and throat and making the diver thirsty. 
This problem is reduced in rebreathers because the soda lime reaction, which removes carbon dioxide, also puts moisture back into the breathing gas. The relative humidity and temperature of exhaled gas are also relatively high, and there is a cumulative effect due to rebreathing. In hot climates, open circuit diving can accelerate heat exhaustion because of dehydration. Another concern with regard to moisture content is the tendency of moisture to condense as the gas is decompressed while passing through the regulator; this, coupled with the extreme reduction in temperature, also due to the decompression, can cause the moisture to solidify as ice. This icing up in a regulator can cause moving parts to seize and the regulator to fail or free flow. This is one of the reasons that scuba regulators are generally constructed from brass, and chrome plated (for protection). Brass, with its good thermal conductive properties, quickly conducts heat from the surrounding water to the cold, newly decompressed air, helping to prevent icing up.

Gas analysis
Gas mixtures must generally be analysed, either in process or after blending, for quality control. This is particularly important for breathing gas mixtures, where errors can affect the health and safety of the end user. It is difficult to detect most gases that are likely to be present in diving cylinders because they are colourless, odourless and tasteless. Electronic sensors exist for some gases, such as oxygen analysers, helium analysers, carbon monoxide detectors and carbon dioxide detectors. Oxygen analysers are commonly found underwater in rebreathers. Oxygen and helium analysers are often used on the surface during gas blending to determine the percentage of oxygen or helium in a breathing gas mix. Chemical and other types of gas detection methods are not often used in recreational diving, but are used for periodic quality testing of compressed breathing air from diving air compressors.

Breathing gas standards
Standards for breathing gas quality are published by national and international organisations, and may be enforced in terms of legislation. In the UK, the Health and Safety Executive indicates that the requirements for breathing gases for divers are based on BS EN 12021:2014. Specifications are listed for oxygen compatible air; nitrox mixtures produced by adding oxygen, removing nitrogen, or mixing nitrogen and oxygen; mixtures of helium and oxygen (heliox); mixtures of helium, nitrogen and oxygen (trimix); and pure oxygen, for both open circuit and reclaim systems, and for high pressure and low pressure supply (above and below 40 bar). Oxygen content is variable depending on the operating depth, but the tolerance depends on the gas fraction range, being ±0.25% for an oxygen fraction below 10% by volume, ±0.5% for a fraction between 10% and 20%, and ±1% for a fraction over 20%. Water content is limited by the risks of icing of control valves and corrosion of containment surfaces – higher humidity is not a physiological problem – and is generally specified as a dew point. Other specified contaminants are carbon dioxide, carbon monoxide, oil, and volatile hydrocarbons, which are limited by their toxic effects. Other possible contaminants should be analysed based on risk assessment, and the required frequency of testing for contaminants is also based on risk assessment. In Australia, breathing air quality is specified by Australian Standard 2299.1, Section 3.13 Breathing Gas Quality.
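The tolerance bands quoted above lend themselves to a simple programmatic check when a finished blend is analysed. The following is a minimal sketch only: the function names are invented for this example, and the bands are simply transcribed from the BS EN 12021 figures in the previous paragraph.

```python
def o2_tolerance(nominal_fraction):
    """Allowed deviation (as a volume fraction) for a nominal oxygen fraction,
    using the BS EN 12021 bands quoted above: +/-0.25% below 10% O2,
    +/-0.5% from 10% to 20%, and +/-1% above 20%."""
    if nominal_fraction < 0.10:
        return 0.0025
    elif nominal_fraction <= 0.20:
        return 0.005
    return 0.01

def blend_in_tolerance(nominal_fraction, analysed_fraction):
    """True if the analysed oxygen fraction is within tolerance of nominal."""
    return abs(analysed_fraction - nominal_fraction) <= o2_tolerance(nominal_fraction)

# A nitrox 32 fill analysed at 32.7% O2 passes (band is +/-1%),
# but one analysed at 33.4% O2 does not.
print(blend_in_tolerance(0.32, 0.327))  # True
print(blend_in_tolerance(0.32, 0.334))  # False
```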
Diving gas blending
Gas blending (or gas mixing) of breathing gases for diving is the filling of gas cylinders with non-air breathing gas mixtures. Filling cylinders with a mixture of gases has dangers for both the filler and the diver. During filling there is a risk of fire due to the use of oxygen and a risk of explosion due to the use of high-pressure gases. The composition of the mix must be safe for the depth and duration of the planned dive. If the concentration of oxygen is too lean, the diver may lose consciousness due to hypoxia; if it is too rich, the diver may develop oxygen toxicity. The concentration of inert gases, such as nitrogen and helium, is planned and checked to avoid nitrogen narcosis and decompression sickness. Methods used include batch mixing by partial pressure or by mass fraction, and continuous blending processes. Completed blends are analysed for composition for the safety of the user. Gas blenders may be required by legislation to prove competence if filling for other persons.

Density
Excessive density of a breathing gas can raise the work of breathing to intolerable levels, and can cause carbon dioxide retention at lower densities. Helium is used as a component to reduce density as well as to reduce narcosis at depth. Like partial pressure, the density of a mixture of gases is in proportion to the volumetric fractions of the component gases and the absolute pressure. The ideal gas laws are adequately precise for gases at respirable pressures. The density of a gas mixture at a given temperature and pressure can be calculated as:

ρm = (ρ1V1 + ρ2V2 + ... + ρnVn) / (V1 + V2 + ... + Vn)

where
ρm = density of the gas mixture
ρ1 ... ρn = densities of the component gases
V1 ... Vn = partial volumes of the component gases

Since the gas fraction Fi (volumetric fraction) of each gas can be expressed as Vi / (V1 + V2 + ... + Vn), by substitution:

ρm = ρ1F1 + ρ2F2 + ... + ρnFn

A short numerical sketch applying this relation is given below, after the Oxygen therapy paragraph.

Hypobaric breathing gases
Breathing gases for use at reduced ambient pressure are used for high altitude flight in unpressurised aircraft, in space flight, particularly in space suits, and for high altitude mountaineering. In all these cases, the primary consideration is providing an adequate partial pressure of oxygen. In some cases the breathing gas has oxygen added to make up a sufficient concentration, and in other cases the breathing gas may be pure or nearly pure oxygen. Closed circuit systems may be used to conserve the breathing gas, which may be in limited supply – in the case of mountaineering the user must carry the supplemental oxygen, and in space flight the cost of lifting mass into orbit is very high.

Medical breathing gases
Medical use of breathing gases other than air includes oxygen therapy and anesthesia applications.

Oxygen therapy
Oxygen is required by people for normal cell metabolism. Air is typically 21% oxygen by volume. This is normally sufficient, but in some circumstances the oxygen supply to tissues is compromised. Oxygen therapy, also known as supplemental oxygen, is the use of oxygen as a medical treatment. Uses include low blood oxygen, carbon monoxide toxicity, cluster headaches, and maintaining enough oxygen while inhaled anesthetics are given. Long-term oxygen is often useful in people with chronically low oxygen, such as from severe COPD or cystic fibrosis. Oxygen can be given in a number of ways, including by nasal cannula, by face mask, and inside a hyperbaric chamber.
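Returning to the density relation given under Density above: the sketch below computes the density of a trimix blend at depth, treating the components as ideal gases, and also applies the standard metric formulas for equivalent air depth and equivalent narcotic depth mentioned at the start of this section. The component densities, the approximation of 1 bar per 10 m of seawater, the convention of counting only nitrogen as narcotic, and all names below are illustrative assumptions rather than figures from this article.

```python
# Approximate densities of component gases at 1 bar and 15 degrees C, in kg/m^3
# (assumed textbook values, not figures from this article).
DENSITY_1BAR = {"O2": 1.34, "He": 0.17, "N2": 1.17}

def mix_density(fractions, depth_m):
    """Density of a mix at depth: rho_m = sum(rho_i * F_i), scaled by absolute
    pressure (ideal-gas behaviour; ~1 bar per 10 m of seawater plus 1 bar at
    the surface)."""
    pressure_bar = 1.0 + depth_m / 10.0
    rho_surface = sum(DENSITY_1BAR[gas] * f for gas, f in fractions.items())
    return rho_surface * pressure_bar

def equivalent_air_depth(fo2, depth_m):
    """Metric EAD for a nitrox mix: the depth at which air gives the same
    nitrogen partial pressure."""
    return (depth_m + 10.0) * (1.0 - fo2) / 0.79 - 10.0

def equivalent_narcotic_depth(fn2, depth_m):
    """Metric END, counting only nitrogen as narcotic (one common convention)."""
    return (depth_m + 10.0) * fn2 / 0.79 - 10.0

# Trimix 21/35 (21% O2, 35% He, 44% N2) at 50 m:
tmx = {"O2": 0.21, "He": 0.35, "N2": 0.44}
print(round(mix_density(tmx, 50), 1))                # ~5.1 kg/m^3 (air: ~7.3)
print(round(equivalent_narcotic_depth(0.44, 50), 1)) # ~23.4 m
print(round(equivalent_air_depth(0.32, 30), 1))      # EAN32 at 30 m: ~24.4 m
```

The example illustrates the point made in the Density section: substituting helium for much of the nitrogen cuts the gas density at 50 m by roughly a third relative to air, while also reducing the narcotic loading to that of a much shallower air dive.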
High concentrations of oxygen can cause oxygen toxicity, such as lung damage, or result in respiratory failure in those who are predisposed. It can also dry out the nose and increase the risk of fires in those who smoke. The target oxygen saturation recommended depends on the condition being treated. In most conditions a saturation of 94–98% is recommended, while in those at risk of carbon dioxide retention saturations of 88–92% are preferred, and in those with carbon monoxide toxicity or cardiac arrest the saturation should be as high as possible. The use of oxygen in medicine became common around 1917. It is on the World Health Organization's List of Essential Medicines. The cost of home oxygen is about US$150 a month in Brazil and US$400 a month in the United States. Home oxygen can be provided either by oxygen tanks or an oxygen concentrator. Oxygen is believed to be the most common treatment given in hospitals in the developed world.

Anaesthetic gases
The most common approach to general anaesthesia is through the use of inhaled general anesthetics. Each has its own potency, which is correlated with its solubility in oil. This relationship exists because the drugs bind directly to cavities in proteins of the central nervous system, although several theories of general anaesthetic action have been described. Inhalational anesthetics are thought to exert their effects on different parts of the central nervous system. For instance, the immobilizing effect of inhaled anesthetics results from an effect on the spinal cord, whereas sedation, hypnosis and amnesia involve sites in the brain. An inhalational anaesthetic is a chemical compound possessing general anaesthetic properties that can be delivered via inhalation. Agents of significant contemporary clinical interest include volatile anaesthetic agents such as isoflurane, sevoflurane and desflurane, and anaesthetic gases such as nitrous oxide and xenon.

Administration
Anaesthetic gases are administered by anaesthetists (a term which includes anaesthesiologists, nurse anaesthetists, and anaesthesiologist assistants) through an anaesthesia mask, laryngeal mask airway or tracheal tube connected to an anaesthetic vaporiser and an anaesthetic delivery system. The anaesthetic machine (UK English), anesthesia machine (US English) or Boyle's machine is used to support the administration of anaesthesia. The most common type of anaesthetic machine in use in the developed world is the continuous-flow anaesthetic machine, which is designed to provide an accurate and continuous supply of medical gases (such as oxygen and nitrous oxide), mixed with an accurate concentration of anaesthetic vapour (such as isoflurane), and deliver this to the patient at a safe pressure and flow. Modern machines incorporate a ventilator, suction unit, and patient monitoring devices. Exhaled gas is passed through a scrubber to remove carbon dioxide, and the anaesthetic vapour and oxygen are replenished as required before the mixture is returned to the patient.

See also

References

External links

Industrial gases Breathing apparatus components
Breathing gas
Chemistry
6,407
36,685,545
https://en.wikipedia.org/wiki/Low-tide%20elevation
Low-tide elevation is a naturally formed area of land which is above water and surrounded by water at low tide but submerged at high tide. It may be a mudflat or reef.

Legal status
Low-tide elevations may be used as basepoints for the calculation of maritime zones unless they lie at a distance exceeding the breadth of the territorial sea (12 nautical miles) from the nearest mainland or island. According to the Asia Maritime Transparency Initiative of the Center for Strategic and International Studies, "If an LTE (low-tide elevation) is located within maritime zones of a littoral state, such as territorial sea, exclusive economic zone, and continental shelf, it automatically belongs to that state."

References

Sources

See also
Territorial waters
Exclusive economic zone
Continental shelf
International waters
United Nations Convention on the Law of the Sea

Hydrography Law of the sea
Low-tide elevation
Environmental_science
170
73,554,993
https://en.wikipedia.org/wiki/Ligand-targeted%20liposome
A ligand-targeted liposome (LTL) is a nanocarrier with specific ligands attached to its surface to enhance localization for targeted drug delivery. The targeting ability of LTLs enhances cellular localization and uptake of these liposomes for therapeutic or diagnostic purposes. LTLs have the potential to enhance drug delivery by decreasing peripheral systemic toxicity, increasing in vivo drug stability, enhancing cellular uptake, and increasing efficiency for chemotherapeutics and other applications. Liposomes are beneficial in therapeutic manufacturing because of low batch-to-batch variability, easy synthesis, favorable scalability, and strong biocompatibility. Ligand-targeting technology enhances liposomes by adding targeting properties for directed drug delivery.

Ligand selection
Ligands are molecules responsible for binding to receptors in the cellular targeting process. Surface-coupled ligands offer a greater degree of freedom to move on the liposome membrane for optimal interactions. Ligands are typically monoclonal antibodies (mAbs) or antibody fragments, but can also include other molecules such as ARPG, proteins, peptides, vitamins, carbohydrates, and glycoproteins. The choice of ligand can significantly influence the behavioral and functional properties of a ligand-targeted liposome. Antibody fragments have lower immunogenicity and improved pharmacokinetics. mAbs are unique and can be used for inhibition of DNA repair, terminating the cell cycle, and triggering apoptosis, all of which factor into applications for anticancer drugs. Peptides are relatively easy and affordable to prepare, have low antigenicity and lower opsonization, and are relatively resistant to enzymatic degradation. Proteins can target the transferrin receptor membrane glycoprotein. Sugars and vitamins are recognized by cellular transport receptors. Ligand choice is based on receptor expression, ligand internalization, binding affinity, and type of ligand. Ligands alone are not able to carry an efficient payload for therapeutic levels but can carry more of the agent when combined with liposomes.

Ligand attachment to liposome
Ligands can be attached to liposomes through ligation to create ligand-targeted liposomes in a variety of ways. Liposomes have a lipid outer layer that can be used to bind ligands. Conjugation of the ligand to the surface of a liposome can be achieved through multiple routes. Covalent binding is a prominent method, relying on anchoring between the long-chain fatty acids and the ligand. Combinations of covalent binding through disulfide linkages, heating, and hydrophobic interactions can be used depending on the properties of the liposome and ligand. Adsorption and membrane fusion are non-covalent methods for the attachment of monoclonal antibodies. Chemical linkages such as covalent bonds are more effective at increasing the amount of ligand attached to the carrier than non-covalent methods. During chemical coupling for manufacturing, it is crucial that ligands maintain their integrity when attached to the liposome surface. If ligands, such as antibodies, do not maintain binding specificity, proper orientation, and coupling efficiency, the liposome will not be effective.

Cellular interaction and delivery of contained agent
Since the ligand is responsible for cellular interaction, it is chosen for the application depending on the target site. The target site contains binding sites that the ligand targets to deliver the LTL to the desired area.
Favorable target site characteristics are determined by what is commonly expressed by tissues of the pathology of interest. Determinants can include histones, basement membrane fibrinogen, selectins, adhesion molecules, and other ligand targets. For example, in some human cancer tumors such as ovarian carcinomas, the folate receptor is over-expressed. LTLs for targeting cancer often use folate as a ligand to exploit this over-expression and localize drug delivery to the desired area. The tumor microenvironment of solid tumor cancers is also a unique targeting site. Tumor endothelial cells are important for angiogenesis, which is key to tumor growth; therefore, using LTLs to target these cells can limit the growth and vascularization of a tumor. Ligand-targeted liposomes utilize active targeting to interact with the desired cells. Once administered intravenously into blood circulation, ligand-targeted liposomes must travel to reach the target area to deliver their contents. LTLs retain the contained agent until the process of cellular uptake. Receptor-mediated endocytosis is the most common way LTLs deliver material to the cell. The targeting ligand connected to the liposome attaches to the binding site found on the targeted cell. The LTL's contents are transported to lysosomes to be processed. This process allows the molecules to cross the blood-brain barrier, which allows the drug to be delivered to tissue that is relatively difficult to reach without a specific mechanism. Less commonly, pinocytosis or phagocytosis may be used for cellular uptake of the liposome. Certain recognition sites, such as ecto-NAD+ glycohydrolase, mediate uptake to aid in the internalization and effectiveness of the LTLs. The remainder of LTLs in circulation after binding to the target site are mainly cleared through the reticuloendothelial system (RES). The RES includes organs such as the kidneys, lungs, spleen, liver, bone marrow, and lymph nodes. The liver is the primary organ for the clearance of LTLs. The RES is thought to be able to clear LTLs because fenestrations in its microvasculature allow for extravasation. Phagocytic cells within the RES break down LTLs.

Applications of LTLs in medicine
Ligand-targeted liposomes are used for a variety of applications depending on the liposome, ligand, and liposome contents. Ligand-targeted liposomes can be used for diagnostics through imaging. The liposomes can contain imaging agents to aid in visualization, such as fluorescent dyes, labeling probes, and contrast agents. Commonly, a radioactive gamma-emitter, fluorescent marker, or magnetic resonance imaging (MRI) agent is encapsulated in the liposome for this application. The active targeting mechanism of LTLs allows the target tissue to retain the imaging agent while the remaining agent is cleared from circulation. Ligand-targeted liposomes increase the specificity and sensitivity of images taken with positron emission tomography (PET), single-photon emission computed tomography (SPECT), and MRI techniques through ligand localization to receptors of interest. Biotinylated liposomes containing [67Ga], coupled with a later injection of avidin, have been shown to reduce background signal and produce the needed contrast for imaging while reducing the circulation time of the radioactive imaging agent. Molecular imaging of processes over time in vivo is also made possible using ligand-targeted nanoparticles.
As of 2015, many ligand-targeted imaging agents such as MIP-1404, MIP-1405, MIP-1072, MIP-109, and 18F-DCFBC were undergoing clinical trials. The ability of a liposome to encapsulate these imaging agents and deliver them to specific regions through ligand targeting is helpful for precision detection. Ligand-targeted liposomes are a promising method of drug delivery. These systems are efficient in delivering the drug to localized areas with low peripheral distribution, which minimizes off-target effects. The favorable biodistribution to target tissue is an encouraging property of this drug delivery system. In addition to targeting tissue with high specificity, LTLs have a short circulating half-life, so they can be quickly cleared from the bloodstream. LTLs can be used to deliver AuNRs (gold nanorods) for localized photo-thermal therapy in cancer treatment. Photodynamic therapy (PDT) is a non-invasive cancer therapy that relies on a photosensitizing (PS) pro-drug to interact with light and oxygen as a cancer therapeutic agent. PSs can be encapsulated in LTLs, allowing them to move through systemic circulation to the tumor site for ligand binding, which confines the area of their effect. Using PDT causes damage to cancer cells and tumor microvasculature. There are many liposome-based products currently approved or undergoing clinical trials. Aside from cancer therapies, ligand-targeted liposomes can also be used to target inflammation in the body that may be present due to rheumatoid arthritis, psoriasis, vascular inflammation, and organ transplantation. E-selectin is a cell-specific receptor expressed by inflamed endothelium that ligands can target. LTLs also have the potential for localized treatment of fungal infections. AmBisome (L-AMB) is an LTL that contains amphotericin B (AMPH-B), an anti-fungal treatment that is effective against a broad variety of fungal infections. AMPH-B can be toxic after prolonged exposure, making it a good candidate for the targeting and rapid clearance from systemic circulation that LTLs provide. AmBisome is also effective because the inflammation in the area of fungal activity increases vascular permeation.

Disadvantages
Consistently producing ligand-targeted liposomes through traditional methods is difficult. The process can be tedious and challenging to control, and can result in a poorly defined system. Using the 'post-insertion' technique, in which micelles formed from PEG-linked ligands are incubated with pre-formed, drug-loaded, non-targeted liposomes to combine and form LTLs, can limit the associated manufacturing challenges. When certain ligands, such as antibodies, are used, there is a risk of an immunological reaction. Liposome design, including size, charge, morphology, composition, surface characteristics, and dose size, can all influence the immune response to administered LTLs. The ligands used can elicit an immune response when introduced into the body. For example, peptide ligands such as CDX, used for brain-targeted delivery systems, are immunogenic and trigger an immune response. Complement activation-related pseudoallergy (CARPA) is a hypersensitivity syndrome that can be triggered when LTLs activate the innate immune system and the complement system. CARPA can cause many side effects including anaphylaxis, cardiopulmonary distress, and facial swelling. These side effects have the potential to be severe, which generates concern when administering LTLs to patients with health problems, especially cardiovascular issues.
This reaction can be reduced by slowing infusion rates or incorporating allergy medicines such as antihistamines into the treatment regimen. Due to the immune response, LTLs can experience the accelerated blood clearance (ABC) phenomenon. This phenomenon is more common with repeated doses of LTLs, such as multi-dose PEGylated formulas, because of immunological memory. The circulation time of the second dose has been shown to be significantly reduced, with altered pharmacokinetics, while accumulation in the spleen and liver increases. This poses challenges for clinical applications of LTLs that require multiple doses to be effective. Ligand-targeted liposomes need specific conditions to remain intact for use. Controlling environmental factors such as temperature and pH is necessary to maintain the integrity of the molecules. This can be helpful for temperature-sensitive or pH-dependent drug release, but is harmful if the pH changes at an inopportune time. This technology can also be used in combination with enzymes, as in Gal-Dox, which releases active doxorubicin when acted on by β-galactosidase. Making sure the compound does not encounter the enzyme too early is also important for effective usage. There is a possibility that LTLs lead to immunosuppression. LTLs are cleared through the RES, which is part of the innate immune system. Macrophage saturation from removing the liposomes could impair the ability of the phagocytic cells to carry out their immune functions. Significant immune suppression has not been observed in clinical cases for therapeutic doses of LTLs containing non-cytotoxic drugs.

References

Medicinal chemistry
Ligand-targeted liposome
Chemistry,Biology
2,514
58,960,535
https://en.wikipedia.org/wiki/1ES%201101-232
1ES 1101-232 is an active galactic nucleus of a distant galaxy known as a blazar. It is also a BL Lac object. An X-ray source (catalogued as A 1059-22) was first recorded by Maccagni and colleagues in a 1978 paper; they thought the source arose from a galaxy in the Abell 1146 galaxy cluster, which contained many giant elliptical galaxies. In 1989, Remillard and colleagues linked the X-ray source with a visual object and established that the object was surrounded by a large elliptical galaxy. They also discovered that the object (and galaxy) were more distant, with a redshift of 0.186. The host galaxy appears to be part of a distant galaxy cluster. Between 2004 and 2005, 1ES 1101-232 showed gamma-ray emission, which was detected by the High Energy Stereoscopic System (H.E.S.S.) array of atmospheric Cherenkov telescopes. Astronomers observed it for 43 hours, during which they studied the blazar's inner jets and the extragalactic background light. In November 2023, an X-ray flare was detected in 1ES 1101-232.

References

BL Lacertae objects Crater (constellation) Blazars
1ES 1101-232
Astronomy
252
44,633,072
https://en.wikipedia.org/wiki/Psilocybe%20subhoogshagenii
Psilocybe subhoogshagenii is a species of psilocybin mushroom in the family Hymenogastraceae. Described as new to science in 2004, it is found in Colombia, where it grows on bare clay soil in tropical forest.

See also
List of psilocybin mushrooms
List of Psilocybe species
Psilocybe hoogshagenii

References

External links

subhoogshagenii Entheogens Psychoactive fungi Psychedelic tryptamine carriers Fungi of Colombia Fungi described in 2004 Taxa named by Gastón Guzmán Fungus species
Psilocybe subhoogshagenii
Biology
116