| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
41,710,616 | https://en.wikipedia.org/wiki/Diseases%20from%20Space | Diseases from Space is a 1979 book by astronomers Fred Hoyle and Chandra Wickramasinghe, in which they propose that many of the most common diseases that afflict humanity, such as influenza, the common cold and whooping cough, have their origins in extraterrestrial sources. The two authors argue the case for outer space being the main source of these pathogens, or at least of their causative agents.
The claim connecting terrestrial disease and extraterrestrial pathogens was rejected by the scientific community.
Overview
Fred Hoyle and Chandra Wickramasinghe spent over 20 years investigating the nature and composition of interstellar dust. Though many hypotheses regarding this dust had been postulated by various astronomers since the middle of the 19th century, all were found wanting as new data on the gas and dust clouds became available. Chandra Wickramasinghe proposed the existence of polymeric compounds based on the molecule formaldehyde (H2CO).
In 1974 Wickramasinghe first proposed the hypothesis that some dust in interstellar space was largely organic (containing carbon and nitrogen), and followed this up with further research supporting the hypothesis. Fred Hoyle and Wickramasinghe later proposed the identification of bicyclic aromatic compounds from an analysis of the ultraviolet extinction feature at 2175 Å, thus arguing for the existence of polycyclic aromatic hydrocarbon molecules in space.
Hoyle and Wickramasinghe went further and speculated that the overall spectroscopic data of cosmic dust and gas clouds also matched those for desiccated bacteria. This led them to conclude that diseases such as influenza and the common cold are incident from space and fall upon the Earth in what they term "pathogenic patches." Hoyle and Wickramasinghe viewed the process of evolution in a manner at variance with the standard Darwinian model. They speculated that genetic material in the form of incoming pathogens from the cosmos provided the mechanism for driving the evolutionary engine. Hoyle died in 2001, and Wickramasinghe still advocates for these views and beliefs.
Scientific consensus
The claim connecting terrestrial disease and extraterrestrial pathogens was rejected by the scientific community. On 24 May 2003 The Lancet published a letter from Wickramasinghe, jointly signed by Milton Wainwright and Jayant Narlikar, in which they speculated that the virus that causes severe acute respiratory syndrome (SARS) could be extraterrestrial in origin rather than originating from chickens. The Lancet subsequently published three responses to this letter, showing that the hypothesis was not evidence-based and casting doubt on the quality of the experiments referenced by Wickramasinghe in his letter.
Publication history
First published in 1979 by J.M. Dent & Sons Ltd.
Published in 1980 by Harper & Row.
Published in 1981 by Sphere Books Ltd.
See also
Directed panspermia
Fringe science
The Andromeda Strain, a 1969 novel about a disease from space
References
1979 non-fiction books
1979 controversies
Fringe science
Astronomical controversies
Panspermia
Books about diseases and disorders
Books about extraterrestrial life
Cosmic dust
Medical controversies
Space medicine
Space research | Diseases from Space | Astronomy,Biology | 665 |
5,707,633 | https://en.wikipedia.org/wiki/Splice%20site%20mutation | A splice site mutation is a genetic mutation that inserts, deletes or changes a number of nucleotides at the specific site at which splicing takes place during the processing of precursor messenger RNA into mature messenger RNA. Splice site consensus sequences that drive exon recognition are located at the very termini of introns. Deletion of a splice site results in one or more introns remaining in the mature mRNA and may lead to the production of abnormal proteins: the mRNA transcript retains intronic sequence that normally should not be included, since introns are supposed to be removed while the exons are expressed.
The mutation must occur at the specific site at which intron splicing occurs: within non-coding regions of a gene, directly adjacent to an exon. The mutation can be an insertion, deletion, frameshift, etc. The splicing process itself is controlled by specific sequences, known as splice-donor and splice-acceptor sequences, which surround each exon. Mutations in these sequences may lead to retention of large segments of intronic DNA in the mRNA, or to entire exons being spliced out of the mRNA. These changes could result in production of a nonfunctional protein. An intron is separated from its neighboring exons by splice sites: the donor site at the intron's 5' end and the acceptor site at its 3' end signal to the spliceosome where the cut should be made. These recognition sites are essential in the processing of mRNA. The average vertebrate gene consists of multiple small exons (average size, 137 nucleotides) separated by introns that are considerably larger.
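To make the donor/acceptor geometry concrete, the following is a minimal Python sketch (with hypothetical sequences) that scans a DNA-sense sequence for the nearly invariant GT...AG intron boundary dinucleotides. Real splice sites are defined by the broader consensus sequences described above and are predicted with statistical models, so this is an illustration only.

```python
# Minimal sketch: locate canonical GT...AG intron boundaries in a DNA-sense
# sequence. Real splice-site recognition involves longer consensus sequences;
# the sequences below are hypothetical.

def find_candidate_introns(seq: str, min_len: int = 20) -> list:
    """Return (start, end) pairs where seq[start:start+2] == 'GT' (donor)
    and seq[end-2:end] == 'AG' (acceptor)."""
    seq = seq.upper()
    donors = [i for i in range(len(seq) - 1) if seq[i:i + 2] == "GT"]
    acceptor_ends = [i + 2 for i in range(len(seq) - 1) if seq[i:i + 2] == "AG"]
    return [(d, a) for d in donors for a in acceptor_ends if a - d >= min_len]

wild_type = "ATGGCCGTAAGTCTCTCTCTCTCTCTCTAGGCTTAA"
mutant = wild_type.replace("GTAAGT", "GAAAGT", 1)  # hypothetical donor-site change

print(find_candidate_introns(wild_type))  # two candidate introns
print(find_candidate_introns(mutant))     # the mutated donor's intron is lost
```

Running this shows how a single substitution at a donor site removes a candidate intron, mirroring the splice-site destruction described above.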
Background
In 1993, Richard J. Roberts and Phillip Allen Sharp received the Nobel Prize in Physiology or Medicine for their discovery of "split genes". Using adenovirus as a model in their research, they discovered splicing—the processing of pre-mRNA into mRNA by the removal of introns. These two scientists discovered the existence of splice sites, thereby changing the face of genomics research. They also discovered that the splicing of messenger RNA can occur in different ways, opening up the possibility for a mutation to occur.
Technology
Today, many different types of technologies exist with which splice sites can be located and analyzed. The Human Splicing Finder is an online database stemming from Human Genome Project data. The database identifies thousands of mutations relevant to medical and health fields, and provides critical research information regarding splice site mutations. The tool identifies pre-mRNA splicing errors, calculates potential splice sites using algorithms, and correlates results with several other online genomic databases, such as the Ensembl genome browser.
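Splice-site prediction tools of this kind typically score candidate sites against position weight matrices derived from known splice sites. The sketch below shows the general log-odds idea; the frequency values are invented placeholders, not the matrices that Human Splicing Finder actually uses.

```python
import math

# Illustrative position-weight-matrix (PWM) scoring of a 9-base donor-site
# window (3 exonic bases + GT + 4 intronic bases). Frequencies are
# hypothetical placeholders.
PWM = [
    {"A": 0.35, "C": 0.35, "G": 0.20, "T": 0.10},
    {"A": 0.60, "C": 0.10, "G": 0.15, "T": 0.15},
    {"A": 0.10, "C": 0.05, "G": 0.80, "T": 0.05},
    {"A": 0.00, "C": 0.00, "G": 1.00, "T": 0.00},  # the G of "GT"
    {"A": 0.00, "C": 0.00, "G": 0.00, "T": 1.00},  # the T of "GT"
    {"A": 0.60, "C": 0.10, "G": 0.15, "T": 0.15},
    {"A": 0.70, "C": 0.10, "G": 0.10, "T": 0.10},
    {"A": 0.10, "C": 0.10, "G": 0.70, "T": 0.10},
    {"A": 0.15, "C": 0.20, "G": 0.15, "T": 0.50},
]
BACKGROUND = 0.25  # uniform base composition assumed for simplicity
PSEUDO = 1e-4      # pseudocount avoids log(0) for never-observed bases

def donor_score(window: str) -> float:
    """Log-odds score (bits) of a 9-base window against the donor PWM."""
    return sum(
        math.log2((PWM[i][base] + PSEUDO) / BACKGROUND)
        for i, base in enumerate(window.upper())
    )

print(donor_score("CAGGTAAGT"))  # consensus-like donor: strongly positive
print(donor_score("CAGGAAAGT"))  # T>A in the invariant GT: score collapses
```

A higher score indicates a window that looks more like known donor sites; a mutation in the invariant GT makes the score collapse, which is how such tools flag splice-site-destroying variants.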
Role in Disease
Due to the sensitive location of splice sites, mutations in the acceptor or donor regions of splice sites can be detrimental to the individual. Many different types of diseases stem from anomalies within splice sites.
Cancer
A study of the role of splice site mutations in cancer found that a particular splice site mutation was shared by a set of women with both breast and ovarian cancer. An intronic single base-pair substitution destroys an acceptor site, thus activating a cryptic splice site, leading to a 59 base-pair insertion and chain termination. The four families with both breast and ovarian cancer had chain-termination mutations in the N-terminal half of the protein.
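A toy calculation (with hypothetical sequences) shows why a 59 base-pair insertion leads to chain termination: 59 is not a multiple of 3, so the reading frame is disrupted and a premature stop codon is encountered, truncating the protein.

```python
# Toy demonstration: a 59-bp insertion disrupts the reading frame and a
# premature stop codon terminates translation early. Sequences are
# hypothetical, not from any real gene.

STOP = {"TAA", "TAG", "TGA"}

def codons(seq: str):
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

def codons_before_stop(seq: str) -> int:
    """Number of codons translated before hitting a stop codon (or the end)."""
    n = 0
    for codon in codons(seq):
        if codon in STOP:
            break
        n += 1
    return n

coding = "ATG" + "GCT" * 30 + "TAA"           # hypothetical normal ORF
insert = "GTTAACCGGTT" * 5 + "ACGT"           # 59 bp of retained intronic sequence
mutant = coding[:33] + insert + coding[33:]   # insertion after codon 11

print(codons_before_stop(coding))  # 31 codons: the full-length product
print(codons_before_stop(mutant))  # 19 codons: premature chain termination
```

In this toy case the premature stop happens to fall within the inserted sequence itself; the general point is that an insertion whose length is not a multiple of three almost always terminates the chain early.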
Splice-site mutations are recurrently found in key lymphoma genes such as BCL7A and CD79B due to aberrant somatic hypermutation, as the sequence targeted by AID overlaps with the splice site sequences.
Dementia
According to a research study by Hutton, M. et al., a missense mutation occurring in the 5' region of the RNA associated with the tau protein was found to be correlated with inherited dementia (known as FTDP-17). The splice-site mutations all destabilize a potential stem–loop structure which is most likely involved in regulating the alternative splicing of exon 10 on chromosome 17. Consequently, the 5' splice site is used more often and an increased proportion of tau transcripts that include exon 10 is created. This increase in mRNA raises the proportion of tau containing four microtubule-binding repeats, which is consistent with the neuropathology described in several families with FTDP-17, a type of inherited dementia.
Epilepsy
Some types of epilepsy may be caused by a splice site mutation.
In addition to a mutation in a stop codon, a mutation at a 3' splice site was found in the gene coding for cystatin B in progressive myoclonus epilepsy patients. This combination of mutations was not found in unaffected individuals. By comparing sequences with and without the splice site mutation, investigators determined that a G-to-C nucleotide transversion occurs at the last position of the first intron. This transversion occurs in the gene that codes for cystatin B. Individuals with progressive myoclonus epilepsy possess a mutated form of this gene, which results in decreased output of mature mRNA and subsequently decreased protein expression.
A study has also shown that a type of childhood absence epilepsy (CAE) with febrile seizures may be linked to a splice site mutation in the sixth intron of the GABRG2 gene. According to this study, a point mutation in intron 6 created the splice-donor site mutation, producing a nonfunctional protein product and thus a nonfunctional GABRG2 subunit in affected individuals.
Hematological Disorders
Several genetic diseases may be the result of splice site mutations. For example, mutations that cause incorrect splicing of β-globin mRNA are responsible for some cases of β-thalassemia. Another example is thrombotic thrombocytopenic purpura (TTP). TTP is caused by deficiency of ADAMTS-13, so a splice site mutation in the ADAMTS-13 gene can cause TTP. It is estimated that 15% of all point mutations causing human genetic diseases occur within a splice site.
Parathyroid Deficiency
When a splice site mutation occurs in intron 2 of the gene that produces parathyroid hormone, a parathyroid deficiency can result. In one study, a G-to-C substitution in the splice site of intron 2 caused exon skipping in the messenger RNA transcript. The skipped exon contains the initiation codon needed to produce parathyroid hormone, and this failure of initiation causes the deficiency.
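A small sketch (with hypothetical exon sequences) can illustrate this mechanism: if the skipped exon carries the ATG initiation codon, the resulting transcript has no start codon and translation cannot initiate.

```python
# Toy illustration of exon skipping: skipping the exon that carries the
# ATG start codon leaves a transcript on which translation cannot begin.
# Exon sequences are hypothetical.

exons = ["GCCACC", "ATGGCA", "GAAGGC", "TGCTAA"]  # exon 2 holds the ATG

def mature_mrna(exon_list, skip=None):
    """Join exons into a mature mRNA, optionally skipping one exon."""
    return "".join(e for i, e in enumerate(exon_list) if i != skip)

normal = mature_mrna(exons)
skipped = mature_mrna(exons, skip=1)  # splice-site mutation skips exon 2

print("ATG" in normal)   # True: translation can initiate
print("ATG" in skipped)  # False: no initiation codon, no protein produced
```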
Analysis
Using the model organism Drosophila melanogaster, extensive genomic and sequencing data have been compiled. A prediction model exists in which researchers can upload genomic sequence to a splice site prediction database and gather information about where splice sites are likely to be located. The Berkeley Drosophila Genome Project provides such a predictor, along with high-quality annotation of euchromatic data. The splice site predictor can be a valuable tool for researchers studying human disease in this model organism.
Splice site mutations can be analyzed using information theory.
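In Schneider-style individual information analysis, for example, each base at each position of a candidate site contributes 2 + log2(f) bits, where f is that base's observed frequency at that position among known sites; a mutation that sharply lowers a site's total information content predicts loss of splicing function. A minimal sketch, using an invented frequency matrix:

```python
import math

# Sketch of individual-information scoring of a 5-base donor region.
# Each base contributes 2 + log2(f) bits; the frequency matrix below is
# hypothetical, not derived from real splice-site alignments.

FREQ = [
    {"A": 0.05, "C": 0.05, "G": 0.85, "T": 0.05},  # donor 'G'
    {"A": 0.05, "C": 0.05, "G": 0.05, "T": 0.85},  # donor 'T'
    {"A": 0.60, "C": 0.10, "G": 0.15, "T": 0.15},
    {"A": 0.70, "C": 0.10, "G": 0.10, "T": 0.10},
    {"A": 0.10, "C": 0.10, "G": 0.70, "T": 0.10},
]

def individual_information(site: str) -> float:
    """Total information content of a candidate site, in bits."""
    return sum(2 + math.log2(FREQ[i][b]) for i, b in enumerate(site.upper()))

print(individual_information("GTAAG"))  # consensus-like site: ~7.8 bits
print(individual_information("GAAAG"))  # T>A donor mutation: drops sharply
```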
References
Mutation
Gene expression | Splice site mutation | Chemistry,Biology | 1,610 |
19,477,293 | https://en.wikipedia.org/wiki/Biology%20of%20depression | The biology of depression is the attempt to identify a biochemical origin of depression, as opposed to theories that emphasize psychological or situational causes.
Scientific studies have found that different brain areas show altered activity in humans with major depressive disorder (MDD). Further, nutritional deficiencies in magnesium, vitamin D, and tryptophan have been linked with depression; these deficiencies may be caused by the individual's environment, but they have a biological impact. Several theories concerning the biologically based cause of depression have been suggested over the years, including theories revolving around monoamine neurotransmitters, neuroplasticity, neurogenesis, inflammation and the circadian rhythm. Physical illnesses, including hypothyroidism and mitochondrial disease, can also trigger depressive symptoms.
Neural circuits implicated in depression include those involved in the generation and regulation of emotion, as well as in reward. Abnormalities are commonly found in the lateral prefrontal cortex, whose putative function is generally considered to involve regulation of emotion. Regions involved in the generation of emotion and reward such as the amygdala, anterior cingulate cortex (ACC), orbitofrontal cortex (OFC), and striatum are frequently implicated as well. These regions are innervated by monoaminergic nuclei, and tentative evidence suggests a potential role for abnormal monoaminergic activity.
Genetic factors
Difficulty of gene studies
Historically, candidate gene studies have been a major focus of study. However, because the number of possible candidate genes is very large, the likelihood of choosing a correct candidate gene is small and Type I errors (false positives) are highly likely. Candidate gene studies frequently possess a number of flaws, including frequent genotyping errors and being statistically underpowered. These effects are compounded by the usual assessment of genes without regard for gene-gene interactions. These limitations are reflected in the fact that no candidate gene has reached genome-wide significance.
Gene candidates
5-HTTLPR
The short allele of the 5-HTTLPR, the serotonin transporter promoter polymorphism, has been associated with increased risk of depression; since the 1990s, however, results have been inconsistent. Other genes that have been linked to a gene–environment interaction include CRHR1, FKBP5 and BDNF, the first two of which are related to the stress reaction of the HPA axis, and the latter of which is involved in neurogenesis. Candidate gene analysis of 5-HTTLPR on depression was inconclusive on its effect, either alone or in combination with life stress.
A 2003 study proposed that a gene-environment interaction (GxE) may explain why life stress is a predictor for depressive episodes in some individuals, but not in others, depending on an allelic variation of the serotonin-transporter-linked promoter region (5-HTTLPR). This hypothesis was widely discussed in both the scientific literature and popular media, where it was dubbed the "Orchid gene", but has conclusively failed to replicate in much larger samples, and the observed effect sizes in earlier work are not consistent with the observed polygenicity of depression.
BDNF
BDNF polymorphisms have also been hypothesized to have a genetic influence, but early findings and research failed to replicate in larger samples, and the effect sizes found by earlier estimates are inconsistent with the observed polygenicity of depression.
SIRT1 and LHPP
A 2015 GWAS study in Han Chinese women positively identified two variants in intronic regions near SIRT1 and LHPP with a genome-wide significant association.
Norepinephrine transporter polymorphisms
Attempts to find a correlation between norepinephrine transporter polymorphisms and depression have yielded negative results.
One review identified multiple frequently studied candidate genes. The genes encoding the 5-HTT and 5-HT2A receptor were inconsistently associated with depression and treatment response. Mixed results were found for brain-derived neurotrophic factor (BDNF) Val66Met polymorphisms. Polymorphisms in the tryptophan hydroxylase gene were found to be tentatively associated with suicidal behavior. A meta-analysis of 182 case-control genetic studies published in 2008 found Apolipoprotein E epsilon 2 to be protective, and GNB3 825T, MTHFR 677T, SLC6A4 44 bp insertions or deletions, and the SLC6A3 40 bp VNTR 9/10 genotype to confer risk.
Circadian rhythm
Depression may be related to abnormalities in the circadian rhythm, or biological clock.
A well-synchronized circadian rhythm is critical for maintaining optimal health. Adverse changes and alterations in the circadian rhythm have been associated with various neurological and mood disorders, including depression.
Sleep
Sleep disturbance is the most prominent symptom in depressive patients. Studies of sleep electroencephalograms have shown characteristic changes in depression such as reductions in non-rapid eye movement sleep production, disruptions of sleep continuity and disinhibition of rapid eye movement (REM) sleep. Rapid eye movement (REM) sleep—the stage in which dreaming occurs—may be quick to arrive and intense in depressed people. REM sleep depends on decreased serotonin levels in the brain stem, and is impaired by compounds, such as antidepressants, that increase serotonergic tone in brain stem structures. Overall, the serotonergic system is least active during sleep and most active during wakefulness. Prolonged wakefulness due to sleep deprivation activates serotonergic neurons, leading to processes similar to the therapeutic effect of antidepressants, such as the selective serotonin reuptake inhibitors (SSRIs). Depressed individuals can exhibit a significant lift in mood after a night of sleep deprivation. SSRIs may directly depend on the increase of central serotonergic neurotransmission for their therapeutic effect, the same system that impacts cycles of sleep and wakefulness.
Light therapy
Research on the effects of light therapy on seasonal affective disorder suggests that light deprivation is related to decreased activity in the serotonergic system and to abnormalities in the sleep cycle, particularly insomnia. Exposure to light also targets the serotonergic system, providing more support for the important role this system may play in depression. Sleep deprivation and light therapy both target the same brain neurotransmitter system and brain areas as antidepressant drugs, and are now used clinically to treat depression. Light therapy, sleep deprivation and sleep time displacement (sleep phase advance therapy) are being used in combination to quickly interrupt a deep depression in people who are hospitalized for MDD (major depressive disorder).
Both increased and decreased sleep length appear to be risk factors for depression. People with MDD sometimes show diurnal and seasonal variation of symptom severity, even in non-seasonal depression. Diurnal mood improvement was associated with activity of dorsal neural networks. Increased mean core temperature was also observed. One hypothesis proposed that depression was a result of a phase shift.
Daytime light exposure correlates with decreased serotonin transporter activity, which may underlie the seasonality of some depression.
Monoamines
Monoamines are neurotransmitters that include serotonin, dopamine, norepinephrine, and epinephrine.
Monoamine hypothesis of depression
Many antidepressant drugs acutely increase synaptic levels of the monoamine neurotransmitter serotonin, but they may also enhance the levels of norepinephrine and dopamine. The observation of this efficacy led to the monoamine hypothesis of depression, which postulates that the deficit of certain neurotransmitters is responsible for depression, and even that certain neurotransmitters are linked to specific symptoms. Normal serotonin levels have been linked to mood and behavior regulation, sleep, and digestion; norepinephrine to the fight-or-flight response; and dopamine to movement, pleasure, and motivation. Some have also proposed relationships between specific monoamines and specific phenotypes: serotonin in sleep and suicide, norepinephrine in dysphoria, fatigue, apathy, and cognitive dysfunction, and dopamine in loss of motivation and psychomotor symptoms. The main limitation of the monoamine hypothesis of depression is the therapeutic lag between initiation of antidepressant treatment and perceived improvement of symptoms. One explanation for this therapeutic lag is that the initial increase in synaptic serotonin is only temporary, as firing of serotonergic neurons in the dorsal raphe adapts via the activity of 5-HT1A autoreceptors. The therapeutic effect of antidepressants is thought to arise from autoreceptor desensitization over a period of time, which eventually elevates firing of serotonergic neurons.
Serotonin
The serotonin "chemical imbalance" theory of depression, proposed in the 1960s, is not supported by the available scientific evidence. SSRIs alter the balance of serotonin inside and outside of neurons: their clinical antidepressant effect (which is robust in severe depression) is likely due to more complex changes in neuronal functioning which occur as a downstream consequence of this.
Initial studies of serotonin in depression examined peripheral measures such as the serotonin metabolite 5-hydroxyindoleacetic acid (5-HIAA) and platelet binding. The results were generally inconsistent, and may not generalize to the central nervous system. However, evidence from receptor binding studies and pharmacological challenges provides some evidence for dysfunction of serotonin neurotransmission in depression. Serotonin may indirectly influence mood by altering emotional processing biases that are seen at both the cognitive/behavioral and neural level. Pharmacologically reducing serotonin synthesis can produce negative affective biases, and pharmacologically enhancing synaptic serotonin can attenuate them. These emotional processing biases may explain the therapeutic lag.
Dopamine
While various abnormalities have been observed in dopaminergic systems, results have been inconsistent. People with MDD have an increased reward response to dextroamphetamine compared to controls, and it has been suggested that this results from hypersensitivity of dopaminergic pathways due to natural hypoactivity. While polymorphisms of the D4 and D3 receptor have been implicated in depression, associations have not been consistently replicated. Similar inconsistency has been found in postmortem studies, but various dopamine receptor agonists show promise in treating MDD. There is some evidence that there is decreased nigrostriatal pathway activity in people with melancholic depression (psychomotor retardation). Further supporting the role of dopamine in depression is the consistent finding of decreased cerebrospinal fluid and jugular metabolites of dopamine, as well as post mortem findings of altered dopamine receptor D3 and dopamine transporter expression. Studies in rodents have supported a potential mechanism involving stress-induced dysfunction of dopaminergic systems.
Catecholamines
A number of lines of evidence indicative of decreased adrenergic activity in depression have been reported. Findings include the decreased activity of tyrosine hydroxylase, decreased size of the locus coeruleus, increased α2 adrenergic receptor density, and decreased α1 adrenergic receptor density. Furthermore, norepinephrine transporter knockout in mouse models increases tolerance to stress, implicating norepinephrine in depression.
One method used to study the role of monoamines is monoamine depletion. Depletion of tryptophan (the precursor of serotonin), tyrosine and phenylalanine (precursors to dopamine) does result in decreased mood in those with a predisposition to depression, but not in persons lacking the predisposition. On the other hand, inhibition of dopamine and norepinephrine synthesis with alpha-methyl-para-tyrosine does not consistently result in decreased mood.
Monoamine oxidase
An offshoot of the monoamine hypothesis suggests that monoamine oxidase A (MAO-A), an enzyme which metabolizes monoamines, may be overly active in depressed people. This would, in turn, cause the lowered levels of monoamines. This hypothesis received support from a PET study, which found significantly elevated activity of MAO-A in the brain of some depressed people. In genetic studies, the alterations of MAO-A-related genes have not been consistently associated with depression. Contrary to the assumptions of the monoamine hypothesis, lowered but not heightened activity of MAO-A was associated with depressive symptoms in adolescents. This association was observed only in maltreated youth, indicating that both biological (MAO genes) and psychological (maltreatment) factors are important in the development of depressive disorders. In addition, some evidence indicates that disrupted information processing within neural networks, rather than changes in chemical balance, might underlie depression.
Receptor binding
As of 2012, efforts to determine differences in neurotransmitter receptor expression or function in the brains of people with MDD using positron emission tomography (PET) had shown inconsistent results. Using the PET imaging technology and reagents available as of 2012, it appeared that the D1 receptor may be underexpressed in the striatum of people with MDD. The 5-HT1A receptor binding literature is inconsistent; however, it leans towards a general decrease in the mesiotemporal cortex. 5-HT2A receptor binding appears to be dysregulated in people with MDD. Results from studies on 5-HTT binding are variable, but tend to indicate higher levels in people with MDD. Results from D2/D3 receptor binding studies are too inconsistent to draw any conclusions. Evidence supports increased MAO activity in people with MDD, and it may even be a trait marker (not changed by response to treatment). Muscarinic receptor binding appears to be increased in depression and, given ligand binding dynamics, suggests increased cholinergic activity.
Four meta-analyses on receptor binding in depression have been performed: two on the serotonin transporter (5-HTT), one on 5-HT1A, and another on the dopamine transporter (DAT). One meta-analysis on 5-HTT reported that binding was reduced in the midbrain and amygdala, with the former correlating with greater age, and the latter correlating with depression severity. Another meta-analysis on 5-HTT, including both post-mortem and in vivo receptor binding studies, reported that while in vivo studies found reduced 5-HTT in the striatum, amygdala and midbrain, post-mortem studies found no significant associations. 5-HT1A was found to be reduced in the anterior cingulate cortex, mesiotemporal lobe, insula, and hippocampus, but not in the amygdala or occipital lobe. The most commonly used 5-HT1A ligands are not displaced by endogenous serotonin, indicating that receptor density or affinity is reduced. Dopamine transporter binding is not changed in depression.
Limitations
Since the 1990s, research has uncovered multiple limitations of the monoamine hypothesis, and its inadequacy has been criticized within the psychiatric community. First, serotonin system dysfunction cannot be the sole cause of depression, because not all patients treated with antidepressants show improvement, despite the fact that most patients still show a rapid increase in synaptic serotonin. Second, if significant mood improvements do occur, this is often not for at least two to four weeks. Proponents of the monoamine hypothesis argue that this lag occurs because the enhancement of neurotransmitter activity is the result of autoreceptor desensitization, which can take weeks. Intensive investigation has failed to find convincing evidence of a primary dysfunction of a specific monoamine system in people with MDD. Antidepressants that do not act through the monoamine system, such as tianeptine and opipramol, have been known for a long time. There have also been inconsistent findings with regard to levels of serum 5-HIAA, a metabolite of serotonin. Experiments with pharmacological agents that cause depletion of monoamines have shown that this depletion does not cause depression in healthy people. Another problem is that drugs that deplete monoamines may actually have antidepressant properties. Further, some have argued that depression may be marked by a hyperserotonergic state. Already limited, the monoamine hypothesis has been further oversimplified when presented to the general public. Finally, several meta-analyses have shown that monoamine-based antidepressants have a limited effect that becomes most obvious at higher levels of depression, primarily because of a decline in the placebo effect for patients with severe depression. These findings support the perspective that monoamine dysfunction is not the primary mechanism of MDD.
Emotional processing and neural circuits
Emotional bias
People with MDD show a number of biases in emotional processing, such as a tendency to rate happy faces more negatively, and a tendency to allocate more attentional resources to sad expressions. Depressed people also have impaired recognition of happy, angry, disgusted, fearful and surprised, but not sad, facial expressions. Functional neuroimaging has demonstrated hyperactivity of various brain regions in response to negative emotional stimuli, and hypoactivity in response to positive stimuli. One meta-analysis reported that depressed subjects showed decreased activity in the left dorsolateral prefrontal cortex and increased activity in the amygdala in response to negative stimuli. Another meta-analysis reported elevated hippocampus and thalamus activity in a subgroup of depressed subjects who were medication naive, not elderly, and had no comorbidities. The therapeutic lag of antidepressants has been suggested to be a result of antidepressants modifying emotional processing, leading to mood changes. This is supported by the observation that both acute and subchronic SSRI administration increases response to positive faces. Antidepressant treatment appears to reverse mood-congruent biases in limbic, prefrontal, and fusiform areas. dlPFC response is enhanced and amygdala response is attenuated during processing of negative emotions, the former of which is thought to reflect increased top-down regulation. The fusiform gyrus and other visual processing areas respond more strongly to positive stimuli with antidepressant treatment, which is thought to reflect a positive processing bias. These effects do not appear to be unique to serotonergic or noradrenergic antidepressants, but also occur in other forms of treatment such as deep brain stimulation.
Neural circuits
One meta-analysis of functional neuroimaging in depression observed a pattern of abnormal neural activity hypothesized to reflect an emotional processing bias. Relative to controls, people with MDD showed hyperactivity of circuits in the salience network (SN), composed of the pulvinar nuclei, the insula, and the dorsal anterior cingulate cortex (dACC), as well as decreased activity in regulatory circuits composed of the striatum and dlPFC.
A neuroanatomical model called the limbic-cortical model has been proposed to explain early biological findings in depression. The model attempts to relate specific symptoms of depression to neurological abnormalities. Elevated resting amygdala activity was proposed to underlie rumination, as stimulation of the amygdala has been reported to be associated with the intrusive recall of negative memories. The ACC was divided into pregenual (pgACC) and subgenual (sgACC) regions, with the former being electrophysiologically associated with fear, and the latter being metabolically implicated in sadness in healthy subjects. Hyperactivity of the lateral orbitofrontal and insular regions, along with abnormalities in lateral prefrontal regions, was suggested to underlie maladaptive emotional responses, given these regions' roles in reward learning. This model, and another termed "the cortical striatal model", which focused more on abnormalities in the cortico-basal ganglia-thalamo-cortical loop, have been supported by recent literature. Reduced striatal activity, elevated OFC activity, and elevated sgACC activity were all findings consistent with the proposed models. However, amygdala activity was reported to be decreased, contrary to the limbic-cortical model. Furthermore, only lateral prefrontal regions were modulated by treatment, indicating that prefrontal areas are state markers (i.e., dependent upon mood), while subcortical abnormalities are trait markers (i.e., reflect a susceptibility).
Reward
While depression severity as a whole is not correlated with a blunted neural response to reward, anhedonia is directly correlated with reduced activity in the reward system. The study of reward in depression is limited by heterogeneity in the definition and conceptualization of reward and anhedonia. Anhedonia is broadly defined as a reduced ability to feel pleasure, but questionnaires and clinical assessments rarely distinguish between motivational "wanting" and consummatory "liking". While a number of studies suggest that depressed subjects rate positive stimuli less positively and as less arousing, a number of studies fail to find a difference. Furthermore, response to natural rewards such as sucrose does not appear to be attenuated. General affective blunting may explain "anhedonic" symptoms in depression, as meta-analyses of both positive and negative stimuli reveal reduced ratings of intensity. As anhedonia is a prominent symptom of depression, direct comparison of depressed with healthy subjects reveals increased activation of the subgenual anterior cingulate cortex (sgACC), and reduced activation of the ventral striatum, in particular the nucleus accumbens (NAcc), in response to positive stimuli. Although the finding of reduced NAcc activity during reward paradigms is fairly consistent, the NAcc is made up of a functionally diverse range of neurons, and reduced blood-oxygen-level dependent (BOLD) signal in this region could indicate a variety of things, including reduced afferent activity or reduced inhibitory output. Nevertheless, these regions are important in reward processing, and their dysfunction in depression is thought to underlie anhedonia. Residual anhedonia that is not well targeted by serotonergic antidepressants is hypothesized to result from inhibition of dopamine release by activation of 5-HT2C receptors in the striatum. The response to reward in the medial orbitofrontal cortex (OFC) is attenuated in depression, while lateral OFC response is enhanced to punishment. The lateral OFC shows sustained response to the absence of reward or punishment, and is thought to be necessary for modifying behavior in response to changing contingencies. Hypersensitivity in the lOFC may lead to depression by producing an effect similar to learned helplessness in animals.
Elevated response in the sgACC is a consistent finding in neuroimaging studies using a number of paradigms including reward related tasks. Treatment is also associated with attenuated activity in the sgACC, and inhibition of neurons in the rodent homologue of the sgACC, the infralimbic cortex (IL), produces an antidepressant effect. Hyperactivity of the sgACC has been hypothesized to lead to depression via attenuating the somatic response to reward or positive stimuli. Contrary to studies of functional magnetic resonance imaging response in the sgACC during tasks, resting metabolism is reduced in the sgACC. However, this is only apparent when correcting for the prominent reduction in sgACC volume associated with depression; structural abnormalities are evident at a cellular level, as neuropathological studies report reduced sgACC cell markers. The model of depression proposed from these findings by Drevets et al. suggests that reduced sgACC activity results in enhanced sympathetic nervous system activity and blunted HPA axis feedback. Activity in the sgACC may also not be causal in depression, as the authors of one review that examined neuroimaging in depressed subjects during emotional regulation hypothesized that the pattern of elevated sgACC activity reflected increased need to modulate automatic emotional responses in depression. More extensive sgACC and general prefrontal recruitment during positive emotional processing was associated with blunted subcortical response to positive emotions, and subject anhedonia. This was interpreted by the authors to reflect a downregulation of positive emotions by the excessive recruitment of the prefrontal cortex.
Neuroanatomy
While a number of neuroimaging findings are consistently reported in people with major depressive disorder, the heterogeneity of depressed populations presents difficulties interpreting these findings. For example, averaging across populations may hide certain subgroup related findings; while reduced dlPFC activity is reported in depression, a subgroup may present with elevated dlPFC activity. Averaging may also yield statistically significant findings, such as reduced hippocampal volumes, that are actually present in a subgroup of subjects. Due to these issues and others, including the longitudinal consistency of depression, most neural models are likely inapplicable to all depression.
Structural neuroimaging
Meta-analyses performed using seed-based d mapping have reported grey matter reductions in a number of frontal regions. One meta-analysis of early onset general depression reported grey matter reductions in the bilateral anterior cingulate cortex (ACC) and dorsomedial prefrontal cortex (dmPFC). One meta-analysis on first-episode depression observed distinct patterns of grey matter reductions in medication-free and combined populations; medication-free depression was associated with reductions in the right dorsolateral prefrontal cortex, right amygdala, and right inferior temporal gyrus; analysis of a combination of medication-free and medicated depression found reductions in the left insula, right supplementary motor area, and right middle temporal gyrus. Another review distinguishing medicated and medication-free populations, albeit not restricted to people with their first episode of MDD, found reductions in the combined population in the bilateral superior, right middle, and left inferior frontal gyrus, along with the bilateral parahippocampus. Increases in thalamic and ACC grey matter were reported in the medication-free and medicated populations, respectively. A meta-analysis performed using activation likelihood estimation reported reductions in the paracingulate cortex, dACC and amygdala.
Using statistical parametric mapping, one meta-analysis replicated previous findings of reduced grey matter in the ACC, medial prefrontal cortex, inferior frontal gyrus, hippocampus and thalamus; however, reductions in OFC and ventromedial prefrontal cortex grey matter were also reported.
Two studies on depression from the ENIGMA consortium have been published, one on cortical thickness and the other on subcortical volume. Reduced cortical thickness was reported in the bilateral OFC, ACC, insula, middle temporal gyri, fusiform gyri, and posterior cingulate cortices, while surface area deficits were found in medial occipital, inferior parietal, orbitofrontal and precentral regions. Subcortical abnormalities, including reductions in hippocampus and amygdala volumes, were also found, and were especially pronounced in early onset depression.
Multiple meta-analyses have been performed on studies assessing white matter integrity using fractional anisotropy (FA). Reduced FA has been reported in the corpus callosum (CC) in both first-episode medication-naive and general major depressive populations. The extent of CC reductions differs from study to study. People with MDD who have not taken antidepressants before have been reported to have reductions only in the body of the CC in some studies and only in the genu of the CC in others. General MDD samples, on the other hand, have been reported to have reductions in the body of the CC, in the body and genu of the CC, or only in the genu of the CC. Reductions of FA have also been reported in the anterior limb of the internal capsule (ALIC) and superior longitudinal fasciculus.
Functional neuroimaging
Studies of resting state activity have utilized a number of indicators of resting state activity, including regional homogeneity (ReHO), amplitude of low frequency fluctuations (ALFF), fractional amplitude of low frequency fluctuations (fALFF), arterial spin labeling (ASL), and positron emission tomography (PET) measures of regional cerebral blood flow or metabolism.
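As a concrete example of one of these indicators: ALFF is commonly computed as the summed amplitude of a voxel time series' spectrum within a low-frequency band (often 0.01–0.08 Hz), and fALFF normalizes that by the amplitude over the whole spectrum. A minimal sketch for a single voxel, with an invented repetition time and signal:

```python
import numpy as np

# Minimal single-voxel ALFF/fALFF sketch: band amplitude in 0.01-0.08 Hz
# (ALFF), normalized by total spectral amplitude (fALFF). The TR and the
# simulated signal are made up for illustration.

def alff_falff(ts: np.ndarray, tr: float, band=(0.01, 0.08)):
    ts = ts - ts.mean()
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    amp = np.abs(np.fft.rfft(ts))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    alff = amp[in_band].sum()
    falff = alff / amp[1:].sum()  # exclude the DC bin from the denominator
    return alff, falff

rng = np.random.default_rng(0)
tr = 2.0                                # seconds per volume (hypothetical)
t = np.arange(240) * tr                 # an 8-minute "scan"
ts = np.sin(2 * np.pi * 0.03 * t) + rng.normal(0, 0.5, t.size)  # 0.03 Hz signal
print(alff_falff(ts, tr))
```

Real analyses apply this voxel-wise to preprocessed fMRI data and typically standardize the resulting maps before group comparison.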
Studies using ALFF and fALFF have reported elevations in ACC activity, with the former primarily reporting more ventral findings and the latter more dorsal findings. A conjunction analysis of ALFF and CBF studies converged on the left insula, with previously untreated people having increased insula activity. Elevated caudate CBF was also reported. A meta-analysis combining multiple indicators of resting activity reported elevated anterior cingulate, striatal, and thalamic activity, and reduced left insula, post-central gyrus and fusiform gyrus activity. An activation likelihood estimate (ALE) meta-analysis of PET/SPECT resting state studies reported reduced activity in the left insula, pregenual and dorsal anterior cingulate cortex, and elevated activity in the thalamus, caudate, anterior hippocampus and amygdala. Compared to the ALE meta-analysis of PET/SPECT studies, a study using multi-kernel density analysis reported hyperactivity only in the pulvinar nuclei of the thalamus.
Brain regions
Research on the brains of people with MDD usually shows disturbed patterns of interaction between multiple parts of the brain. Several areas of the brain are implicated in studies seeking to more fully understand the biology of depression:
Subgenual cingulate
Studies have shown that Brodmann area 25, also known as the subgenual cingulate, is metabolically overactive in treatment-resistant depression. This region is extremely rich in serotonin transporters and is considered a governor for a vast network involving areas like the hypothalamus and brain stem, which influence changes in appetite and sleep; the amygdala and insula, which affect mood and anxiety; the hippocampus, which plays an important role in memory formation; and some parts of the frontal cortex responsible for self-esteem. Thus disturbances in this area, or a smaller than normal size of this area, contribute to depression. Deep brain stimulation has been targeted to this region in order to reduce its activity in people with treatment-resistant depression.
Prefrontal cortex
One review reported hypoactivity in the prefrontal cortex of those with depression compared to controls. The prefrontal cortex is involved in emotional processing and regulation, and dysfunction of this process may be involved in the etiology of depression. One study on antidepressant treatment found an increase in PFC activity in response to administration of antidepressants. One meta-analysis published in 2012 found that areas of the prefrontal cortex were hypoactive in response to negative stimuli in people with MDD. One study suggested that areas of the prefrontal cortex are part of a network of regions, including the dorsal and pregenual cingulate, bilateral middle frontal gyrus, insula and superior temporal gyrus, that appear to be hypoactive in people with MDD. However, the authors cautioned that the exclusion criteria, lack of consistency and small samples limit these results.
Amygdala
The amygdala, a structure involved in emotional processing, appears to be hyperactive in those with major depressive disorder. The amygdala in unmedicated depressed persons tended to be smaller than in those who were medicated; however, aggregate data show no difference between depressed and healthy persons. During emotional processing tasks the right amygdala is more active than the left; however, there are no differences during cognitive tasks, and at rest only the left amygdala appears to be hyperactive. One study, however, found no difference in amygdala activity during emotional processing tasks.
Hippocampus
Atrophy of the hippocampus has been observed during depression, consistent with animal models of stress and neurogenesis.
Stress can cause depression and depression-like symptoms through monoaminergic changes in several key brain regions as well as suppression of hippocampal neurogenesis. This leads to alterations in emotion- and cognition-related brain regions as well as HPA axis dysfunction. Through this dysfunction, the effects of stress can be exacerbated, including its effects on 5-HT. Furthermore, some of these effects are reversed by antidepressant action, which may act by increasing hippocampal neurogenesis. This leads to a restoration in HPA activity and stress reactivity, thus reversing the deleterious effects induced by stress on 5-HT.
HPA axis
The hypothalamic-pituitary-adrenal (HPA) axis is a chain of endocrine structures that are activated during the body's response to stressors of various sorts. The HPA axis involves three structures: the hypothalamus, which releases CRH, which stimulates the pituitary gland to release ACTH, which in turn stimulates the adrenal glands to release cortisol. Cortisol has a negative feedback effect on the pituitary gland and hypothalamus. The HPA axis often shows increased activation in people with MDD, but the mechanism behind this is not yet known. Increased basal cortisol levels and abnormal responses to dexamethasone challenges have been observed in people with MDD. Early life stress has been hypothesized as a potential cause of HPA dysfunction. HPA axis regulation may be examined through the dexamethasone suppression test, which tests the feedback mechanism. Non-suppression of dexamethasone is a common finding in depression, but is not consistent enough to be used as a diagnostic tool. HPA axis changes may be responsible for some of the changes found in people with MDD, such as decreased bone mineral density and increased weight. One drug, ketoconazole, currently under development, has shown promise in treating MDD.
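The negative-feedback architecture described here can be illustrated with a deliberately simple simulation in which CRH drives ACTH, ACTH drives cortisol, and cortisol suppresses the upstream releases. All rate constants are invented; the point is only that weakening the feedback term elevates basal cortisol, qualitatively resembling the blunted feedback and raised basal cortisol reported in MDD.

```python
# Toy simulation of HPA-axis negative feedback. Variables and rate constants
# are hypothetical; this is a qualitative sketch, not a physiological model.

def simulate_hpa(feedback=2.0, steps=5000, dt=0.01):
    crh, acth, cort = 1.0, 1.0, 1.0
    for _ in range(steps):
        d_crh = 1.0 / (1 + feedback * cort) - 0.5 * crh   # cortisol suppresses CRH
        d_acth = crh / (1 + feedback * cort) - 0.5 * acth  # and ACTH release
        d_cort = acth - 0.5 * cort                         # ACTH drives cortisol
        crh += d_crh * dt
        acth += d_acth * dt
        cort += d_cort * dt
    return round(crh, 2), round(acth, 2), round(cort, 2)

print(simulate_hpa(feedback=2.0))  # intact feedback: cortisol settles low
print(simulate_hpa(feedback=0.2))  # blunted feedback: basal cortisol rises
```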
Hippocampal Neurogenesis
Reduced hippocampal neurogenesis leads to a reduction in hippocampal volume. A genetically smaller hippocampus has been linked to a reduced ability to process psychological trauma and external stress, and subsequent predisposition to psychological illness. Depression without familial risk or childhood trauma has been linked to a normal hippocampal volume but localised dysfunction.
Animal models
A number of animal models exist for depression, but they are limited in that depression involves primarily subjective emotional changes. However, some of these changes are reflected in physiology and behavior, the latter of which is the target of many animal models. These models are generally assessed according to four facets of validity: the reflection of the core symptoms in the model; the predictive validity of the model; the validity of the model with regard to human characteristics of etiology; and the biological plausibility.
Different models for inducing depressive behaviors have been utilized: neuroanatomical manipulations such as olfactory bulbectomy or circuit-specific manipulations with optogenetics; genetic models such as 5-HT1A knockout or selectively bred animals; and models involving environmental manipulation associated with depression in humans, including chronic mild stress, early life stress and learned helplessness. The validity of these models in producing depressive behaviors may be assessed with a number of behavioral tests. Anhedonia and motivational deficits may, for example, be assessed by examining an animal's level of engagement with rewarding stimuli such as sucrose or intracranial self-stimulation. Anxious and irritable symptoms may be assessed with exploratory behavior in the presence of a stressful or novel environment, such as the open field test, novelty-suppressed feeding, or the elevated plus-maze. Fatigue, psychomotor poverty, and agitation may be assessed with locomotor activity, grooming activity, and open field tests.
Animal models possess a number of limitations due to the nature of depression. Some core symptoms of depression, such as rumination, low self-esteem, guilt, and depressed mood, cannot be assessed in animals as they require subjective reporting. From an evolutionary standpoint, the behavioral correlates of defeat or loss are thought to be an adaptive response to prevent further loss. Therefore, attempts to model depression that seek to induce defeat or despair may actually reflect adaptation and not disease. Furthermore, while depression and anxiety are frequently comorbid, dissociation of the two in animal models is difficult to achieve. Pharmacological assessment of validity is frequently disconnected from clinical pharmacotherapeutics in that most screening tests assess acute effects, while antidepressants normally take a few weeks to work in humans.
Neurocircuits
Regions involved in reward are common targets of manipulation in animal models of depression, including the nucleus accumbens (NAc), ventral tegmental area (VTA), ventral pallidum (VP), lateral habenula (LHb) and medial prefrontal cortex (mPFC). Tentative fMRI studies in humans demonstrate elevated LHb activity in depression. The lateral habenula projects to the RMTg to drive inhibition of dopamine neurons in the VTA during omission of reward. In animal models of depression, elevated activity has been reported in LHb neurons that project to the ventral tegmental area (ostensibly reducing dopamine release). The LHb also projects to aversion-reactive mPFC neurons, which may provide an indirect mechanism for producing depressive behaviors. Learned-helplessness-induced potentiation of LHb synapses is reversed by antidepressant treatment, providing predictive validity. A number of inputs to the LHb have been implicated in producing depressive behaviors. Silencing GABAergic projections from the NAc to the LHb reduces conditioned place preference induced by social aggression, and activation of these terminals induces CPP. Ventral pallidum firing is also elevated by stress-induced depression, an effect that is pharmacologically valid, and silencing of these neurons alleviates behavioral correlates of depression.

Tentative in vivo evidence from people with MDD suggests abnormalities in dopamine signalling. This led to early studies investigating VTA activity and manipulations in animal models of depression. Massive destruction of VTA neurons enhances depressive behaviors, while VTA neurons reduce firing in response to chronic stress. However, more recent specific manipulations of the VTA produce varying results, with the specific animal model, duration of VTA manipulation, method of VTA manipulation, and subregion of VTA manipulation all potentially leading to differential outcomes. Stress- and social-defeat-induced depressive symptoms, including anhedonia, are associated with potentiation of excitatory inputs to dopamine D2 receptor-expressing medium spiny neurons (D2-MSNs) and depression of excitatory inputs to dopamine D1 receptor-expressing medium spiny neurons (D1-MSNs). Optogenetic excitation of D1-MSNs alleviates depressive symptoms and is rewarding, while the same with D2-MSNs enhances depressive symptoms. Excitation of glutamatergic inputs from the ventral hippocampus reduces social interactions, and enhancing these projections produces susceptibility to stress-induced depression.

Manipulations of different regions of the mPFC can produce and attenuate depressive behaviors. For example, inhibiting mPFC neurons specifically in the infralimbic cortex attenuates depressive behaviors. The conflicting findings associated with mPFC stimulation, when compared to the relatively specific findings in the infralimbic cortex, suggest that the prelimbic cortex and infralimbic cortex may mediate opposing effects. mPFC projections to the raphe nuclei are largely GABAergic and inhibit the firing of serotonergic neurons. Specific activation of these regions reduces immobility in the forced swim test but does not affect open field or forced swim behavior. Inhibition of the raphe shifts the behavioral phenotype of uncontrolled stress to a phenotype closer to that of controlled stress.
Altered neuroplasticity
Recent studies have called attention to the role of altered neuroplasticity in depression. A review found a convergence of three phenomena:
Chronic stress reduces synaptic and dendritic plasticity
Depressed subjects show evidence of impaired neuroplasticity (e.g. shortening and reduced complexity of dendritic trees)
Antidepressant medications may enhance neuroplasticity at both a molecular and dendritic level.
The conclusion is that disrupted neuroplasticity is an underlying feature of depression, and is reversed by antidepressants.
Blood levels of BDNF in people with MDD increase significantly with antidepressant treatment and correlate with decreases in symptoms. Post-mortem studies of people with MDD and rat models demonstrate decreased neuronal density in the prefrontal cortex. Rat models demonstrate histological changes consistent with MRI findings in humans; however, studies on neurogenesis in humans are limited. Antidepressants appear to reverse the changes in neurogenesis in both animal models and humans.
Inflammation
Various reviews have found that general inflammation may play a role in depression. One meta-analysis of cytokines in people with MDD found increased levels of pro-inflammatory IL-6 and TNF-α relative to controls. The first theories came about when it was noticed that interferon therapy caused depression in a large number of people receiving it. Meta-analyses of cytokine levels in people with MDD have demonstrated increased levels of IL-1, IL-6 and C-reactive protein, but not IL-10. Increased numbers of T cells presenting activation markers, and increased levels of neopterin, IFN-γ, sTNFR, and IL-2 receptors, have been observed in depression. Various sources of inflammation in depressive illness have been hypothesized, including trauma, sleep problems, diet, smoking and obesity. Cytokines, by manipulating neurotransmitters, are involved in the generation of sickness behavior, which shares some overlap with the symptoms of depression. Neurotransmitters hypothesized to be affected include dopamine and serotonin, which are common targets for antidepressant drugs. Induction of indoleamine 2,3-dioxygenase by cytokines has been proposed as a mechanism by which immune dysfunction causes depression. One review found normalization of cytokine levels after successful treatment of depression. A meta-analysis published in 2014 found that the use of anti-inflammatory drugs such as NSAIDs and investigational cytokine inhibitors reduced depressive symptoms. Exercise can act as a stressor, decreasing the levels of IL-6 and TNF-α and increasing those of IL-10, an anti-inflammatory cytokine.
Inflammation is also intimately linked with metabolic processes in humans. For example, low levels of vitamin D have been associated with greater risk for depression. The role of metabolic biomarkers in depression is an active research area. Recent work has explored the potential relationship between plasma sterols and depressive symptom severity.
Oxidative stress
A marker of DNA oxidation, 8-oxo-2'-deoxyguanosine, has been found to be increased in both the plasma and urine of people with MDD. This, along with the finding of increased F2-isoprostane levels in blood, urine and cerebrospinal fluid, indicates increased damage to lipids and DNA in people with MDD. Studies of 8-oxo-2'-deoxyguanosine varied by methods of measurement and type of depression, but the F2-isoprostane level was consistent across depression types. The authors suggested lifestyle factors, dysregulation of the HPA axis, the immune system and the autonomic nervous system as possible causes. Another meta-analysis found similar results with regard to oxidative damage products, as well as decreased oxidative capacity. Oxidative DNA damage may play a role in MDD.
Mitochondrial dysfunction
Increased markers of oxidative stress relative to controls have been found in people with MDD. These markers include high levels of RNS and ROS, which have been shown to influence chronic inflammation, damaging the electron transport chain and biochemical cascades in mitochondria. This lowers the activity of enzymes in the respiratory chain, resulting in mitochondrial dysfunction. The brain is highly energy-consuming and has little capacity to store glucose as glycogen, and so depends greatly on mitochondria. Mitochondrial dysfunction has been linked to the dampened neuroplasticity observed in depressed brains.
Large-scale brain network theory
Instead of studying one brain region, studying large scale brain networks is another approach to understanding psychiatric and neurological disorders, supported by recent research that has shown that multiple brain regions are involved in these disorders. Understanding the disruptions in these networks may provide important insights into interventions for treating these disorders. Recent work suggests that at least three large-scale brain networks are important in psychopathology:
Central executive network
The central executive network is made up of fronto-parietal regions, including the dorsolateral prefrontal cortex and lateral posterior parietal cortex. This network is involved in high-level cognitive functions such as maintaining and using information in working memory, problem solving, and decision making. Deficiencies in this network are common in most major psychiatric and neurological disorders, including depression. Because this network is crucial for everyday activities, people who are depressed can show impairment in basic tasks such as taking tests and making decisions.
Default mode network
The default mode network includes hubs in the prefrontal cortex and posterior cingulate, with other prominent regions of the network in the medial temporal lobe and angular gyrus. The default mode network is usually active during mind-wandering and thinking about social situations. In contrast, during specific tasks probed in cognitive science (for example, simple attention tasks), the default network is often deactivated. Research has shown that regions in the default mode network (including medial prefrontal cortex and posterior cingulate) show greater activity when depressed participants ruminate (that is, when they engage in repetitive self-focused thinking) than when typical, healthy participants ruminate. People with MDD also show increased connectivity between the default mode network and the subgenual cingulate and the adjoining ventromedial prefrontal cortex in comparison to healthy individuals, individuals with dementia or with autism. Numerous studies suggest that the subgenual cingulate plays an important role in the dysfunction that characterizes major depression. The increased activation in the default mode network during rumination and the atypical connectivity between core default mode regions and the subgenual cingulate may underlie the tendency for depressed individuals to get "stuck" in the negative, self-focused thoughts that often characterize depression. However, further research is needed to gain a precise understanding of how these network interactions map to specific symptoms of depression.
Salience network
The salience network is a cingulate-frontal operculum network that includes core nodes in the anterior cingulate and anterior insula. It is a large-scale brain network involved in detecting and orienting attention toward the most pertinent external stimuli and internal events. Individuals who have a tendency to experience negative emotional states (scoring high on measures of neuroticism) show increased activity in the right anterior insula during decision-making, even if the decision has already been made. This atypically high activity in the right anterior insula is thought to contribute to the experience of negative and worrisome feelings. In major depressive disorder, anxiety is often part of the emotional state that characterizes depression.
See also
Epigenetics of depression
The Mind Fixers
Anne Harrington § Mind Fixers: Psychiatry's Troubled Search for the Biology of Mental Illness
References
Further reading
Major depressive disorder
Mood disorders
Anatomy
Causes of mental disorders
Biological psychiatry
Behavioral neuroscience
sv:Depression#Biologiska hypoteser | Biology of depression | Biology | 10,073 |
1,862,934 | https://en.wikipedia.org/wiki/Inosinic%20acid | Inosinic acid or inosine monophosphate (IMP) is a nucleotide (that is, a nucleoside monophosphate). Widely used as a flavor enhancer, it is typically obtained from chicken byproducts or other meat industry waste. Inosinic acid is important in metabolism. It is the ribonucleotide of hypoxanthine and the first nucleotide formed during the synthesis of purine nucleotides. It can also be formed by the deamination of adenosine monophosphate by AMP deaminase. It can be hydrolysed to inosine.
The enzyme deoxyribonucleoside triphosphate pyrophosphohydrolase, encoded by YJR069C in Saccharomyces cerevisiae and containing (d)ITPase and (d)XTPase activities, hydrolyzes inosine triphosphate (ITP) releasing pyrophosphate and IMP.
Important derivatives of inosinic acid include the purine nucleotides found in nucleic acids and adenosine triphosphate, which is used to store chemical energy in muscle and other tissues.
In the food industry, inosinic acid and its salts such as disodium inosinate are used as flavor enhancers. It is known as E number reference E630.
Inosinate synthesis
Inosinate synthesis is complex, beginning with 5-phosphoribosyl-1-pyrophosphate (PRPP). The enzymes taking part in IMP synthesis constitute a multienzyme complex in the cell. Evidence demonstrates that several of these are multifunctional enzymes, some of which catalyze non-sequential steps in the pathway.
Synthesis of other purine nucleotides
Within a few steps, inosinate is converted into AMP or GMP, both of which are RNA nucleotides. AMP differs from inosinate by the replacement of IMP's carbon-6 carbonyl with an amino group; the interconversion of AMP and IMP occurs as part of the purine nucleotide cycle. GMP is formed by oxidation of inosinate to xanthylate (XMP), followed by the addition of an amino group at carbon 2. The hydrogen acceptor in the inosinate oxidation is NAD+. Finally, carbon 2 gains its amino group at the cost of one ATP molecule (which becomes AMP + 2 Pi). While AMP synthesis requires GTP, GMP synthesis uses ATP. That difference offers an important possibility for regulation.
Regulation of purine nucleotide biosynthesis
Inosinate and many other molecules inhibit the synthesis of 5-phosphoribosylamine from 5-phosphoribosyl-1-pyrophosphate (PRPP) by inhibiting the enzyme that catalyzes the reaction, glutamine-5-phosphoribosyl-1-pyrophosphate amidotransferase. In other words, when levels of inosinate are high, the amidotransferase is inhibited and, as a consequence, less inosinate is produced. As a result, adenylate and guanylate are also not produced, and RNA synthesis cannot proceed for lack of these two important RNA nucleotides.
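This feedback loop can be illustrated with a toy kinetic model. The sketch below is a minimal simulation, not a quantitative model of purine metabolism: the rate law, parameter values, and units are all hypothetical, chosen only to show how end-product inhibition of the amidotransferase drives inosinate toward a stable steady state.

# Toy model of end-product (feedback) inhibition of IMP synthesis.
# All rate laws and parameter values are hypothetical, for illustration only.

def simulate(v_max=1.0, k_i=0.5, k_deg=0.2, dt=0.01, steps=5000):
    """Euler integration of d[IMP]/dt = synthesis - degradation,
    where the synthesis step is inhibited by IMP itself."""
    imp = 0.0
    for _ in range(steps):
        synthesis = v_max / (1.0 + imp / k_i)  # amidotransferase inhibited by IMP
        degradation = k_deg * imp              # first-order consumption of IMP
        imp += dt * (synthesis - degradation)
    return imp

# Stronger feedback (smaller k_i) settles at a lower steady-state level.
print(simulate(k_i=0.5))   # stronger feedback
print(simulate(k_i=5.0))   # weaker feedback, higher steady state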
Applications
Inosinic acid can be converted into various salts including disodium inosinate (E631), dipotassium inosinate (E632), and calcium inosinate (E633). These three compounds are used as flavor enhancers for the basic taste umami or savoriness with a comparatively high effectiveness. They are mostly used in soups, sauces, and seasonings for the intensification and balance of the flavor of meat.
See also
References
Further reading
Berg, Jeremy M. Bioquímica. 6th ed. Barcelona: Editorial Reverté, 2007.
Nelson, David L. Principles of Biochemistry. 4th ed. New York: W.H. Freeman and Company, 2005.
Food additives
Organic acids
Flavor enhancers
Nucleotides
Purines
E-number additives | Inosinic acid | Chemistry | 900 |
5,047,320 | https://en.wikipedia.org/wiki/K-W%20Line | The Koningshooikt–Wavre Line, abbreviated to KW Line and often known as the Dyle Line after the Dijle (Dyle) river, was a fortified line of defence prepared by the Belgian Army between Koningshooikt (Province of Antwerp) and Wavre (Province of Brabant) which was intended to protect Brussels from a possible German invasion. Construction on the KW Line began in September 1939, after World War II had begun but while Belgium itself remained a neutral state. It was subsequently extended southwards from Wavre towards Namur (Province of Namur). The line consisted of bunkers, anti-tank ditches, and barricades, including so-called Cointet-elements, and played a key role in Allied strategy during the German invasion of Belgium in May 1940. However, its role in the actual fighting was ultimately minimal. In 2009, an inventory of surviving emplacements was begun.
Background
In October 1936, Belgium abandoned its previous military alliance with France, fearful that after the German remilitarization of the Rhineland had brought German forces to its borders, it would be drawn into a Franco-German war. It was recognised that a policy of neutrality could only be viable if Belgium possessed strong defences. Existing defence plans were therefore pursued more vigorously. A strong defence line was created along the Albert Canal, running eastwards from Antwerp to the modern fortress of Eben Emael and from there to the southwest along the River Meuse. Although this was a powerful position, it had been understood from the outset that Belgium could not resist a major German invasion alone and that, despite its neutrality, it would need to cooperate with the French army to block a German advance. Eben Emael was too close to Germany, and the salient it created too large, to make it practical for supporting French forces to occupy such a forward position. For this reason a shorter defence line was planned between Antwerp and Namur in the south. This was to be the "line of main resistance", while the Albert Canal–Meuse line would function as a "covering line", delaying the enemy long enough for Allied help to arrive and entrench itself. However, the Belgian government was hesitant to start any large-scale construction along this line, as it could be seen by Germany as a breach of neutrality as well as an invitation to occupy the area to the east of it.
After the outbreak of the Second World War, Belgium was under considerable pressure from France and the United Kingdom to take their side even before a German invasion, partly because it was feared Belgian defences would collapse before allied troops had the time to reinforce them. To assuage such fears, the Belgian government hastily ordered the improvement of the Antwerp-Namur position in September 1939. The construction work was largely done by conscripts mobilised in September but also by about a thousand civilian workers hired from private contractors.
Location
The line in the north was anchored on the old fortress belt of Antwerp, allowing the existing Fort Broechem, Fort Lier and Fort Koningshooikt to be used. From the latter fort, the line continued southwards to Haacht, protecting the city of Mechelen. From Haacht it ran to Leuven, a city that was incorporated into the front line. Originally this had not been the case: between November 1939 and April 1940, the line in this sector was shifted eastward four times. For the next twenty kilometres, the line followed the course of the River Dyle until it reached Wavre. The area between Koningshooikt and Wavre represented the main position where the Belgian army was expected to fight, and most funds were directed to its improvement. This explains the name Koningshooikt–Wavre line.
From February 1940 onwards, the line was extended to the south in the direction of Namur, to close the so-called "Gembloux Gap" between the Dyle and the Meuse. A branch running westwards from Wavre was also created, passing through Waterloo, Halle and Kester and ending in Ninove. This covered the southern approaches to the Belgian capital, Brussels. The extension was more symbolic than real, containing only thirty-eight pillboxes and an anti-tank line made out of Cointet-elements, and it served to counter German allegations that the K-W line violated Belgian neutrality by being directed against Germany: the western branch ostensibly served to block a possible French advance.
Structure
The K-W line was not a massive fortification line with modern forts sheltering the artillery, like the French Maginot Line. There were no permanent fortress garrisons occupying it. In case of war, regular infantry divisions had to entrench themselves along the line after having been withdrawn from the Albert Canal-Meuse covering line. Construction work was aimed at preparing this entrenchment by providing a pre-existing infrastructure, consisting of a telephone network, command bunkers, pillboxes for the machine guns, anti-tank obstacles and inundations. Little was done in the way of digging trenches, placing barbed wire entanglements or laying minefields beforehand.
About four hundred concrete pillboxes were constructed. Most of these were placed along a forward line of defence, directly behind main obstacles. To provide some depth, a second line of pillboxes was present about a kilometre to the west. However, where inundations were located in front of the forward line, these were considered to offer enough protection that a second pillbox line could be dispensed with. In front of Mechelen, the river Dyle curved to the west. This was seen as an especially vulnerable spot, and between the Dyle and the River Nete a third line of pillboxes was constructed covering the eastern approaches of Mechelen. Multiple lines were also present west of Leuven, due to the many changes in the construction plans. The pillboxes were able to withstand hits by the German 15 cm sFH 18, the heaviest howitzer German infantry divisions were equipped with. They were not of a uniform construction; each pillbox was tailored to the specific terrain conditions of its location, avoiding dead angles and often allowing enfilading fire. To this end they contained up to three chambers, in each of which a single machine gun could be placed. The machine guns were not permanent fixtures; the troops retreating from the covering line were supposed to bring their MG 08s, called "Maxims", along.
Thousands of Cointet-elements were installed on the K-W Line between the village of Koningshooikt and the city of Wavre to act as the main line of defence against a possible German armoured invasion through the heartland of Belgium, forming a long iron wall. Cointet-elements were also used as an anti-tank line in a side branch of the K-W Line, which was meant to defend the southern approaches to Brussels. This line branched off the main line at Wavre and ran from there to Halle and then to Ninove, where it ended on the banks of the Dender. The Cointet-elements were placed next to each other in a zig-zag pattern and connected with steel cables. Near main roads they were fixed to heavy concrete pillars set into the ground, to allow local traffic passage. In May 1940, however, due to a relocation programme, the elements did not form a continuous line and thus were easily bypassed by the 3rd and 4th Panzer Divisions.
See also
Grebbe Line
Maginot Line
Peel-Raam Line
Schuster Line
Manstein plan
References
Further reading
External links
World War II defensive lines
Battle of France
Military history of Belgium during World War II
World War II sites in Belgium
1939 in Belgium
Belgian neutrality in World War II
K-W Line | K-W Line | Engineering | 1,578 |
15,217,276 | https://en.wikipedia.org/wiki/NKX2-2 | Homeobox protein Nkx-2.2 is a protein that in humans is encoded by the NKX2-2 gene.
Homeobox protein Nkx-2.2 contains a homeobox domain and may be involved in the morphogenesis of the central nervous system. This gene is found on chromosome 20 near NKX2-4, and these two genes appear to be duplicated on chromosome 14 in the form of TITF1 and NKX2-8. The encoded protein is likely to be a nuclear transcription factor.
The expression of Nkx2-2 is regulated by an antisense RNA called Nkx2-2as.
In the developing spinal cord, Nkx-2.2 regulates IRX3 thereby contributing to the proper differentiation of the ventral horn neurons.
References
Further reading
Transcription factors | NKX2-2 | Chemistry,Biology | 170 |
7,378,394 | https://en.wikipedia.org/wiki/Bilingual%20sign | A bilingual sign (or, by extension, a multilingual sign) is the representation on a panel (sign, usually a traffic sign, a safety sign, an informational sign) of texts in more than one language. The use of bilingual signs is usually reserved for situations where there is legally administered bilingualism (in bilingual regions or at national borders) or where there is a relevant tourist or commercial interest (airports, train stations, ports, border checkpoints, tourist attractions, international itineraries, international institutions, etc.). However, more informal uses of bilingual signs are often found on businesses in areas where there is a high degree of bilingualism, such as tourist venues, ethnic enclaves and historic neighborhoods. In addition, some signs feature synchronic digraphia, the use of multiple writing systems for a single language.
Bilingual signs are widely used in regions whose native languages do not use the Latin alphabet (although some countries like Spain or Poland use multilingual signs); such signs generally include transliteration of toponyms and optional translation of complementary texts (often into English). Beyond bilingualism, there is a general tendency toward the substitution of internationally standardized symbols and pictograms for text.
Around the world
The use of bilingual signs has experienced a remarkable expansion in recent years. This increase in bilingual signage has been paralleled by increases in international travel and a greater sensitivity to the needs of ethnic and linguistic minorities.
Europe
Bilingual signs arose in places like Belgium where, because of the cohabitation of Dutch-speaking and French-speaking communities (especially in the central part of the country near Brussels), bilingualism signaled a simple willingness to accommodate all citizens equally. As a result, all street signs in the Brussels-Capital Region are bilingual in Dutch and French.
Switzerland has several cantons (Bern, Fribourg, Valais and Graubünden) and towns (e.g. Biel/Bienne, Murten, Fribourg, Siders and Disentis/Mustér), where two, or in one case (Graubünden) even three languages have official status and therefore the signs are multilingual. With Biel/Bienne, both the German and the French name of the town are always officially written with the compound name; and similarly with Disentis/Mustér (German/Romansh).
Another example is the German-speaking South Tyrol, which was annexed to Italy during World War I and eventually became the focus of assimilation policies. In observance of international treaties, Italy was eventually compelled to acknowledge and accommodate its German-speaking citizens through the use of bilingual signs. The situation of the Slovene minority living in the provinces of Trieste, Gorizia and Udine is very different: only in recent years have bilingual signs become visible, and only in smaller villages. In the French-speaking Aosta Valley, official road and direction signs are usually in both languages, Italian and French.
In Greece, virtually all signs are bilingual, with the Greek text in yellow and the English in white. If a sign is in Greek only, an equivalent sign in English will often be situated nearby.
In Spain, bilingual signs in the local language and Spanish appear irregularly in the autonomous communities of Galicia, Basque Country, Navarre, Catalonia, Valencian Community and the Balearic Islands.
Bilingual signs are also used in the Republic of Ireland, with all roads, towns, important buildings etc. named in both the Irish and English languages. The Irish appears on the top of the sign (usually in italic text) with the English underneath. The exception to this is in Gaeltacht regions, where only Irish language signage tends to be used.
In Germany, the first bilingual German–Sorbian road and street signs, as well as city-limit signs and train station signs, were introduced in the 1950s in Lusatia. After reunification, bilingual city-limit signs were also adopted in some regions where Danish or Frisian are spoken. In Brandenburg and Saxony, German and Sorbian place names nowadays have to be shown in the same size, with the German names on top.
In Finland, multilingual signs appeared at the end of the 19th century. The signs were in the official languages Swedish, Finnish and, during that period, also Russian. After the independence of Finland, the signs became bilingual Finnish–Swedish in the official bilingual areas of the country and bilingual Finnish–Sami in the northern parts.
Bilingual signs are used in the United Kingdom. In Wales, Welsh and English are official languages and most road signs are bilingual. Until 2016, each local authority decided which language was shown first; since 2016, new signage has featured Welsh first. In Scotland, Scottish Gaelic is increasingly visible on road signs, not only in the north-west and on the islands but also on main primary routes. Railway station signs and signs on public buildings such as the Scottish Parliament are increasingly bilingual. In Northern Ireland, some signs in Irish and/or Ulster Scots are found. In Cornwall, some signs such as street names are found in English and Cornish; similarly, in the Isle of Man some signs appear in English and Manx Gaelic.
In parts of Slovenia, where languages other than Slovene are official (Italian in parts of Slovenian Istria and Hungarian in parts of Prekmurje), the law requires all official signs (including road signs) to be in both official languages. This regulation is not always strictly enforced, but nevertheless all road signs in these areas are bilingual.
In many regions of Poland bilingual signs are used: Polish and Ruthenian in Lemkivshchyna, Polish and German in Upper Silesia, Polish and Lithuanian in Puńsk commune and Polish and Kashubian in Pomerania.
European airports have signs that are generally bilingual with the local language and English, although there are significant variations between countries. In multilingual countries such as Belgium and Switzerland, airports generally have signs in three or four languages. Some airports, such as Amsterdam Airport Schiphol, are used primarily by international travellers, and choose to use monolingual English signs, even though they are located in a country whose native language is not English.
North America
The Government of Canada and the Province of New Brunswick are officially bilingual in English and French, so all signs issued or regulated by those governments are bilingual regardless of where they are located. Provincial road signs are also bilingual in French-designated areas of Manitoba and Ontario. Each local authority decides which language is shown first. In Ottawa, the national capital, the municipal government is officially bilingual so all municipal traffic signs and road markers are bilingual. Since airports are regulated by the federal government, most airports in Canada have bilingual signs in English and French.
In the Province of Nova Scotia, particularly on Cape Breton island, a number of place-name signs are bilingual in English and Scottish Gaelic.
Although Nunavut, an Inuit territory, is officially multi-lingual in English, French, Inuktitut and Inuinnaqtun, municipal road signs have remained in English only, other than stop signs. Some other road signs in various parts of Canada include other indigenous languages, such as the English/Squamish road sign in British Columbia shown here.
Quebec is officially monolingual in French, and the use of other languages is restricted under the Charter of the French Language. Commercial signs in Quebec are permitted to include text in languages other than French as long as French is "markedly predominant".
At places near the U.S.–Mexico border, some signs are bilingual in English and Spanish, and some signs near the U.S.–Canada border are bilingual in English and French. Additionally, large urban centers such as New York City, Chicago and others have bilingual and multilingual signage at major destinations. There are a few English and Russian bilingual signs in western Alaska. In Texas, some signs are required to be in English and Spanish. In Texas areas where there are large numbers of Spanish speakers, many official signs as well as unofficial signs (e.g. stores, churches, billboards) are written in Spanish, some bilingual with English, but others in Spanish only. In and around New Britain, Connecticut, it is not uncommon to see signs in Spanish and Polish as well as English.
In 2016, Port Angeles, Washington, installed bilingual signs in English and the indigenous Klallam languages to preserve and revitalize the area's Klallam culture.
New York City's Chinatown has English–Chinese signs. Seattle's Chinatown/Japantown has English–Chinese and English–Japanese signs.
Asia
In the People's Republic of China, bilingual signs are mandated by the government in autonomous regions where a minority language shares official status with Chinese. In Xinjiang, signs are in Uyghur and Chinese; in Tibet, signs are in Tibetan and Chinese; and in Inner Mongolia, signs are in Mongolian (written in the classical alphabet) and Chinese. In Guangxi, the majority of signs are in Chinese, even though the Zhuang language is official in the region. Smaller autonomous areas also have similar policies. Signs in the Yanbian Korean Autonomous Prefecture, which borders North Korea, are in Korean and Chinese. Many areas of Qinghai province mandate bilingual signs in Tibetan and Chinese. In Beijing and Shanghai, due to international exposure of the 2008 Summer Olympics and Expo 2010, almost all city traffic signs are now bilingual with Chinese and English (during the Olympics, signs on Olympic venues were also in French). English use in signs is growing in other major cities as well.
In Hong Kong and Macau, government signs are normally bilingual with Traditional Chinese and English or Portuguese, respectively. This is because, in addition to Chinese, English and Portuguese are official languages of Hong Kong and Macau, respectively. Trilingual road signs in English, Portuguese and traditional Chinese are seen in some newly developed areas of Macau.
In Israel, road signs are often trilingual, in Hebrew, Arabic and English.
In India, road signs are often multilingual, in Hindi, English and other regional languages. In addition, signs in Hindustani often feature synchronic digraphia, with an Urdu literary standard written in Arabic script and a High Hindi standard written in Devanagari.
In Sri Lanka, official road signs are in Sinhala, Tamil and English.
In Turkey bilingual (Turkish and Kurdish) village signs are used in Eastern Anatolia Region. Airports and touristic areas include an English name after the Turkish name.
In the Gulf states such as Saudi Arabia, road signs are often bilingual, in English and Arabic. Other signs (e.g. building signs) may also be displayed in English and Arabic.
Gallery
See also
Bilingualism in Canada
Gaelic road signs in Scotland
Linguistic landscape
List of multilingual countries and regions
Road signs in the Republic of Ireland
Rules of the road
Bibliography
Francescato, G. Le aree bilingui e le regioni di confine. Angeli
Baldacci, O. Geografia e toponomastica. S.G.I.
Baines, Phil; Dixon, Catherine. Signs. UK: Laurence King Co., 2004 (Italian translation: Segnali: grafica urbana e territoriale. Modena: Logos, 2004)
Boudreau, A. Dubois, L. Bulot, T. Ledegen, G. Signalétiques et signalisations linguistiques et langagières des espaces de ville (configurations et enjeux sociolinguistiques). Revue de l'Université de Moncton Vol. 36 n.1. Moncton (Nouveau-Brunswick, Canada): Université de Moncton, 2005.
Bhatia, Tej K. Ritchie, William C. Handbook of Bilingualism. Oxford: Blackwell Publishing, 2006.
Shohamy, E. & Gorter, D. (Eds.), Linguistic Landscape: Expanding the Scenery. London: Routledge, 2009.
Shohamy, E., Ben-Rafael, E., & Barni, M. (Eds.) Linguistic Landscape and the City. Bristol: Multilingual Matters, 2010.
References
Traffic signs
Transport safety
Concepts in language policy
Bilingualism | Bilingual sign | Physics | 2,481 |
65,521,876 | https://en.wikipedia.org/wiki/Alice%20Marwick | Alice E. Marwick is a communication scholar, academic, and author, who currently works as an Associate Professor in the Communication department and Principal Researcher at the Center for Information, Technology and Public Life at the University of North Carolina at Chapel Hill, and an affiliated researcher with the Data and Society Research Institute. Marwick has written for publications such as the New York Times, and the Guardian. Her works include the examination of politics, race, social media and gender. She has been a keynote speaker for various universities throughout the United States.
Education
Marwick graduated with her political science and women's studies bachelor's degree from Wellesley College in Massachusetts in 1998. She received her Master of Arts degree in communication from the University of Washington in 2005. She received her PhD in the Department of Media, Culture and Communication from New York University in 2010.
Written works
She has written for the New York Review of Books. Marwick has authored two books: Status Update: Celebrity, Publicity and Branding in the Social Media Age (2013) and The Private Is Political: Networked Privacy and Social Media (2023). She also co-edited The Sage Handbook of Social Media (2016) with co-editors, Jean Burgess and Thomas Poell.
In Status Update, Marwick draws on ethnographic data from people within the San Francisco tech scene and examines how people use social media to obtain attention and popularity in order to reach a higher social standing. A review from the American Library Association says that the book is important because it brings a needed female perspective to a technology world marked by misogyny. Status Update also includes an extensive examination of the phenomenon of Internet celebrity.
In The Private is Political, Marwick develops the theory of “networked privacy” to account for the ways that privacy can be compromised in—and across—social media platforms. The book highlights the social justice implications of privacy loss, particularly for marginalized groups.
In the Sage Handbook of Social Media, the authors emphasize the importance of social media within contemporary societies, present its history to scholars and students, and examine the use of social media within multiple fields, from marketing to protest to political campaigns.
Other works
Marwick was a postdoctoral researcher at Microsoft Research New England. She was the former director of the McGannon Communication Research Center at Fordham University.
She has participated in podcasts examining far-right extremism, online misinformation, and the social media habits of the liberal left and the conservative right.
She has also written for publications such as Public Culture and New Media and Society.
Awards and honors
She is a recipient of a 2020 Andrew Carnegie Fellowship, an award that provides support to top researchers in the humanities. In 2017, she was honored as a Global Thinker by Foreign Policy magazine for her work examining the social aspects of fake news.
See also
Context collapse
References
Living people
Wellesley College alumni
University of Washington alumni
New York University alumni
University of North Carolina at Chapel Hill faculty
Communication scholars
Science and technology studies scholars
Year of birth missing (living people) | Alice Marwick | Technology | 618 |
43,147,750 | https://en.wikipedia.org/wiki/Internet%20metaphors | Internet metaphors provide users and researchers of the Internet a structure for understanding and communicating its various functions, uses, and experiences. An advantage of employing metaphors is that they permit individuals to visualize an abstract concept or phenomenon with which they have limited experience by comparing it with a concrete, well-understood concept such as physical movement through space. Metaphors to describe the Internet have been utilized since its creation and developed out of the need for the Internet to be understood by everyone when the goals and parameters of the Internet were still unclear. Metaphors helped to overcome the problems of the invisibility and intangibility of the Internet's infrastructure and to fill linguistic gaps where no literal expressions existed.
"Highways, webs, clouds, matrices, frontiers, railroads, tidal waves, libraries, shopping malls, and village squares are all examples of metaphors that have been used in discussions of the Internet." Over time these metaphors have become embedded in cultural communications, subconsciously shaping the cognitive frameworks and perceptions of users who guide the Internet's future development. Popular metaphors may also reflect the intentions of Internet designers or the views of government officials. Internet researchers tend to agree that popular metaphors should be re-examined often to determine if they accurately reflect the realities of the Internet, but many disagree on which metaphors are worth keeping and which ones should be left behind.
Overview
Internet metaphors guide future action and perception of the Internet's capabilities on an individual and societal level. Internet metaphors are contestable and sometimes may present political, educational, and cognitive issues. Tensions between producer and user, commercial and non-commercial interests, and uncertainty regarding privacy all influence the shape these metaphors take.
Common Internet metaphors such as the information superhighway are often criticized for failing to adequately reflect the reality of the Internet, as they emphasize the speed of information transmission over the communal and relationship-building aspects of the Internet. Internet researchers from a variety of disciplines are engaged in the analysis of metaphors across many domains in order to reveal their impact on user perception and determine which metaphors are best suited for conceptualizing the Internet. Results of this research have become the focus of a popular debate on which metaphors should be applied in political, educational, and commercial settings, as well as which aspects of the Internet remain unaccounted for with current metaphors, limiting the scope of users' understanding.
Metaphors of the Internet often reveal the intentions of designers and industry spokespeople. "For instance, those who use metaphors of consumption and shopping malls will devote resources to developing secure exchange mechanisms. Broadcasting metaphors carry with them assumptions about the nature of interactions between audiences and content providers that are more passive than those suggested by interactive game metaphors and applications. Computer security experts deploy metaphors that invoke fear, anxiety, and apocalyptic threat" (Wyatt, 2004, p. 244). The extent to which the Internet is understood across individuals and groups determines their ability to navigate and build Web sites and social networks, attend online school, send e-mail, and a variety of other functions. Internet metaphors provide a comprehensive picture of the Internet as a whole as well as describe and explain the various tools, purposes, and protocols that regulate the use of these communication technologies.
Without the use of metaphors the concept of the Internet is abstract and its infrastructure difficult to comprehend. When it was introduced, the Internet created a linguistic gap as no literal expressions existed to define its functions and properties. Internet metaphors arose out of this predicament so that it could be adequately described and explained to the public. Essentially all language now used to communicate about the Internet is of a metaphorical nature, although users are often unaware of this reality because it is embedded in a cultural context that is widely accepted. There are several types of metaphors serving various purposes, ranging from describing the nature of online relationships and modeling the Internet visually to capturing the specific functions of the Internet as a tool. Each metaphor has implications for the experience and understanding of the Internet by its users and tends to emphasize some aspects of the Internet over others. Some metaphors emphasize space (Matlock, Castro, Fleming, Gann, & Maglio, 2014).
Popular culture
Common recurring themes regarding the Internet appear in popular media and reflect pervasive cultural attitudes and perceptions. Although other models and constructed metaphors of the Internet found in scholarly research and theoretical frameworks may be more accurate sources on the effects of the Internet, mass media messages in popular culture are more likely to influence how people think about and interact with the Internet.
The very first metaphor to describe the Internet was the World Wide Web, proposed in 1989. However, uncertainty surrounding the structure and properties of the Internet was apparent in the newspapers of the 1990s, which presented a vast array of contradictory visual models to explain the Internet. Spatial constructs were utilized to make the Internet appear as a tangible entity placed within a familiar geographical context. A popular metaphor adopted around the same time was cyberspace, coined by William Gibson in his novel Neuromancer to describe the world of computers and the society that gathers around them.
Howard Rheingold, an Internet enthusiast of the 1990s, propagated the metaphor of virtual communities and offered a vivid description of the Internet as "...a place for conversation or publication, like a giant coffee-house with a thousand rooms; it is also a world-wide digital version of the Speaker's Corner in London's Hyde Park, an unedited collection of letters to the editor, a floating flea market, a huge vanity publisher, and a collection of every odd-special interest group in the world" (Rheingold 1993, p. 130).
In 1991, Al Gore's choice to use the information superhighway as a metaphor shifted perceptions of the Internet as a communal enterprise to an economic model that emphasized the speed of information transmission. While this metaphor can still be found in popular culture, it has generally been dropped in favor of other metaphors due to its limited interpretation of other aspects of the Internet such as social networks. The most common types of metaphors in usage today relate to either social or functional aspects of the Internet or representations of its infrastructure through visual metaphors and models.
Social metaphors
Internet metaphors frequently arise from social exchanges and processes that occur online and incorporate common terms that describe offline social activities and realities. These metaphors often point to the fundamental elements that make up social interactions, even though online interactions differ in significant ways from face-to-face communication. Therefore, social metaphors tend to communicate more about the values of society rather than the technology of the Internet itself.
Metaphors such as the electronic neighborhood and virtual community point to ways in which individuals connect to others and build relationships by joining a social network. Global village is another metaphor that evokes the imagery of closeness and interconnectedness that might be found in a small village, but is applied to the worldwide community of Internet users. However, the global village metaphor has been criticized for suggesting that the entire world is connected by the Internet, as the continued existence of social divides prevents many individuals from accessing the Internet.
The electronic frontier metaphor conceptualizes the Internet as a vast unexplored territory, a source of new resources, and a place to forge new social and business connections. Similar to the American ideology of the Western Frontier, the electronic frontier invokes the image of a better future to come through new opportunities afforded by the Internet. The Electronic Frontier Foundation is a non-profit digital rights group that adopted the use of this metaphor to denote their dedication to the protection of personal freedoms and fair use within the digital landscape. Social metaphors and their pervasive influence indicate the increasing importance placed on social interaction on the Internet.
Functional metaphors
Functional metaphors of the Internet shape our understanding of the medium itself and give us clues as to how we should actually use the Internet and interpret its infrastructure for design and policy making. These exist at the level of the Internet as a whole, at the level of a website, and the level of individual pages. The majority of these types of metaphors are based on the concept of various spaces and physical places; therefore, most are considered spatial metaphors. However, this aspect should not be considered the only defining feature of a functional metaphor as social metaphors are often spatial in nature.
Cyberspace is the most widely used spatial metaphor of the Internet and the implications of its use can be seen in the Oxford English Dictionary definition, which denotes cyberspace as a space within whose boundaries digital communications take place. The implications of this spatial metaphor in discourse on law can be seen in instances where the application of traditional laws governing real property are applied to Internet spaces. However, arguments against this type of ruling have claimed that the Internet is a borderless space, which should not be subject to the laws applied to places. Others have argued that the Internet is in fact a real space not sealed from the real world and can be zoned, trespassed upon, or divided up into holdings like real property.
Other functional metaphors are based on travel within space, such as surfing the Net, which suggests that the Internet is similar to an ocean. Mark McCahill coined 'surfing the internet' in an analogy with browsing a library shelf as an information space. Websites indicate components of a space, which are static and fixed, whereas webpages suggest pages of a book. Similarly, focal points of the Internet structure are called nodes. Home pages, chat rooms, windows, and the idea that one can jump from one page to the next also invoke spatial imagery that guide the functions that users perform on the Internet. Other metaphors refer to the Internet as another dimension beyond typical spaces, such as portals and gateways, which refer to access and communication functions. Firewalls invoke the image of physically blocking the incoming of information such as viruses and pop-up ads.
Designers of computer systems often use spatial metaphors as a way of controlling the complexity of interfaces. Designers create actions, procedures, and concepts of systems based on similar actions, procedures, and concepts of other domains such as physical spaces so that they will be familiar to users. In designing hypertext, a system that links topics on a screen to related information, navigational metaphors such as landmarks, routes, and way-finding have often been implemented for users' ease of understanding how hypertext functions.
Visual metaphors
Visual metaphors are popular in conceptualizing the Internet and are often deployed in commercial promotions through visual media and imagery. The most common visual metaphor is a network of wires with nodes and route lines plotted on a geographically based map. However, maps of Internet infrastructure produced for network marketing are rarely based on actual pathways of wires and cable on the ground, but are instead based on circuit diagrams similar to those seen on subway maps. The globe, or the Earth viewed from space, with network arcs of data flow wrapped around it, is another dominant metaphor for the Internet in Western contexts and is connected with the metaphor of the global village.
Many abstract visual metaphors based on organic structures and patterns are found in literature on the Internet's infrastructure. Often, these metaphors are used as a visual shorthand in explanations as they allow one to refer to the Internet as a definite object without having to explain the intricate details of its functioning. Clouds are the most common of abstract metaphors employed for this purpose in cloud computing and have been used since the creation of the Internet. Other abstract metaphors of the Internet draw on the fractal branching of trees and leaves, and the lattices of coral and webs, while others are based on the aesthetics of astronomy such as gas nebulas, and star clusters.
Technical methods such as algorithms are often used to create huge, complex graphs or maps of raw data from networks and the topology of connections. This process typically yields visual representations of the Internet that are elaborate and visually striking, resembling organic structures. These artistic, abstract representations of the Internet have been featured in art galleries, sold as wall posters, used on book covers, and claimed by many fans to be a picture of the whole Internet. However, there are no instructions on how these images may be interpreted. The main function of these representations has sometimes been explained as a metaphor for the complexity of the Internet.
See also
Series of tubes
References
Cognitive linguistics
Internet terminology
Metaphors
Metaphors by type
Social constructionism | Internet metaphors | Technology | 2,520 |
690,736 | https://en.wikipedia.org/wiki/Complete%20coloring | In graph theory, a complete coloring is a (proper) vertex coloring in which every pair of colors appears on at least one pair of adjacent vertices. Equivalently, a complete coloring is minimal in the sense that it cannot be transformed into a proper coloring with fewer colors by merging pairs of color classes. The achromatic number ψ(G) of a graph G is the maximum number of colors possible in any complete coloring of G.
A complete coloring is the opposite of a harmonious coloring, which requires every pair of colors to appear on at most one pair of adjacent vertices.
Complexity theory
Finding ψ(G) is an optimization problem. The decision problem for complete coloring can be phrased as:
INSTANCE: a graph G = (V, E) and a positive integer k
QUESTION: does there exist a partition of V into k or more disjoint sets V1, V2, …, Vk such that each Vi is an independent set of G, and such that for each pair of distinct sets Vi, Vj, the union Vi ∪ Vj is not an independent set?
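For small graphs, this definition can be checked directly. The following Python sketch is a brute-force illustration of the definition above, exponential in the number of vertices and therefore usable only on tiny graphs; the example graph is chosen purely for illustration.

# Brute-force illustration of the definition above; exponential in the
# number of vertices, so usable only on tiny example graphs.
from itertools import product

def is_complete_coloring(adj, coloring):
    """Proper coloring in which every pair of used colors meets on an edge."""
    if any(coloring[u] == coloring[v] for u in adj for v in adj[u]):
        return False  # not a proper coloring
    colors = set(coloring.values())
    seen = {(coloring[u], coloring[v]) for u in adj for v in adj[u]}
    return all((a, b) in seen for a in colors for b in colors if a != b)

def achromatic_number(adj):
    """Maximum number of colors over all complete colorings."""
    vertices = list(adj)
    best = 0
    for assignment in product(range(len(vertices)), repeat=len(vertices)):
        coloring = dict(zip(vertices, assignment))
        if is_complete_coloring(adj, coloring):
            best = max(best, len(set(assignment)))
    return best

# Path on four vertices: the coloring 1-2-3-1 is proper and complete,
# and no complete coloring of this path can use four colors.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(achromatic_number(path4))  # -> 3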
Determining the achromatic number is NP-hard; determining if it is greater than a given number is NP-complete, as shown by Yannakakis and Gavril in 1978 by transformation from the minimum maximal matching problem.
Note that any coloring of a graph with the minimum number of colors must be a complete coloring, so minimizing the number of colors in a complete coloring is just a restatement of the standard graph coloring problem.
Algorithms
For any fixed k, it is possible to determine whether the achromatic number of a given graph is at least k, in linear time.
The optimization problem permits approximation and is approximable within an approximation ratio of O(|V|/√(log |V|)).
Special classes of graphs
The NP-completeness of the achromatic number problem holds also for some special classes of graphs:
bipartite graphs,
complements of bipartite graphs (that is, graphs having no independent set of more than two vertices), cographs and interval graphs, and even for trees.
For complements of trees, the achromatic number can be computed in polynomial time. For trees, it can be approximated to within a constant factor.
The achromatic number of an n-dimensional hypercube graph is known to be proportional to √(n·2^n), but the constant of proportionality is not known precisely.
References
External links
A compendium of NP optimization problems
A Bibliography of Harmonious Colourings and Achromatic Number by Keith Edwards
Graph coloring
NP-complete problems | Complete coloring | Mathematics | 481 |
12,223,381 | https://en.wikipedia.org/wiki/VistA%20Imaging | VistA Imaging is an FDA-listed Image Management system used in the Department of Veterans Affairs healthcare facilities nationwide. It is one of the most widely used image management systems in routine healthcare use, and is used to manage many different varieties of images associated with a patient's medical record. The system was started as a research project by Ruth Dayhoff in 1986 and was formally launched in 1991.
Hardware requirements
The VistA Imaging System uses hardware components, including network servers, to provide short- and long-term storage. It uses a DICOM gateway system to communicate with commercial Picture Archiving and Communication Systems (PACS) and with modalities such as CT, MR, and computed radiography (x-ray) devices for image capture. It utilizes a background processor for moving images to the proper storage device and for managing storage space.
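For readers unfamiliar with DICOM, the kind of object such a gateway receives from a modality can be inspected with the open-source pydicom library. The sketch below is a generic illustration, not part of VistA Imaging itself; the file name is hypothetical, and the header fields shown are standard DICOM attributes.

# Generic illustration of inspecting a DICOM object such as a CT or CR
# modality would send to a gateway; not VistA Imaging code.
# Requires the open-source pydicom package (and numpy for pixel data).
import pydicom

ds = pydicom.dcmread("example_ct_slice.dcm")  # hypothetical file name

# Standard DICOM header elements used to route and index an image.
print("Modality:  ", ds.Modality)    # e.g. 'CT', 'MR', 'CR'
print("Patient ID:", ds.PatientID)
print("Study date:", ds.StudyDate)

# The image itself, decoded to a NumPy array.
print("Image shape:", ds.pixel_array.shape)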
Types of data managed
The system not only manages radiologic images, but also is able to capture and manage EKGs, pathology images, gastroenterology (endoscopic) images, laparoscopic images, scanned paperwork, or essentially any type of health care image.
Integration with Electronic Health Record systems
VistA Imaging is currently integrated into the VistA EMR (electronic medical record) system used nationwide in Department of Veterans Affairs hospitals. This integration provides more efficient retrieval of images. It has also been used as a separate software package and can be used with EHRs other than VistA.
VistA Imaging now connects to a nationwide backbone that allows clinicians to access the 350 million images stored in the VA system via Remote Image View software.
The VA has developed interfaces for more than 250 medical devices in VistA Imaging, the images from which can be accessed through the desktop VistA Imaging Viewer. The Department of Defense will use the VistA Imaging Viewer to enhance its own EHR.
Usage in a National Network of Healthcare Records
As part of the US national mandate to co-ordinate care between the Department of Defense and the VA, VistA Imaging is forming a cornerstone of the effort to exchange medical imagery between the two systems. "When soldiers come back from Iraq and Afghanistan and eventually enter the VA system, images will be able to move from DOD to VA seamlessly." Eventually, DOD and VA should be able to share all image file types from all sites. Additional enhancements to VistA Imaging include development of a central archive for all VA images (whether acquired through VistA or a commercial system) and new indexing and search capabilities.
Availability
The software for VistA Imaging has been made available through the Freedom of Information Act, so it is in the public domain. Due to its designation as a medical device, however, it cannot be designated as free open-source software and therefore cannot be altered or implemented without FDA approval.
Although it can be used in healthcare facilities that are outside the Department of Veterans Affairs, this is possible only if the proprietary modules that have been integrated into it are also licensed and implementation is registered with the FDA. This has effectively limited its use to government institutions who have licensed the proprietary modules.
The source code can be downloaded from the OSEHRA VistA-M.git tree.
Proprietary modules required
VistA Imaging uses proprietary modules not in the public domain. This makes its public domain use limited.
Information retrieval after a natural disaster
The VistA Imaging system was robust enough to be restored after Hurricane Katrina damaged the data facility at the New Orleans VA. This type of backup proved superior to a paper record system.
References
External links
United States Department of Veterans Affairs
Health care software
Medical imaging
Nuclear medicine
Endoscopy
Medical equipment
Public-domain software
DICOM software | VistA Imaging | Biology | 724 |
21,743,174 | https://en.wikipedia.org/wiki/Double%20scaling%20limit | In theoretical physics, a double scaling limit is a limit in which the coupling constant is sent to zero while another quantity is sent to zero or infinity at the same moment.
The adjective "double" is a kind of misnomer because the procedure represents an ordinary scaling. However, the adjective is meant to emphasize that two parameters are simultaneously approaching singular values.
The double scaling limit is often applied to matrix models, string theory, and other theories to obtain their simplified versions.
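A toy numerical illustration of such a correlated limit (not the matrix-model construction itself): send a parameter g to zero while N grows without bound, holding the combination x = gN fixed. The quantity (1 − g)^N then converges to e^(−x), a function of the scaling variable alone.

# Toy correlated limit: g -> 0 and N -> infinity with x = g*N held fixed.
# The sequence (1 - g)**N converges to exp(-x), a function of the
# scaling variable alone. Purely illustrative; not a matrix-model result.
import math

x = 2.0  # the combination held fixed in the limit
for N in (10, 100, 1000, 10000):
    g = x / N                # coupling sent to zero as N grows
    print(N, (1 - g) ** N)   # approaches exp(-2) ≈ 0.1353

print("limit:", math.exp(-x))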
Theoretical physics | Double scaling limit | Physics | 98 |
2,003,273 | https://en.wikipedia.org/wiki/CCNA | CCNA (Cisco Certified Network Associate) is an entry-level information technology (IT) certification offered by Cisco Systems. CCNA certification is widely recognized in the IT industry as the foundational step for careers in IT positions and networking roles.
Cisco exams routinely change in response to evolving IT trends. In 2020, Cisco announced an update to its certification program that "consolidated and updated associate-level training and certification." Cisco consolidated the previously separate types of Cisco Certified Network Associate certifications into a single, general CCNA certification.
The exam content covers proprietary technology such as Cisco IOS and its associated command-line interface commands. Cisco, along with third-party learning partners, offers multiple training methods to achieve certification. Training methods include virtual classroom, in-person classroom, and book-based learning. Free alternatives are also available, such as community-sourced practice exams and YouTube video lectures.
Exam
To achieve a CCNA certification, candidates must achieve a passing score on a proctored Cisco exam No. 200-301. After completion of the exam, candidates receive a score report along with a score breakdown by exam section and the score for the given exam.
The exam tests a candidate's knowledge and the skills required to install, operate, and troubleshoot the devices of a small to medium-size enterprise branch network. The exam covers a broad range of fundamentals, including network fundamentals, network access, IP connectivity, IP services, security fundamentals, automation, and programmability.
Prerequisites
There are no prerequisites to take the CCNA certification exam. However, if the learning curve is too steep, another starting point of Cisco networking qualifications is the CCST (Cisco Certified Support Technician) in Networking, IT Support, or Cybersecurity.
Expiry
The CCNA Certification expires after three years. Renewal requires certification holders to register for and pass the same or higher level Cisco re-certification exam(s) every three years.
See also
Cisco Networking Academy
Cisco certifications
CompTIA Network+ Certification
DevNet
Cyber Ops
CCNP
CCIE Certification
References
External links
CCNA Certification
Information technology qualifications
Computer security qualifications
Cisco Systems | CCNA | Technology | 426 |
40,351,604 | https://en.wikipedia.org/wiki/HD%20142527 | HD 142527 is a binary star system in the constellation of Lupus. The primary star belongs to the Herbig Ae/Be class, while the companion, discovered in 2012, is a red dwarf star or accreting protoplanet with a projected separation of less than 0.1″. The system is notable for its circumbinary protoplanetary disk, and its discovery has helped refine models of planet formation. The orbit of the companion is strongly inclined relative to the circumbinary protoplanetary disk.
HD 142527 is listed in the International Variable star index as a UX Orionis variable, with a visible-light magnitude ranging from 8.27 to 8.60.
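For context, the quoted magnitude range can be converted into a brightness ratio using the standard relation m1 − m2 = −2.5 log10(F1/F2). The short sketch below applies this textbook formula to the values quoted above.

# Convert the quoted magnitude range (8.27 to 8.60) into a flux ratio
# using the definition m1 - m2 = -2.5 * log10(F1 / F2).
m_bright, m_faint = 8.27, 8.60
flux_ratio = 10 ** ((m_faint - m_bright) / 2.5)
print(f"Brightness varies by a factor of about {flux_ratio:.2f}")  # ~1.36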
Protoplanetary disk
HD 142527 is an extremely young star system, aged about 1 million years, so it still retains its protoplanetary disk, which has a mass of about 15% that of the Sun and a diameter of 980 AU.
Studies have shown eddies and vortex structures forming in the disk under the influence of two large planets. The system is important as it allows astronomers to observe the accretion process in planetary formation.
In early 2013, astronomers working with the ALMA telescope in Chile published an article reporting the discovery of two massive flows of matter in the system. Dust and gas are transferred from the periphery to the center through gravitational interaction with two giant planets whose masses are several times greater than the mass of Jupiter. The flows thus act as "pumps", moving material from the edge toward the center and "feeding" the star. The planets themselves have not been detected so far, due to a dense shroud of gas, but astronomers have proposed models that describe their existence.
Japanese astronomers have discovered particles of ice in the disk.
References
Lupus (constellation)
Binary stars
Herbig Ae/Be stars
142527
078092
CD-41 10447
Circumstellar disks
J15564188-4219232
F-type giants | HD 142527 | Astronomy | 407 |
1,569,600 | https://en.wikipedia.org/wiki/Thermal%20expansion | Thermal expansion is the tendency of matter to increase in length, area, or volume, changing its size and density, in response to an increase in temperature (usually excluding phase transitions).
Substances usually contract with decreasing temperature (thermal contraction), with rare exceptions within limited temperature ranges (negative thermal expansion).
Temperature is a monotonic function of the average molecular kinetic energy of a substance. As energy in particles increases, they start moving faster and faster, weakening the intermolecular forces between them and therefore expanding the substance.
When a substance is heated, molecules begin to vibrate and move more, usually creating more distance between themselves.
The relative expansion (also called strain) divided by the change in temperature is called the material's coefficient of linear thermal expansion and generally varies with temperature.
Prediction
If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures, along with many other state functions.
Contraction effects (negative expansion)
A number of materials contract on heating within certain temperature ranges; this is usually called negative thermal expansion, rather than "thermal contraction". For example, the coefficient of thermal expansion of water drops to zero as it is cooled to about 3.983 °C and then becomes negative below this temperature; this means that water has a maximum density at this temperature, and this leads to bodies of water maintaining this temperature at their lower depths during extended periods of sub-zero weather.
Other materials are also known to exhibit negative thermal expansion. Fairly pure silicon has a negative coefficient of thermal expansion over a range of cryogenic temperatures, roughly between 18 K and 120 K. ALLVAR Alloy 30, a titanium alloy, exhibits anisotropic negative thermal expansion across a wide range of temperatures.
Factors
Unlike gases or liquids, solid materials tend to keep their shape when undergoing thermal expansion.
Thermal expansion generally decreases with increasing bond energy, which also has an effect on the melting point of solids, so high melting point materials are more likely to have lower thermal expansion. In general, liquids expand slightly more than solids. The thermal expansion of glasses is slightly higher compared to that of crystals. At the glass transition temperature, rearrangements that occur in an amorphous material lead to characteristic discontinuities of coefficient of thermal expansion and specific heat. These discontinuities allow detection of the glass transition temperature where a supercooled liquid transforms to a glass.
Absorption or desorption of water (or other solvents) can change the size of many common materials; many organic materials change size much more due to this effect than due to thermal expansion. Common plastics exposed to water can, in the long term, expand by many percent.
Effect on density
Thermal expansion changes the space between particles of a substance, which changes the volume of the substance while negligibly changing its mass (the negligible amount comes from mass–energy equivalence), thus changing its density, which has an effect on any buoyant forces acting on it. This plays a crucial role in convection of unevenly heated fluid masses, notably making thermal expansion partly responsible for wind and ocean currents.
Coefficients
The coefficient of thermal expansion describes how the size of an object changes with a change in temperature. Specifically, it measures the fractional change in size per degree change in temperature at a constant pressure, such that lower coefficients describe lower propensity for change in size. Several types of coefficients have been developed: volumetric, area, and linear. The choice of coefficient depends on the particular application and which dimensions are considered important. For solids, one might only be concerned with the change along a length, or over some area.
The volumetric thermal expansion coefficient is the most basic thermal expansion coefficient, and the most relevant for fluids. In general, substances expand or contract when their temperature changes, with expansion or contraction occurring in all directions. Substances that expand at the same rate in every direction are called isotropic. For isotropic materials, the area and volumetric thermal expansion coefficient are, respectively, approximately twice and three times larger than the linear thermal expansion coefficient.
In the general case of a gas, liquid, or solid, the volumetric coefficient of thermal expansion is given by
αV = (1/V)(∂V/∂T)p
The subscript "p" to the derivative indicates that the pressure is held constant during the expansion, and the subscript V stresses that it is the volumetric (not linear) expansion that enters this general definition. In the case of a gas, the fact that the pressure is held constant is important, because the volume of a gas will vary appreciably with pressure as well as temperature. For a gas of low density this can be seen from the ideal gas law.
For various materials
This section summarizes the coefficients for some common materials.
For isotropic materials the coefficients of linear thermal expansion α and volumetric thermal expansion αV are related by αV = 3α.
For liquids usually the coefficient of volumetric expansion is listed and linear expansion is calculated here for comparison.
For common materials like many metals and compounds, the thermal expansion coefficient is inversely proportional to the melting point.
In particular, for metals the relation is approximately:
α ≈ 0.020/MP
and for halides and oxides:
α ≈ 0.038/MP − 7.0×10−6 K−1
where MP is the melting point in kelvins.
In the table below, the range for α is from 10−7 K−1 for hard solids to 10−3 K−1 for organic liquids. The coefficient α varies with the temperature and some materials have a very high variation; see for example the variation vs. temperature of the volumetric coefficient for a semicrystalline polypropylene (PP) at different pressure, and the variation of the linear coefficient vs. temperature for some steel grades (from bottom to top: ferritic stainless steel, martensitic stainless steel, carbon steel, duplex stainless steel, austenitic steel). The highest linear coefficient in a solid has been reported for a Ti-Nb alloy.
(The formula is usually used for solids.)
In solids
When calculating thermal expansion it is necessary to consider whether the body is free to expand or is constrained. If the body is free to expand, the expansion or strain resulting from an increase in temperature can be simply calculated by using the applicable coefficient of thermal expansion.
If the body is constrained so that it cannot expand, then internal stress will be caused (or changed) by a change in temperature. This stress can be calculated by considering the strain that would occur if the body were free to expand and the stress required to reduce that strain to zero, through the stress/strain relationship characterised by the elastic or Young's modulus. In the special case of solid materials, external ambient pressure does not usually appreciably affect the size of an object and so it is not usually necessary to consider the effect of pressure changes.
Common engineering solids usually have coefficients of thermal expansion that do not vary significantly over the range of temperatures where they are designed to be used, so where extremely high accuracy is not required, practical calculations can be based on a constant, average, value of the coefficient of expansion.
Length
Linear expansion means change in one dimension (length) as opposed to change in volume (volumetric expansion).
To a first approximation, the change in length measurements of an object due to thermal expansion is related to temperature change by a coefficient of linear thermal expansion (CLTE). It is the fractional change in length per degree of temperature change. Assuming negligible effect of pressure, one may write:
αL = (1/L)(dL/dT)
where L is a particular length measurement and dL/dT is the rate of change of that linear dimension per unit change in temperature.
The change in the linear dimension can be estimated to be:
ΔL/L = αLΔT
This estimation works well as long as the linear-expansion coefficient does not change much over the change in temperature ΔT, and the fractional change in length is small (ΔL/L ≪ 1). If either of these conditions does not hold, the exact differential equation (using dL/dT = αLL) must be integrated.
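As a quick illustration of the first-order estimate above, the short Python sketch below computes the change in length of a heated rod. The rod dimensions, temperature rise, and coefficient (a typical handbook value for steel) are illustrative assumptions, not values taken from this article.

# First-order estimate of linear thermal expansion: dL = alpha_L * L0 * dT
alpha_L = 12e-6  # assumed linear expansion coefficient of steel, per kelvin
L0 = 2.0         # assumed initial rod length, metres
dT = 50.0        # assumed temperature rise, kelvin

dL = alpha_L * L0 * dT
print(f"Change in length: {dL * 1000:.2f} mm")  # about 1.20 mm
print(f"Fractional change: {dL / L0:.6f}")      # 0.000600, small as required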
Effects on strain
For solid materials with a significant length, like rods or cables, an estimate of the amount of thermal expansion can be described by the material strain, given by εthermal and defined as:
εthermal = (Lfinal − Linitial)/Linitial
where Linitial is the length before the change of temperature and Lfinal is the length after the change of temperature.
For most solids, thermal expansion is proportional to the change in temperature:
εthermal ∝ ΔT
Thus, the change in either the strain or temperature can be estimated by:
εthermal = αLΔT
where
ΔT is the difference of the temperature between the two recorded strains, measured in degrees Fahrenheit, degrees Rankine, degrees Celsius, or kelvins, and αL is the linear coefficient of thermal expansion in "per degree Fahrenheit", "per degree Rankine", "per degree Celsius", or "per kelvin", denoted by °F−1, R−1, °C−1, or K−1, respectively. In the field of continuum mechanics, thermal expansion and its effects are treated as eigenstrain and eigenstress.
Area
The area thermal expansion coefficient relates the change in a material's area dimensions to a change in temperature. It is the fractional change in area per degree of temperature change. Ignoring pressure, one may write:
αA = (1/A)(dA/dT)
where A is some area of interest on the object, and dA/dT is the rate of change of that area per unit change in temperature.
The change in the area can be estimated as:
ΔA/A = αAΔT
This equation works well as long as the area expansion coefficient does not change much over the change in temperature ΔT, and the fractional change in area is small (ΔA/A ≪ 1). If either of these conditions does not hold, the equation must be integrated.
Volume
For a solid, one can ignore the effects of pressure on the material, and the volumetric (or cubical) thermal expansion coefficient can be written:
αV = (1/V)(dV/dT)
where V is the volume of the material, and dV/dT is the rate of change of that volume with temperature.
This means that the volume of a material changes by some fixed fractional amount. For example, a steel block with a volume of 1 cubic meter might expand to 1.002 cubic meters when the temperature is raised by 50 K. This is an expansion of 0.2%. If a block of steel has a volume of 2 cubic meters, then under the same conditions, it would expand to 2.004 cubic meters, again an expansion of 0.2%. The volumetric expansion coefficient would be 0.2% for 50 K, or 0.004% K−1.
If the expansion coefficient is known, the change in volume can be calculated from
ΔV/V = αVΔT
where ΔV/V is the fractional change in volume (e.g., 0.002) and ΔT is the change in temperature (50 °C).
The above example assumes that the expansion coefficient did not change as the temperature changed and that the increase in volume is small compared to the original volume. This is not always true, but for small changes in temperature, it is a good approximation. If the volumetric expansion coefficient does change appreciably with temperature, or the increase in volume is significant, then the above equation will have to be integrated:
ΔV/V = ∫ from Ti to Tf of αV(T) dT
where αV(T) is the volumetric expansion coefficient as a function of temperature T, and Ti and Tf are the initial and final temperatures respectively.
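When the coefficient varies appreciably with temperature, the integral above can be evaluated numerically. The Python sketch below does this with the trapezoidal rule for a hypothetical, linearly temperature-dependent coefficient; the functional form and constants are assumptions chosen only for illustration.

# Numerically evaluate dV/V = integral of alpha_V(T) dT from Ti to Tf
def alpha_V(T):
    # hypothetical temperature-dependent volumetric coefficient, per kelvin
    return 3.5e-5 + 2.0e-8 * (T - 273.15)

Ti, Tf, steps = 273.15, 373.15, 1000
h = (Tf - Ti) / steps
# trapezoidal rule over the temperature interval
dV_over_V = sum(0.5 * (alpha_V(Ti + i * h) + alpha_V(Ti + (i + 1) * h)) * h
                for i in range(steps))
print(f"Fractional volume change: {dV_over_V:.6f}")  # about 0.003600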
Isotropic materials
For isotropic materials the volumetric thermal expansion coefficient is three times the linear coefficient:
αV = 3αL
This ratio arises because volume is composed of three mutually orthogonal directions. Thus, in an isotropic material, for small differential changes, one-third of the volumetric expansion is in a single axis. As an example, take a cube of steel that has sides of length L. The original volume will be V = L³ and the new volume, after a temperature increase, will be
V + ΔV = (L + ΔL)³ = L³ + 3L²ΔL + 3LΔL² + ΔL³.
We can easily ignore the terms 3LΔL² and ΔL³, as ΔL is a small quantity which on squaring gets much smaller and on cubing gets smaller still.
So
ΔV/V = 3ΔL/L = 3αLΔT.
The above approximation holds for small temperature and dimensional changes (that is, when ΔT and ΔL are small), but it does not hold if trying to go back and forth between volumetric and linear coefficients using larger values of ΔT. In this case, the third term (and sometimes even the fourth term) in the expression above must be taken into account.
Similarly, the area thermal expansion coefficient is two times the linear coefficient:
αA = 2αL
This ratio can be found in a way similar to that in the linear example above, noting that the area of a face on the cube is just L². Also, the same considerations must be made when dealing with large values of ΔT.
Put more simply, if the length of a cubic solid expands from 1.00 m to 1.01 m, then the area of one of its sides expands from 1.00 m2 to 1.02 m2 and its volume expands from 1.00 m3 to 1.03 m3.
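That numerical example can be checked directly. The snippet below compares the exact changes in area and volume of a cube whose side grows by 1% with the small-change approximations αA ≈ 2αL and αV ≈ 3αL; the 1.00 m side length is the same illustrative figure used above.

# Exact versus approximate area and volume expansion for an isotropic cube
L0, L1 = 1.00, 1.01   # side length before and after heating, metres
s = L1 / L0 - 1.0     # fractional change in length (0.01)

print(f"Exact area change:   {L1**2 - L0**2:.4f} m2")  # 0.0201, roughly 2*s
print(f"Exact volume change: {L1**3 - L0**3:.4f} m3")  # 0.0303, roughly 3*s
print(f"Approximations:      {2*s:.4f}, {3*s:.4f}")    # 0.0200, 0.0300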
Anisotropic materials
Materials with anisotropic structures, such as crystals (with less than cubic symmetry, for example martensitic phases) and many composites, will generally have different linear expansion coefficients in different directions. As a result, the total volumetric expansion is distributed unequally among the three axes. If the crystal symmetry is monoclinic or triclinic, even the angles between these axes are subject to thermal changes. In such cases it is necessary to treat the coefficient of thermal expansion as a tensor with up to six independent elements. A good way to determine the elements of the tensor is to study the expansion by x-ray powder diffraction. The thermal expansion coefficient tensor for the materials possessing cubic symmetry (for e.g. FCC, BCC) is isotropic.
Temperature dependence
Thermal expansion coefficients of solids usually show little dependence on temperature (except at very low temperatures) whereas liquids can expand at different rates at different temperatures. There are some exceptions: for example, cubic boron nitride exhibits significant variation of its thermal expansion coefficient over a broad range of temperatures. Another example is paraffin which in its solid form has a thermal expansion coefficient that is dependent on temperature.
In gases
Since gases fill the entirety of the container which they occupy, the volumetric thermal expansion coefficient at constant pressure, , is the only one of interest.
For an ideal gas, a formula can be readily obtained by differentiation of the ideal gas law, pVm = RT. This yields
p dVm + Vm dp = R dT
where p is the pressure, Vm is the molar volume (Vm = V/n, with n the total number of moles of gas), T is the absolute temperature and R is equal to the gas constant.
For an isobaric thermal expansion, dp = 0, so that p dVm = R dT and the isobaric thermal expansion coefficient is:
αV = (1/Vm)(dVm/dT)p = R/(pVm) = 1/T
which is a strong function of temperature; doubling the temperature will halve the thermal expansion coefficient.
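The 1/T behaviour is easy to verify numerically. The Python sketch below differentiates the ideal-gas molar volume at constant pressure with a finite difference and confirms that doubling the temperature halves the coefficient; the pressure value is an arbitrary assumption and cancels out of the ratio.

# Isobaric expansion coefficient of an ideal gas: alpha_V = 1/T
R = 8.314      # gas constant, J/(mol K)
p = 101325.0   # assumed constant pressure, Pa (cancels in the ratio)

def alpha_V(T, dT=1e-3):
    Vm = lambda temp: R * temp / p   # molar volume at constant pressure
    return (Vm(T + dT) - Vm(T - dT)) / (2 * dT) / Vm(T)

print(alpha_V(273.15))  # about 0.003661 per kelvin, i.e. 1/273.15
print(alpha_V(546.30))  # about 0.001831 per kelvin, half the value above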
Absolute zero computation
From 1787 to 1802, it was determined by Jacques Charles (unpublished), John Dalton, and Joseph Louis Gay-Lussac that, at constant pressure, ideal gases expanded or contracted their volume linearly (Charles's law) by about 1/273 parts per degree Celsius of temperature's change up or down, between 0° and 100 °C. This suggested that the volume of a gas cooled at about −273 °C would reach zero.
In October 1848, William Thomson, a 24-year-old professor of Natural Philosophy at the University of Glasgow, published the paper On an Absolute Thermometric Scale.
In a footnote Thomson calculated that "infinite cold" (absolute zero) was equivalent to −273 °C (he called the temperature in °C as the "temperature of the air thermometers" of the time). This value of "−273" was considered to be the temperature at which the ideal gas volume reaches zero. By considering a thermal expansion linear with temperature (i.e. a constant coefficient of thermal expansion), the value of absolute zero was linearly extrapolated as the negative reciprocal of 0.366/100 °C – the accepted average coefficient of thermal expansion of an ideal gas in the temperature interval 0–100 °C, giving a remarkable consistency to the currently accepted value of −273.15 °C.
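Thomson's extrapolation can be reproduced in a couple of lines: with a constant coefficient of 0.366% per °C referenced to 0 °C, the extrapolated gas volume vanishes at the negative reciprocal of that coefficient.

# Linear extrapolation of ideal-gas volume to zero, as in Thomson's footnote
alpha = 0.366 / 100          # average expansion coefficient per degree Celsius
absolute_zero = -1 / alpha   # temperature at which the extrapolated volume vanishes
print(f"{absolute_zero:.1f} degrees Celsius")  # about -273.2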
In liquids
The thermal expansion of liquids is usually higher than in solids because the intermolecular forces present in liquids are relatively weak and its constituent molecules are more mobile. Unlike solids, liquids have no definite shape and they take the shape of the container. Consequently, liquids have no definite length and area, so linear and areal expansions of liquids only have significance in that they may be applied to topics such as thermometry and estimates of sea level rising due to global climate change. Sometimes, αL is still calculated from the experimental value of αV.
In general, liquids expand on heating, except cold water; below 4 °C it contracts, leading to a negative thermal expansion coefficient. At higher temperatures it shows more typical behavior, with a positive thermal expansion coefficient.
Apparent and absolute
The expansion of liquids is usually measured in a container. When a liquid expands in a vessel, the vessel expands along with the liquid. Hence the observed increase in volume (as measured by the liquid level) is not the actual increase in its volume. The expansion of the liquid relative to the container is called its apparent expansion, while the actual expansion of the liquid is called real expansion or absolute expansion. The ratio of apparent increase in volume of the liquid per unit rise of temperature to the original volume is called its coefficient of apparent expansion. The absolute expansion can be measured by a variety of techniques, including ultrasonic methods.
Historically, this phenomenon complicated the experimental determination of thermal expansion coefficients of liquids, since a direct measurement of the change in height of a liquid column generated by thermal expansion is a measurement of the apparent expansion of the liquid. Thus the experiment simultaneously measures two coefficients of expansion and measurement of the expansion of a liquid must account for the expansion of the container as well. For example, when a flask with a long narrow stem, containing enough liquid to partially fill the stem itself, is placed in a heat bath, the height of the liquid column in the stem will initially drop, followed immediately by a rise of that height until the whole system of flask, liquid and heat bath has warmed through. The initial drop in the height of the liquid column is not due to an initial contraction of the liquid, but rather to the expansion of the flask as it contacts the heat bath first.
Soon after, the liquid in the flask is heated by the flask itself and begins to expand. Since liquids typically have a greater percent expansion than solids for the same temperature change, the expansion of the liquid in the flask eventually exceeds that of the flask, causing the level of liquid in the flask to rise. For small and equal rises in temperature, the increase in volume (real expansion) of a liquid is equal to the sum of the apparent increase in volume (apparent expansion) of the liquid and the increase in volume of the containing vessel. The absolute expansion of the liquid is the apparent expansion corrected for the expansion of the containing vessel.
Examples and applications
The expansion and contraction of the materials must be considered when designing large structures, when using tape or chain to measure distances for land surveys, when designing molds for casting hot material, and in other engineering applications when large changes in dimension due to temperature are expected.
Thermal expansion is also used in mechanical applications to fit parts over one another, e.g. a bushing can be fitted over a shaft by making its inner diameter slightly smaller than the diameter of the shaft, then heating it until it fits over the shaft, and allowing it to cool after it has been pushed over the shaft, thus achieving a 'shrink fit'. Induction shrink fitting is a common industrial method to pre-heat metal components between 150 °C and 300 °C thereby causing them to expand and allow for the insertion or removal of another component.
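A back-of-the-envelope estimate of the heating needed for such a shrink fit follows from the linear-expansion relation: the bore must grow by at least the interference, so ΔT ≈ δ/(αL·d). The Python sketch below uses made-up dimensions and a typical steel coefficient; it is illustrative only, and real fits need allowances for tolerances and handling.

# Temperature rise needed to slip a bushing over a shaft (shrink fit)
alpha_L = 12e-6   # assumed expansion coefficient of the bushing, per kelvin
d = 0.050         # assumed bore diameter, metres (50 mm)
delta = 40e-6     # assumed diametral interference, metres (40 micrometres)

dT = delta / (alpha_L * d)
print(f"Required temperature rise: about {dT:.0f} K")  # about 67 K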
There exist some alloys with a very small linear expansion coefficient, used in applications that demand very small changes in physical dimension over a range of temperatures. One of these is Invar 36, with a coefficient of expansion approximately equal to 0.6×10−6 K−1. These alloys are useful in aerospace applications where wide temperature swings may occur.
Pullinger's apparatus is used to determine the linear expansion of a metallic rod in the laboratory. The apparatus consists of a metal cylinder closed at both ends (called a steam jacket). It is provided with an inlet and outlet for the steam. The steam for heating the rod is supplied by a boiler which is connected by a rubber tube to the inlet. The center of the cylinder contains a hole to insert a thermometer. The rod under investigation is enclosed in a steam jacket. One of its ends is free, but the other end is pressed against a fixed screw. The position of the rod is determined by a micrometer screw gauge or spherometer.
To determine the coefficient of linear thermal expansion of a metal, a pipe made of that metal is heated by passing steam through it. One end of the pipe is fixed securely and the other rests on a rotating shaft, the motion of which is indicated by a pointer. A suitable thermometer records the pipe's temperature. This enables calculation of the relative change in length per degree temperature change.
The control of thermal expansion in brittle materials is a key concern for a wide range of reasons. For example, both glass and ceramics are brittle and uneven temperature causes uneven expansion which again causes thermal stress and this might lead to fracture. Ceramics need to be joined or work in concert with a wide range of materials and therefore their expansion must be matched to the application. Because glazes need to be firmly attached to the underlying porcelain (or other body type) their thermal expansion must be tuned to 'fit' the body so that crazing or shivering do not occur. Good example of products whose thermal expansion is the key to their success are CorningWare and the spark plug. The thermal expansion of ceramic bodies can be controlled by firing to create crystalline species that will influence the overall expansion of the material in the desired direction. In addition or instead the formulation of the body can employ materials delivering particles of the desired expansion to the matrix. The thermal expansion of glazes is controlled by their chemical composition and the firing schedule to which they were subjected. In most cases there are complex issues involved in controlling body and glaze expansion, so that adjusting for thermal expansion must be done with an eye to other properties that will be affected, and generally trade-offs are necessary.
Thermal expansion can have a noticeable effect on gasoline stored in above-ground storage tanks: pumps may dispense gasoline that is denser than gasoline held in underground storage tanks in winter, or less dense than gasoline held in underground storage tanks in summer.
Heat-induced expansion has to be taken into account in most areas of engineering. A few examples are:
Metal-framed windows need rubber spacers.
Rubber tires need to perform well over a range of temperatures, being passively heated or cooled by road surfaces and weather, and actively heated by mechanical flexing and friction.
Metal hot water heating pipes should not be used in long straight lengths.
Large structures such as railways and bridges need expansion joints in the structures to avoid sun kink.
A gridiron pendulum uses an arrangement of different metals to maintain a more temperature stable pendulum length.
A power line sags on a hot day and is taut on a cold day, because the metal expands when heated and contracts when cooled.
Expansion joints absorb the thermal expansion in a piping system.
Precision engineering nearly always requires the engineer to pay attention to the thermal expansion of the product. For example, when using a scanning electron microscope small changes in temperature such as 1 degree can cause a sample to change its position relative to the focus point.
Liquid thermometers contain a liquid (usually mercury or alcohol) in a tube, which constrains it to flow in only one direction when its volume expands due to changes in temperature.
A bi-metal mechanical thermometer uses a bimetallic strip and bends due to the differing thermal expansion of the two metals.
See also
References
External links
Glass Thermal Expansion Thermal expansion measurement, definitions, thermal expansion calculation from the glass composition
Water thermal expansion calculator
DoITPoMS Teaching and Learning Package on Thermal Expansion and the Bi-material Strip
Engineering Toolbox – List of coefficients of Linear Expansion for some common materials
Article on how αV is determined
MatWeb: Free database of engineering properties for over 79,000 materials
USA NIST Website – Temperature and Dimensional Measurement workshop
Hyperphysics: Thermal expansion
Understanding Thermal Expansion in Ceramic Glazes
Thermal Expansion Calculators
Thermal expansion via density calculator
Thermodynamics
Heat transfer
Physical properties
Building defects | Thermal expansion | Physics,Chemistry,Materials_science,Mathematics | 4,905 |
261,109 | https://en.wikipedia.org/wiki/Fork%20bomb | In computing, a fork bomb (also called rabbit virus) is a denial-of-service (DoS) attack wherein a process continually replicates itself to deplete available system resources, slowing down or crashing the system due to resource starvation.
History
Around 1978, an early variant of a fork bomb called wabbit was reported to run on a System/360. It may have descended from a similar attack called RABBITS reported from 1969 on a Burroughs 5500 at the University of Washington.
Implementation
Fork bombs operate both by consuming CPU time in the process of forking, and by saturating the operating system's process table. A basic implementation of a fork bomb is an infinite loop that repeatedly launches new copies of itself.
In Unix-like operating systems, fork bombs are generally written to use the fork system call. As forked processes are also copies of the first program, once they resume execution from the next address at the frame pointer, they continue forking endlessly within their own copy of the same infinite loop. This has the effect of causing an exponential growth in processes. As modern Unix systems generally use a copy-on-write resource management technique when forking new processes, a fork bomb generally will not saturate such a system's memory.
Microsoft Windows operating systems do not have an equivalent functionality to the Unix fork system call; a fork bomb on such an operating system must therefore create a new process instead of forking from an existing one, such as with batch echo %0^|%0 > $_.cmd & $_. In this batch script, %0|%0 is written to $_.cmd, which is then executed by & $_.
A classic example of a fork bomb is one written in Unix shell :(){ :|:& };:, possibly dating back to 1999, which can be more easily understood as
fork() {
fork | fork &
}
fork
In it, a function is defined (fork()) as calling itself (fork), then piping (|) its result into itself, all in a background job (&).
The code using a colon : as the function name is not valid in a shell as defined by POSIX, which only permits alphanumeric characters and underscores in function names. However, its usage is allowed in GNU Bash as an extension.
Prevention
As a fork bomb's mode of operation is entirely encapsulated by creating new processes, one way of preventing a fork bomb from severely affecting the entire system is to limit the maximum number of processes that a single user may own. On Linux, this can be achieved by using the ulimit utility; for example, the command ulimit -u 30 would limit the affected user to a maximum of thirty owned processes.
On PAM-enabled systems, this limit can also be set in /etc/security/limits.conf,
and on *BSD, the system administrator can put limits in /etc/login.conf.
Modern Linux systems also allow finer-grained fork bomb prevention through cgroups and process number (PID) controllers.
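The per-user process limit can also be inspected and tightened programmatically. The Python sketch below uses the standard resource module on a Unix-like system; the limit of 30 simply mirrors the ulimit example above and is not a recommended production value.

# Query and lower the soft limit on process count (Unix-like systems only)
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print(f"current limits: soft={soft}, hard={hard}")

# Lower the soft limit for this process and its children; fork() calls
# beyond the limit then fail with EAGAIN instead of exhausting the system.
resource.setrlimit(resource.RLIMIT_NPROC, (30, hard))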
See also
Billion laughs attack
Deadlock (computer science)
Logic bomb
Time bomb (software)
References
External links
Denial-of-service attacks
Process (computing) | Fork bomb | Technology | 673 |
3,339,824 | https://en.wikipedia.org/wiki/Hoop%20conjecture | The hoop conjecture, proposed by Kip Thorne in 1972, states that an imploding object forms a black hole when, and only when, a circular hoop with a specific critical circumference could be placed around the object and rotated about its diameter. In simpler terms, the entirety of the object's mass must be compressed so that it resides within a sphere whose radius equals that object's Schwarzschild radius; if this requirement is not met, a black hole will not form. The critical circumference of the imaginary hoop is given by:
C = 2πRs = 4πGM/c²
where
C is the critical circumference;
Rs = 2GM/c² is the object's Schwarzschild radius;
M is the object's mass, G is the gravitational constant, and c is the speed of light.
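For a sense of scale, the Python sketch below evaluates the Schwarzschild radius and critical circumference for one solar mass; the physical constants are standard rounded values.

# Critical hoop circumference C = 4*pi*G*M/c**2 for one solar mass
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 1.989e30    # solar mass, kg

R_s = 2 * G * M / c**2   # Schwarzschild radius, about 2.95 km
C = 2 * math.pi * R_s    # critical circumference, about 18.6 km
print(f"R_s = {R_s / 1000:.2f} km, C = {C / 1000:.2f} km")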
Thorne calculated the effects of gravitation on objects of different shapes (spheres, and cylinders that are infinite in one direction), and concluded that the object needed to be compressed in all three directions before gravity led to the formation of a black hole. With cylinders, the event horizon was formed when the object could fit inside the hoop described above. The mathematics to prove the same for objects of all shapes was too difficult for him at that time, but he formulated his hypothesis as the hoop conjecture.
By the Penrose singularity theorem of 1964 it is known that if a trapped null surface exists (along with some other conditions), then a singularity must form. In 1983, Schoen and Yau proved how much matter must be crammed into a given volume to create a closed trapped surface, a result sometimes referred to as the Schoen–Yau black hole existence theorem. More recently, in 2023, using Gromov's "cube inequality", the torus inequalities used in the 1983 results were generalized to cube inequalities, which are more akin to Thorne's circular hoops.
See also
General relativity
Bounding sphere
Black hole stability conjecture
References
Thorne, Kip, Black Holes and Time Warps: Einstein's Outrageous Legacy, W. W. Norton & Company; Reprint edition, January 1, 1995. .
General relativity | Hoop conjecture | Physics | 416 |
69,998,614 | https://en.wikipedia.org/wiki/TVE%20test%20card | The TVE colour test card (Spanish: Carta de ajuste en color de TVE) was an electronic analogue TV test card adopted by Televisión Española with the introduction of PAL colour broadcasts in 1975. It is notable for its unique design, created by the Danish engineer Finn Hendil (1939–2011) in 1973, under the supervision of Erik Helmer Nielsen at the Philips TV & Test Equipment laboratory in Amager, south of Copenhagen, the same team that developed the popular Philips PM5544 test pattern. It replaced a previous black and white version developed by Eduardo Gavilán.
The test card was considered part of the regular TV schedule, figuring among daily program listings published in newspapers and magazines. It was said to be the most viewed program in some days due to people watching the test card while waiting for broadcasts to start in the afternoon. It was also relevant in the context of general work strikes, where the test card was sometimes broadcast in place of regular programming, marking it a visible sign of the strike's success.
It was used on several TVE channels, like TVE 1, TVE 2, Canal Clásico, Teledeporte or TVE Internacional.
With the start of continuous 24-hour broadcasting on TVE's channels, the test card was phased out. It stopped being broadcast on La Primera in 1996 and on La 2 in the early morning hours of 6 January 2001, although it continued to be broadcast sporadically on Teledeporte and TVE Internacional until 2005.
Operation and features
As Televisión Española adopted the PAL colour system in 1975, the test card has specific elements that allow proper colour adjustments. Being a creation of the same team behind the Philips PM5544 test card, it has many elements in common with it (like colour and grey bars or castellations), but introduces some differences (for example, different resolution gratings and coloured background rectangle and circle).
There were two generations of the TVE test card. The original was generated by a heavily modified PM5544 generator, which displayed the station name at the bottom of the circle using a programmable character generator. From the early 1990s onwards the appearance of the test card changed: the station name became a graphic and the clock font became identical to that of the PM5644 (which was available by that time and likely explains these changes), so the original hardware was probably replaced.
Castellations
The alternating white and black boxes around the perimeter are called castellations. They are used to set overscan (castellations should be visible) and check for the low-frequency response of the entire transmission chain.
Grid
The background features a grid composed of perfect squares of 100% intensity white lines.
This element allows:
Verify image geometry (horizontal and vertical size and linearity, cushion or barrel distortion effects);
Adjust CRT convergence (the three electron guns, one for each primary color, need to target the same place);
Adjust CRT focus;
Check CRT color purity when displaying the 50% intensity gray background.
Rectangle
This element is composed of an orange rectangle, framed with a white line, and located at the image center.
It allows for:
Checking proper chrominance delay, essential for good PAL system operation;
Visualizing low-frequency image distortions;
Adjusting maximum color saturation.
Signal values of this element are:
Circle
This element is composed of a light blue circle, also located at the center of the image. With a diameter of 512 lines, it overlaps the rectangle mentioned previously. The circle provides a quick overview of image geometry.
Signal values of this element are:
Box
Located at the top of the circle and composed of 100% white lines, it allows for verification of the low-frequency response of the transmission chain.
Colour bars
Inside the circle, there's a section of colour bars with 75% amplitude and 100% saturation (EBU color bars), that allows checking chrominance parameters on a vectorscope or waveform monitor.
The signal values of these bars are:
Centre Grid
This element is composed of 100% white lines located at the centre of the image, between the colour bars and the greyscale. It helps with image centring adjustment and allows checking for CRT convergence at the centre of the screen.
Greyscale Bars
Beneath the colour bars, there's a greyscale bar with six steps. This allows checking gamma correction of the television receiver, and linearity response of the transmission chain.
The brightness value of each step varies with a ratio of 20%, as follows:
Grating Bars
Located within the circle, the gratings are composed of alternating white and black lines.
Horizontal frequency response (horizontal resolution) can be determined by five frequency gratings of 0.5, 1.25, 2.25, 4.2, and 4.8 MHz. The last two gratings must show interference from the 4.43 MHz PAL colour carrier.
Pulse Signal
A pulse signal bar is placed under the frequency gratings, consisting of a black rectangle with a white vertical line, corresponding to a 2T pulse. This signal shows the status of the transmission chain at high frequencies, as well as ghosting due to signal echoes.
Station Identification
Other elements like TV network identification ("TVE", "La Primera", "TVE2", "Teledeporte", "Canal Clásico", "TVE Internacional"), specific TV channel logos or a clock were usually added to the test pattern.
See also
Philips PM5540
Telefunken FuBK
References
RTVE
Telecommunications-related introductions in 1973
1975 establishments in Spain
2001 disestablishments in Spain
Danish inventions
Test cards
Broadcast engineering | TVE test card | Engineering | 1,168 |
49,032,691 | https://en.wikipedia.org/wiki/OST%20Family | Members of the organic solute transporter (OST) family (TC# 2.A.82) (Slc51 genes) have been characterized from a small bottom feeding species of fish called the little skate, Raja erinacea. Members have also been characterized from humans and mice. The OST family is a member of the larger group of secondary carriers, the APC superfamily.
Substrates
Substrates for OST transporters include a variety of organic compounds, most being anionic. Transport of estrone sulfate by the two subunit Ost transporter of Raja erinacea (TC# 2.A.82.1.1) is Na+-independent, ATP-independent, saturable and inhibited by other steroids and anionic drugs. Bile acids such as taurocholate as well as digoxin and prostaglandin E2 are substrates of this system, while estradiol 17β-D-glucuronide and p-aminohippurate are apparently not. Mammalian homologues (e.g., 2.A.82.1.2) similarly exhibit broad substrate specificity, transporting the same compounds, possibly by an anion:anion exchange mechanism.
Transport reaction
The generalized transport reaction catalyzed by OSTα/OSTβ is:
organic anion (out) ⇌ organic anion (in)
Structure
Each transport system consists of two polypeptide chains, designated α and β. For the human protein (TC# 2.A.82.1.2), the α-subunit is of 340 amino acyl residues (aas) with 7 putative transmembrane segments (TMSs) while the β-subunit is of 128 aas with 1 putative TMS near the N-terminus (residues 40-56). Neither OSTα nor OSTβ alone has activity, both serving not only for heterodimerization and trafficking but also for function. The two proteins are highly expressed in many human tissues. The β-subunit is not required to target the α-subunit to the plasma membrane, but coexpression of both genes is required to convert OSTα to the mature glycosylated protein in enterocyte basolateral membranes and possibly for trafficking through the Golgi apparatus. OSTαβ proteins are made in a variety of tissues including the small intestine, colon, liver, biliary tract, kidney, and adrenal gland. In polarized epithelial cells, they are localized to the basolateral membrane and function in the export or uptake of bile acids and steroids. Homologues of OSTα are found in many eukaryotes including animals (both vertebrates and invertebrates), plants, fungi and slime molds. Homologues of OSTβ are found only in vertebrate animals.
Crystal structures
As of early 2016, no crystal structures had been determined. However, bioinformatics approaches combining homology modelling and mutation experiments have been used to explore the heterodimeric nature of the system as well as the mechanisms of substrate recognition and transport.
See also
Organic cation transport proteins
Organic anion-transporting polypeptide
Solute carrier family
Osmoregulation
Organic anion transporter 1
Transporter Classification Database
References
Further reading
Membrane proteins
Enzymes of known structure
Solute carrier family
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins | OST Family | Biology | 715 |
58,130,936 | https://en.wikipedia.org/wiki/Coniferous%20resin%20salve | Spruce resin salve is a traditional wound treatment method that has regained popularity after clinical studies in the 21st century. The pure coniferous resin from Norway spruce is antimicrobial against a wide range of bacteria and fungi and is positively associated with progressive healing of wounds. The improvement is not limited to the healing of infected wounds, suggesting that the resin positively influences mechanisms that play a role in wound repair.
History
The first reports of using resins or rosins in medicine are from antiquity, and resins have been used for nearly every kind of human disorder and disease. The first medical publication on the use of coniferous resin in medical practice in Finland is from 1578, when the Swedish physician Benedictus Olai wrote about natural resin in the treatment of old leg wounds in the first medical textbook of the Swedish kingdom.
Elias Lönnrot presented the first recipe for resin salve in his 1866 book Flora Fennica.
Biological effects
Natural resin is a complex composition of components such as resin acids, lignans and coumaric acid. The levels of these components depend on the type of coniferous tree the resin comes from and on when it is collected, i.e. fresh physiological resin or matured resin collected from the trunk of the tree.
In vitro studies have shown that natural resin is strongly antimicrobial against a broad spectrum of common bacteria, fungi and yeasts. The antimicrobial effect is based on resin acids, which break down the cell wall and the cell membrane so that the cell can no longer produce energy and eventually dies. Microbiological studies have shown that resin is also effective against antibiotic-resistant microbes (MRSA and VRE). Spruce resin affects both gram-positive and gram-negative bacteria.
Reducing the bacterial and fungal contamination of the wound is generally known to improve the wound healing.
In clinical tests, a 10% resin salve (Abilar) has been shown to improve wound healing and reduce pain in various wounds, including pressure ulcers, complicated surgical wounds and diabetic foot ulcers.
Contraindications
Persons allergic to resin acids should not use these types of products, since they may develop a topical skin rash. One unselected general-population study of 793 Danish adults in 1992 showed a prevalence of colophony allergy of 0.4% in men and 1.0% in women.
References
Traditional medicine
Resins
Conifers | Coniferous resin salve | Physics | 486 |
1,937,745 | https://en.wikipedia.org/wiki/Critical%20hit | In many role-playing games and video games, a critical hit (or crit) is a chance that a successful attack will deal more damage than a normal blow.
The concept of critical hits originates from wargames and role-playing games, as a way to simulate luck, and crossed over into video games in the 1986 JRPG Dragon Quest, set at a fixed rate of 1/64 (~1.56%). However, many other video games that use critical hits may have ways of increasing the likelihood of them occurring, such as by increasing the player character's level or attack statistic.
Both role-playing games and video games may also opt to use a less traditional version of critical hits, either by using different names, offering different effects than dealing more damage, including specific targets or weakpoint(s), and rarely by the inclusion of critical miss effects.
Origin
Critical hits originate from the Reiswitzian Kriegsspiel, which they were added into shortly after the death of Georg von Reisswitz in 1827.
The 1975 role-playing game Empire of the Petal Throne introduced the concept of critical hits (though not the phrase) into role-playing. Using these rules, a player who rolls a 20 on a 20-sided die does double the normal damage, and a 20 followed by a 19 or 20 counts as a killing blow. According to creator M.A.R. Barker, "this simulates the 'lucky hit' on a vital organ."
Types
Critical hits are meant to simulate an occasional "lucky hit". The concept represents the effect of hitting an artery, or finding a weak point, such as a stab merely in the leg causing less damage than a stab in the Achilles tendon. Critical hits are almost always random, although character attributes or situational modifiers may come into play. For example, games in which the player characters have a "Luck" attribute will often base the likelihood of critical hits occurring on this statistic: a character with high Luck will deal a higher percentage of critical hits, while a character with low Luck may, in some games, be struck by more critical hits. In the role-playing game Dungeons & Dragons, when a player character attacks an opponent the player typically rolls a 20-sided die; a roll of 20 (a 5% chance) results in a critical hit.
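The 5% figure can be checked with a quick simulation. The Python sketch below rolls a virtual 20-sided die a million times and, for comparison, also simulates the fixed 1/64 rate mentioned earlier; the roll count is arbitrary.

# Monte Carlo check of critical-hit rates: d20 versus a fixed 1/64 chance
import random

rolls = 1_000_000
d20_crits = sum(1 for _ in range(rolls) if random.randint(1, 20) == 20)
fixed_crits = sum(1 for _ in range(rolls) if random.random() < 1 / 64)

print(f"d20 crit rate:  {d20_crits / rolls:.4f}")    # about 0.0500 (5%)
print(f"1/64 crit rate: {fixed_crits / rolls:.4f}")  # about 0.0156 (~1.56%)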
The most common kind of critical hit simply deals additional damage, most commonly dealing double the normal damage that would have been dealt, but many other formulas exist as well (such as ignoring defense of the target or always awarding the maximum possible damage). Critical hits also occasionally do "special damage" to represent the effects of specific wounds (for example, losing use of an arm or eye, or being reduced to a limp). Critical hits usually occur only with normal weapon attacks, not with magic or other special abilities, but this depends on the individual game's rules.
Many tabletop and video games use "ablative" hit point systems. That is, wounded characters often have no game differences from unwounded characters other than a reduction in hit points. Critical hits originally provided a way to simulate wounds to a specific part of the body. These systems usually use lookup charts and other mechanics to determine which wound was inflicted. In RPGs with non-humanoid characters or monsters, unlikely or bizarre results could occur, such as a Beholder with a "lost leg". Most systems now simply award extra damage on a critical hit, trading realism for ease of play. The effect of a critical hit is to break up the monotony of a battle with high, unusual results.
In the Brazilian RPG Tagmar, according to the result of a dice roll, the victim of a critical hit is significantly wounded or even instantly killed (regardless of hit points).
The roleplaying game Rolemaster is known for its extended system of criticals. One long-standing claim from its company ICE is that it is not the normal hits that kill, but the critical. By integrating criticals even on low results by varying the critical severity (from A (minor) - J (extreme)) and the large variety of criticals (e.g. Slash, Krush, Puncture, Heat, Cold, Electricity, Impact, Unarmed Strikes and even some bizarre ones such as Internal Disruption and Essence criticals), every combat plays out differently. Critical results vary from simple additional hits, and added bleeding and stuns to limbs lopped off and internal organs destroyed. Player characters are not immune to the effects of a critical hit in this system.
Many games call critical hits by other names. For example, in Chrono Trigger, a double hit is a normal attack in which a player character strikes an enemy twice in the same turn. The EarthBound series refers to critical hits as a smash hit (known in-game as "SMAAAASH!!"). The American NES release of Dragon Warrior II referred to an enemy's critical hits as "heroic attacks". In the Mario & Luigi subseries, critical hits are known as "lucky hits", whereas the word "critical" is instead used for attacks that are elementally effective (e.g. fire against plants). Players frequently use the abbreviation crit or critical for "critical hit".
Team Fortress 2 uses a Critical and "Mini-Crit" system. Criticals deal three times the normal damage (and are not weaker at long range, unlike most damage), whereas "mini-crits" only increase damage by 35%. In addition to most weapons having a random chance to crit, some weapons have mechanics that guarantee them when used correctly, such as sniping weapons being capable of headshots (see below).
Critical miss
The negative counterpart of the critical hit is variously known as the critical miss, critical fumble, or critical failure. The concept is less frequently borrowed than that of critical hits. Many tabletop role-playing games use some variation on this concept (such as a "botch" in the Storyteller System), but few computer role-playing games implement critical misses except where the game is directly based on a tabletop game in which such rules appear. Video games are more likely to have a separate system for determining whether attacks miss, using mechanics such as accuracy and evasion.
Headshot
In shooter games, the concept of a critical hit is often substituted by the headshot, where a player attempts to place a shot on an opposed player or non-player character's head area or other weak spot, which is generally fatal, or otherwise devastating, when successfully placed. Headshots require considerable accuracy as players often have to compensate for target movement and a very specific area of the enemy's body. It is commonly used in first-person shooter video games such as Counter-Strike 2, Tactical Ops, and Unreal Tournament. In some games, even when the target is stationary, the player may have to compensate for movement generated by the telescopic sight.
The concept of head shots had been around in arcade light gun shooter electro-mechanical games since the late 1960s. In Sega's Duck Hunt, which began location testing in 1968 and released in January 1969, the player could shoot anywhere on the screen, including anywhere on the target's body. It awarded the player a higher score for a head shot, earning 15 points, whereas a standard body shot earned 10 points.
The earliest commercial first-person shooter video game to make use of headshots was GoldenEye 007 for the Nintendo 64; however, headshots and other location based damage for humanoid type creatures had earlier appeared in the original Team Fortress modification for Quake released the same year, although they were demonstrated and tested in a standalone TF Sniper "modification" created by the same team earlier that year.
Notes
Role-playing game terminology
Video game terminology | Critical hit | Technology | 1,605 |
57,060,791 | https://en.wikipedia.org/wiki/Postia%20amylocystis | Postia amylocystis is a species of poroid fungus in the family Fomitopsidaceae. Found in China, the fungus was described as new to science in 1994 by mycologists Yu-Cheng Dai and Pertti Renvall. The original type collections were made in the Changbai Mountain Range, where the fungus was found growing on a decayed trunk of Manchurian lime (Tilia mandshurica). Characteristics that distinguish P. amylocystis from other Postia species include thick-walled cystidia in the hymenium, and narrow, sausage-shaped (allantoid) spores. The specific epithet amylocystis refers to the amyloid cystidia, and hints at a possible phylogenetic relationship to Amylocystis lapponica.
References
Fungi described in 1994
Fungi of China
Fomitopsidaceae
Taxa named by Yu-Cheng Dai
Fungus species | Postia amylocystis | Biology | 185 |
3,855,235 | https://en.wikipedia.org/wiki/Lithium%20triborate | Lithium triborate (LiB3O5) or LBO is a non-linear optical crystal. It has a wide transparency range, moderately high nonlinear coupling, high damage threshold and desirable chemical and mechanical properties. This crystal is often used for second harmonic generation (SHG, also known as frequency doubling), for example of Nd:YAG lasers (1064 nm → 532 nm). LBO can be both critically and non-critically phase-matched. In the latter case the crystal has to be heated or cooled depending on the wavelength.
Lithium triborate was discovered and developed by Chen Chuangtian and others of the Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences. It has been patented.
Chemical properties
Point group: mm2
Lattice parameters: a=8.4473 Å, b=7.3788 Å, c=5.1395 Å
Mohs hardness: 6
Transmission range: 0.16 – 2.6 μm
Damage threshold: 25 J/cm2 (1064 nm, 10 ns pulses)
Thermal expansion coefficients: x: 10.8×10−5/K, y: −8.8×10−5/K, z: 3.4×10−5/K
Specific heat: 1060 J/kg·K
Melting point: 834 °C
Applications of lithium triborate (LBO) crystal
Lithium triborate (LBO) crystals are applicable in various nonlinear optical applications:
Frequency doubling and frequency tripling of high peak power pulsed Nd doped, Ti-sapphire lasers and dye lasers
NCPM (non-critical phase matching) for frequency conversion of CW and quasi CW radiation
OPO (Optical parametric oscillator) of both Type 1 and Type 2 phase-matching
References
External links
LBO Crystal (Lithium Triborate) at www.redoptronics.com
Borates
Nonlinear optical materials
Crystals
Lithium compounds | Lithium triborate | Chemistry,Materials_science | 397 |
20,124,848 | https://en.wikipedia.org/wiki/Splash%20zone | In offshore construction, the splash zone is the transition from air to water when lowering heavy burdens into the sea. The overall efforts applied on the crane change dramatically when the load starts touching water, up to the point where it is completely submerged. Its buoyancy reduces the static mass that the crane has to support, but contact with the waves creates widely fluctuating dynamic forces.
Simulation of these changing efforts is necessary to correctly dimension cranes and lifting equipment. See for example DNV-RP-H103 (Det Norske Veritas recommended practices) for a mention of the piston effect created in the splash zone between two walls.
Specially made access tools are often built for inspections or maintenance in the splash zone, typically down to 15 m depth. This zone is very difficult for divers or remotely operated vehicles (ROVs) to access because of waves and current, and rigging of equipment in this zone requires special precautions for the same reason. By using remotely operated equipment (robots) that holds on to the structures, work and inspections can be carried out in a zone that was previously regarded as inaccessible.
Offshore engineering | Splash zone | Engineering | 234 |
61,886,551 | https://en.wikipedia.org/wiki/V%C4%9Bra%20Trnkov%C3%A1 | Věra Šedivá-Trnková (March 16, 1934 – 27 May 2018) was a Czech mathematician known for her work in topology and in category theory.
Early life and education
Trnková was born on March 16, 1934, in Berehove, then in Czechoslovakia and now in Ukraine; her father was a forester. By the time she was in high school, her family lived in Prague, and she went to Charles University for study in mathematics. There, she worked with Miroslav Katětov on general topology, earning a master's degree in 1957 with the thesis Collectionwise normal and strongly paracompact spaces on strengthened definitions for normal spaces.
She continued her work on topology at Charles University as a doctoral student of Eduard Čech, earning a candidate's degree (the Czech equivalent of a Ph.D.) in 1961 with the dissertation Non-F-Topologies. Much later, in 1989, she was also given the Dr.Sc. degree, corresponding to a habilitation.
Later life and career
In 1960, while still a student, Trnková became an assistant professor at Charles University. She was promoted to docent (associate professor), senior researcher, and full professor in 1967, 1986, and 1991, respectively.
She became Professor Emeritus in 1999, although she remained active in both teaching and research until a few years before her death on 27 May 2018.
Research
Despite beginning her career working in general topology, Trnková shifted as early as 1962 to category theory. Her work in this area included the study of formal completions of categories, the embeddings of categories into categories of topological spaces, category-theoretic automata theory, and the isomorphism of product objects in categories.
She became the author of over 100 research papers and two monographs:
Combinatorial, algebraic and topological representations of groups, semigroups and categories (with Aleš Pultr, North-Holland Mathematical Library 22, North-Holland, 1980)
Automata and algebras in categories (with Jiří Adámek, Mathematics and its Applications 37, Kluwer, 1990)
References
External links
Home page (archived 26 March 2015)
Věra Trnková on nLab
1934 births
2018 deaths
Czech mathematicians
Women mathematicians
Charles University alumni
Academic staff of Charles University
Topologists
Category theorists | Věra Trnková | Mathematics | 471 |
2,922,083 | https://en.wikipedia.org/wiki/Kappa%20Canis%20Majoris | Kappa Canis Majoris, Latinized from κ Canis Majoris, is a solitary, blue-white hued star in the constellation Canis Major. It is visible to the naked eye with an apparent visual magnitude of +3.87. Based upon an annual parallax shift of 7.70 mas as seen from Earth, this star is located about 660 light years from the Sun.
This is a B-type main-sequence star with a stellar classification of B1.5 Ve, although Hiltner et al. (1969) classified it as B1.5 IVe, suggesting it is a subgiant star. The 'e' suffix indicates it is a rapidly rotating Be star with a circumstellar decretion disk of heated gas. The radius of the emitting disk is about 3.7 times the radius of the star. It is classified as a Gamma Cassiopeiae type variable star and its brightness varies from magnitude +3.4 to +3.97. The star became 50% brighter between 1963 and 1978, increasing from about magnitude 3.96 to 3.52.
Naming
In Chinese, an asterism whose name means Bow and Arrow consists of κ Canis Majoris, δ Canis Majoris, η Canis Majoris, HD 63032, HD 65456, ο Puppis, k Puppis, ε Canis Majoris and π Puppis; κ Canis Majoris itself takes its Chinese name from its membership of this asterism.
References
B-type main-sequence stars
Gamma Cassiopeiae variable stars
Canis Majoris, Kappa
Canis Major
Durchmusterung objects
Canis Majoris, 13
050013
032759
2538 | Kappa Canis Majoris | Astronomy | 355 |
227,167 | https://en.wikipedia.org/wiki/Dementia%20with%20Lewy%20bodies | Dementia with Lewy bodies (DLB) is a type of dementia characterized by changes in sleep, behavior, cognition, movement, and regulation of automatic bodily functions. Memory loss is not always an early symptom. The disease worsens over time and is usually diagnosed when cognitive impairment interferes with normal daily functioning. Together with Parkinson's disease dementia, DLB is one of the two Lewy body dementias. It is a common form of dementia, but the prevalence is not known accurately and many diagnoses are missed. The disease was first described on autopsy by Kenji Kosaka in 1976, and he named the condition several years later.
REM sleep behavior disorder (RBD)—in which people lose the muscle paralysis (atonia) that normally occurs during REM sleep and act out their dreams—is a core feature. RBD may appear years or decades before other symptoms. Other core features are visual hallucinations, marked fluctuations in attention or alertness, and parkinsonism (slowness of movement, trouble walking, or rigidity). A presumptive diagnosis can be made if several disease features or biomarkers are present; the diagnostic workup may include blood tests, neuropsychological tests, imaging, and sleep studies. A definitive diagnosis usually requires an autopsy.
Most people with DLB do not have affected family members, although occasionally DLB runs in a family. The exact cause is unknown but involves formation of abnormal clumps of protein in neurons throughout the brain. Manifesting as Lewy bodies (discovered in 1912 by Frederic Lewy) and Lewy neurites, these clumps affect both the central and the autonomic nervous systems. Heart function and every level of gastrointestinal function—from chewing to defecation—can be affected, constipation being one of the most common symptoms. Low blood pressure upon standing can also occur. DLB commonly causes psychiatric symptoms, such as altered behavior, depression, or apathy.
DLB typically begins after the age of fifty, and people with the disease have an average life expectancy, with wide variability, of about four years after diagnosis. There is no cure or medication to stop the disease from progressing, and people in the latter stages of DLB may be unable to care for themselves. Treatments aim to relieve some of the symptoms and reduce the burden on caregivers. Medicines such as donepezil and rivastigmine can temporarily improve cognition and overall functioning, and melatonin can be used for sleep-related symptoms. Antipsychotics are usually avoided, even for hallucinations, because severe reactions occur in almost half of people with DLB, and their use can result in death. Management of the many different symptoms is challenging, as it involves multiple specialties and education of caregivers.
Classification and terminology
Dementia with Lewy bodies (DLB) is a type of dementia, a group of diseases involving progressive neurodegeneration of the central nervous system. It is one of the two Lewy body dementias, along with Parkinson's disease dementia.
Dementia with Lewy bodies can be classified in other ways. The atypical parkinsonian syndromes include DLB, along with other conditions. Also, DLB is a synucleinopathy, meaning that it is characterized by abnormal deposits of alpha-synuclein protein in the brain. The synucleinopathies include Parkinson's disease, multiple system atrophy, and other rarer conditions.
The vocabulary of diseases associated with Lewy pathology causes confusion. Lewy body dementia (the umbrella term that encompasses the clinical diagnoses of dementia with Lewy bodies and Parkinson's disease dementia) differs from Lewy body disease (the term used to describe pathological findings of Lewy bodies on autopsy). Because individuals with Alzheimer's disease (AD) are often found on autopsy to also have Lewy bodies, DLB has been characterized as an Alzheimer disease-related dementia; the term Lewy body variant of Alzheimer disease is no longer used because the predominant pathology for these individuals is related to Alzheimer's. Even the term Lewy body disease may not describe the true nature of this group of diseases; a unique genetic architecture may predispose individuals to specific diseases with Lewy bodies, and naming controversies continue.
Signs and symptoms
DLB is dementia that occurs with "some combination of fluctuating cognition, recurrent visual hallucinations, rapid eye movement (REM) sleep behavior disorder (RBD), and parkinsonism", according to Armstrong (2019), when Parkinson's disease is not well established before the dementia occurs. DLB has widely varying symptoms and is more complex than many other dementias. Several areas of the nervous system (such as the autonomic nervous system and numerous regions of the brain) can be affected by Lewy pathology, in which the alpha-synuclein deposits cause damage and corresponding neurologic deficits.
In DLB, there is an identifiable set of early signs and symptoms; these are called the prodromal, or pre-dementia, phase of the disease. These early signs and symptoms can appear 15 years or more before dementia develops. The earliest symptoms are constipation and dizziness from autonomic dysfunction, hyposmia (reduced ability to smell), RBD, anxiety, and depression. RBD may appear years or decades before other symptoms. Memory loss is not always an early symptom.
Manifestations of DLB can be divided into essential, core, and supportive features. Dementia is the essential feature and must be present for diagnosis, while core and supportive features are further evidence in support of diagnosis (see diagnostic criteria below).
Essential feature
A dementia diagnosis is made after cognitive decline progresses to a point of interfering with normal daily activities, or social or occupational function. While dementia is an essential feature of DLB, it does not always appear early on, and is more likely to be present as the condition progresses.
Core features
While specific symptoms may vary, the core features of DLB are fluctuating cognition, alertness or attention; REM sleep behavior disorder; one or more of the cardinal features of parkinsonism, not due to medication or stroke; and repeated visual hallucinations.
The 2017 Fourth Consensus Report of the DLB Consortium determined these to be core features based on the availability of high-quality evidence indicating they are highly specific to the condition.
Fluctuating cognition and alertness
Fluctuations in cognitive function are the most characteristic feature of the Lewy body dementias. They are the most frequent symptom of DLB, and are often distinguishable from those of other dementias by concomitant fluctuations of attention and alertness, described by Tsamakis and Mueller (2021) as "spontaneous variations of cognitive abilities, alertness, or arousal". They are further distinguishable by a "marked amplitude between best and worst performances", according to McKeith (2002). These fluctuations vary in severity, frequency and duration; episodes last anywhere from seconds to weeks, interposed between periods of more normal functioning. When relatively lucid periods coincide with medical appointments, cognitive testing may inaccurately reflect disease severity, with subsequent assessments of cognition showing improvements from baseline.
Unlike the deficits in memory and orientation that are characteristic of Alzheimer disease, the distinct impairments in cognition seen in DLB are most commonly in three domains: attention, executive function, and visuospatial function. These fluctuating impairments are present early in the course of the disease. Individuals with DLB may be easily distracted, have a hard time focusing on tasks, or appear to be "delirium-like", "zoning out", or in states of altered consciousness with spells of confusion, agitation or incoherent speech. They may have disorganized speech and their ability to organize their thoughts may change during the day.
Executive function describes attentional and behavioral controls, memory and cognitive flexibility that aid problem solving and planning. Problems with executive function surface in activities requiring planning and organizing. Deficits can manifest in impaired job performance, inability to follow conversations, difficulties with multitasking, or mistakes in driving, such as misjudging distances or becoming lost.
The person with DLB may experience disorders of wakefulness or sleep disorders (in addition to REM sleep behavior disorder) that can be severe. These disorders include daytime sleepiness, drowsiness or napping more than two hours a day, insomnia, periodic limb movements, restless legs syndrome and sleep apnea.
REM sleep behavior disorder
REM sleep behavior disorder (RBD) is a parasomnia in which individuals lose the paralysis of muscles (atonia) that is normal during rapid eye movement (REM) sleep, and consequently act out their dreams or make other abnormal movements or vocalizations. About 80% of those with DLB have RBD. Abnormal sleep behaviors may begin before cognitive decline is observed, and may appear decades before any other symptoms, often as the first clinical indication of DLB and an early sign of a synucleinopathy.
On autopsy, 94 to 98% of individuals with polysomnography-confirmed RBD have a synucleinopathy—most commonly DLB or Parkinson's disease in about equal proportions. More than three out of four people with RBD are diagnosed with a neurodegenerative condition within ten years, but additional neurodegenerative diagnoses may emerge up to 50 years after RBD diagnosis. RBD may subside over time.
Individuals with RBD may not be aware that they act out their dreams. RBD behaviors may include yelling, screaming, laughing, crying, unintelligible talking, nonviolent flailing, or more violent punching, kicking, choking, or scratching. The reported dream enactment behaviors are frequently violent, and involve a theme of being chased or attacked. People with RBD may fall out of bed or injure themselves or their bed partners, which may cause bruises, fractures, or subdural hematomas. Because people are more likely to remember or report violent dreams and behaviors—and to be referred to a specialist when injury occurs—recall or selection bias may explain the prevalence of violence reported in RBD.
Parkinsonism
Parkinsonism is a clinical syndrome characterized by slowness of movement (called bradykinesia), rigidity, postural instability, and tremor; it is found in DLB and in many other conditions, including Parkinson's disease and Parkinson's disease dementia. Parkinsonism occurs in more than 85% of people with DLB, who may have one or more of these cardinal features, although tremor at rest is less common.
Motor symptoms may include shuffling gait, problems with balance, falls, blank expression, reduced range of facial expression, and low speech volume or a weak voice. Presentation of motor symptoms is variable, but they are usually symmetric, presenting on both sides of the body. Only one of the cardinal symptoms of parkinsonism may be present, and the symptoms may be less severe than in persons with Parkinson's disease.
Visual hallucinations
Up to 80% of people with DLB have visual hallucinations, typically early in the course of the disease. They are recurrent and frequent; may be scenic, elaborate and detailed; and usually involve animated perceptions of animals or people, including children and family members. Examples of visual hallucinations "vary from 'little people' who casually walk around the house, 'ghosts' of dead parents who sit quietly at the bedside, to 'bicycles' that hang off of trees in the back yard".
These hallucinations can sometimes provoke fear, although their content is more typically neutral. In some cases, the person with DLB has insight that the hallucinations are not real. Among those with more disrupted cognition, the hallucinations can become more complex, and they may be less aware that their hallucinations are not real.
Visual misperceptions or illusions are also common in DLB but differ from visual hallucinations. While visual hallucinations occur in the absence of real stimuli, visual illusions occur when real stimuli are incorrectly perceived; for example, a person with DLB may misinterpret a floor lamp for a person.
Supportive features
Supportive features of DLB have less diagnostic weight, but they provide evidence for the diagnosis. Supportive features may be present early in the progression, and persist over time; they are common but they are not specific to the diagnosis. The supportive features are:
marked sensitivity to antipsychotics (neuroleptics);
marked dysautonomia (autonomic dysfunction) in which the autonomic nervous system does not work properly;
hallucinations in senses other than vision (hearing, touch, taste, and smell);
hypersomnia (excessive sleepiness);
hyposmia (reduced ability to smell);
delusions (fixed false beliefs) organized around a common theme;
postural instability, loss of consciousness, and frequent falls;
apathy, anxiety, or depression.
Partly because of loss of cells that release the neurotransmitter dopamine, people with DLB may have neuroleptic malignant syndrome, impairments in cognition or alertness, or irreversible exacerbation of parkinsonism including severe rigidity, and dysautonomia from the use of antipsychotics.
Dysautonomia (autonomic dysfunction) occurs when Lewy pathology affects the peripheral autonomic nervous system (the nerves dealing with the unconscious functions of organs such as the intestines, heart, and urinary tract). The first signs of autonomic dysfunction are often subtle. Manifestations include blood pressure problems such as orthostatic hypotension (significantly reduced blood pressure upon standing) and supine hypertension (significantly elevated blood pressure when lying horizontally); constipation, urinary problems, and sexual dysfunction; loss of or reduced ability to smell; and excessive sweating, drooling, or salivation, and problems swallowing (dysphagia).
Alpha-synuclein deposits can affect cardiac muscle and blood vessels. "Degeneration of the cardiac sympathetic nerves is a neuropathological feature" of the Lewy body dementias, according to Yamada and colleagues. Almost all people with synucleinopathies have cardiovascular dysfunction, although most are asymptomatic. A substantial proportion of individuals with DLB have orthostatic hypotension due to reduced blood flow, which can result in lightheadedness, feeling faint, and blurred vision.
From chewing to defecation, alpha-synuclein deposits affect every level of gastrointestinal function. Almost all persons with DLB have upper gastrointestinal tract dysfunction (such as gastroparesis, delayed gastric emptying) or lower gastrointestinal dysfunction (such as constipation and prolonged stool transit time). Persons with Lewy body dementia almost universally experience nausea, gastric retention, or abdominal distention from delayed gastric emptying. Problems with gastrointestinal function can affect medication absorption. Constipation can present a decade before diagnosis, and is one of the most common symptoms for people with Lewy body dementia. Dysphagia is milder than in other synucleinopathies and presents later. Urinary difficulties (urinary retention, waking at night to urinate, increased urinary frequency and urgency, and over- or underactive bladder) typically appear later and may be mild or moderate. Sexual dysfunction usually appears early in synucleinopathies, and may include erectile dysfunction and difficulty achieving orgasm or ejaculating.
Among the other supportive features, psychiatric symptoms are often present when the individual first comes to clinical attention and are more likely, compared to AD, to cause more impairment. About one-third of people with DLB have depression, and they often have anxiety as well. Anxiety leads to increased risk of falls, and apathy may lead to less social interaction.
Agitation, behavioral disturbances, and delusions typically appear later in the course of the disease. Delusions may have a paranoid quality, involving themes like a house being broken into, infidelity, or abandonment. Individuals with DLB who misplace items may have delusions about theft. Capgras delusion may occur, in which the person with DLB loses knowledge of the spouse, caregiver, or partner's face, and is convinced that an imposter has replaced them. Hallucinations in other modalities are sometimes present, but are less frequent.
Sleep disorders (disrupted sleep cycles, sleep apnea, and arousal from periodic limb movement disorder) are common in DLB and may lead to hypersomnia. Loss of sense of smell may occur several years before other symptoms.
Causes
Like other synucleinopathies, the exact cause of DLB is unknown. No trigger for the build-up of alpha-synuclein deposits in the central nervous system has been conclusively identified. Synucleinopathies are typically caused by interactions of genetic and environmental influences; infectious causes have also been considered, but arguments in their favor are controversial and lacking in support. Most people with DLB do not have affected family members, although occasionally DLB runs in a family. The heritability of DLB is thought to be around 30% (that is, about 70% of disease severity is due to external factors or chance).
There is overlap in the genetic risk factors for DLB, Alzheimer's disease (AD), Parkinson's disease, and Parkinson's disease dementia. The APOE gene has three common variants. One, APOE ε4, is a risk factor for DLB and Alzheimer's disease, whereas APOE ε2 may be protective against both. Mutations in GBA, a gene for a lysosomal enzyme, are associated with both DLB and Parkinson's disease. Rarely, mutations in SNCA, the gene for alpha-synuclein, or LRRK2, a gene for a kinase enzyme, can cause any of DLB, Alzheimer's disease, Parkinson's disease or Parkinson's disease dementia. This suggests some shared genetic pathology may underlie all four diseases.
The greatest risk of developing DLB is being over the age of 50. Having REM sleep behavior disorder or Parkinson's disease confers a higher risk for developing DLB. The risk of developing DLB has not been linked to any specific lifestyle factors. Risk factors for rapid conversion of RBD to a synucleinopathy include impairments in color vision or the ability to smell, mild cognitive impairment, and abnormal dopaminergic imaging.
Pathophysiology
DLB is characterized by the development of abnormal collections of alpha-synuclein protein within diseased brain neurons, manifesting as Lewy bodies and Lewy neurites. When these clumps of protein form, neurons function less optimally and eventually die. Neuronal loss in DLB leads to profound dopamine dysfunction and marked cholinergic pathology; other neurotransmitters might be affected, but less is known about them. Damage in the brain is widespread, and affects many domains of functioning.
Loss of acetylcholine-producing neurons is thought to account for degeneration in memory and learning, while the death of dopamine-producing neurons appears to be responsible for degeneration of behavior, cognition, mood, movement, motivation, and sleep. The extent of Lewy body neuronal damage is a key determinant of dementia in the Lewy body disorders.
The precise mechanisms contributing to DLB are not well understood and are a matter of some controversy. The role of alpha-synuclein deposits is unclear, because individuals with no signs of DLB have been found on autopsy to have advanced alpha-synuclein pathology. The relationship between Lewy pathology and widespread cell death is contentious. It is not known if the pathology spreads between cells or follows another pattern. The mechanisms that contribute to cell death, how the disease advances through the brain, and the timing of cognitive decline are all poorly understood. There is no model to account for the specific neurons and brain regions that are affected.
Autopsy studies and amyloid imaging studies using Pittsburgh compound B (PiB) indicate that tau protein pathology and amyloid plaques, which are hallmarks of AD, are also common in DLB and more common than in Parkinson's disease dementia. Amyloid-beta (Aβ) deposits are found in the tauopathies—neurodegenerative diseases characterized by neurofibrillary tangles of hyperphosphorylated tau protein—but the mechanism underlying dementia is often mixed, and Aβ is also a factor in DLB.
A proposed pathophysiology for RBD implicates neurons in the reticular formation that regulate REM sleep. RBD might appear decades earlier than other symptoms in the Lewy body dementias because these cells are affected earlier, before spreading to other brain regions.
Diagnosis
Dementia with Lewy bodies can only be definitively diagnosed after death with an autopsy of the brain (or in rare familial cases, via a genetic test), so diagnosis of the living is referred to as probable or possible.
Diagnosing DLB can be challenging because of the wide range of symptoms with differing levels of severity in each individual. DLB is often misdiagnosed or, in its early stages, confused with Alzheimer's disease. The majority of individuals with Lewy body dementias receive an inaccurate initial diagnosis—such as Alzheimer's, parkinsonism, other dementias or a psychiatric diagnosis—resulting in reduced support and increased fear and uncertainty, sometimes for many years. Comparing the rates of detection of DLB in autopsy studies to those diagnosed while in clinical care indicates that as many as one in three diagnoses of DLB may be missed. Another complicating factor is that DLB commonly occurs along with Alzheimer's; autopsy reveals that half of people with DLB have some level of changes attributed to AD in their brains, which contributes to the wide-ranging variety of symptoms and diagnostic difficulty.
Living with an uncertain diagnosis and prognosis is a concern expressed by both individuals with DLB and their caregivers; difficulty gaining a diagnosis and differing interactions with healthcare professionals are common experiences, and once diagnosed, there are still difficulties finding a doctor knowledgeable in treating DLB. Despite the difficulty in diagnosis, a prompt diagnosis is important because of the serious risks of sensitivity to antipsychotics and the need to inform both the person with DLB and the person's caregivers about those medications' side effects. The management of DLB is difficult in comparison to many other neurodegenerative diseases, so an accurate diagnosis is important.
Criteria
The 2017 Fourth Consensus Report established diagnostic criteria for probable and possible DLB, recognizing advances in detection since the earlier Third Consensus (2005) version. The 2017 criteria are based on essential, core, and supportive clinical features, and diagnostic biomarkers.
The essential feature is dementia; for a DLB diagnosis, it must be significant enough to interfere with social or occupational functioning.
The four core clinical features (described in the Signs and symptoms section) are fluctuating cognition, visual hallucinations, REM sleep behavior disorder, and signs of parkinsonism. Supportive clinical features are marked sensitivity to antipsychotics; marked autonomic dysfunction; nonvisual hallucinations; hypersomnia (excessive sleepiness); hyposmia (reduced ability to smell); false beliefs and delusions organized around a common theme; postural instability, loss of consciousness and frequent falls; and apathy, anxiety, or depression.
Direct laboratory-measurable biomarkers for DLB diagnosis are not known, but several indirect methods can lend further evidence for diagnosis. The indicative diagnostic biomarkers are: reduced dopamine transporter uptake in the basal ganglia shown on PET or SPECT imaging; low uptake of 123iodine-metaiodobenzylguanidine shown on myocardial scintigraphy; and loss of atonia during REM sleep evidenced on polysomnography. Supportive diagnostic biomarkers (from PET, SPECT, CT, or MRI brain imaging studies or EEG monitoring) are: lack of damage to medial temporal lobe (damage is more likely in AD); reduced occipital activity; and prominent slow-wave activity on EEG.
Probable DLB can be diagnosed when dementia and at least two core features are present, or when one core feature and at least one indicative biomarker are present. Possible DLB can be diagnosed when dementia and only one core feature are present or, if no core features are present, then at least one indicative biomarker is present.
DLB is distinguished from Parkinson's disease dementia by the time frame in which dementia symptoms appear relative to parkinsonian symptoms. DLB is diagnosed when cognitive symptoms begin before or at the same time as parkinsonian motor signs. Parkinson's disease dementia would be the diagnosis when Parkinson's disease is well established before the dementia occurs (the onset of dementia is more than a year after the onset of parkinsonian symptoms). Known as the one-year rule, the distinction is acknowledged to be arbitrary; it recognizes overlap between the conditions along with key differences, while allowing for variations in treatment and prognosis and providing a framework for research.
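The consensus rules just described amount to a small decision procedure. The following is a minimal sketch of that logic in Python; the function and argument names are illustrative rather than taken from any clinical instrument, and a real workup would also weigh the supportive features and exclusions discussed above.

    # Sketch of the 2017 Fourth Consensus decision rules described above.
    # All names are illustrative; this is not a clinical tool.

    def classify_dlb(dementia: bool, core_features: int, indicative_biomarkers: int) -> str:
        # core_features counts: fluctuating cognition, visual hallucinations,
        # RBD, and parkinsonism; indicative_biomarkers counts: reduced dopamine
        # transporter uptake, low MIBG uptake, and REM sleep without atonia.
        if not dementia:
            return "criteria not met (dementia is the essential feature)"
        if core_features >= 2 or (core_features == 1 and indicative_biomarkers >= 1):
            return "probable DLB"
        if core_features == 1 or indicative_biomarkers >= 1:
            return "possible DLB"
        return "criteria not met"

    def lewy_body_dementia_type(dementia_onset_year: float, parkinsonism_onset_year: float) -> str:
        # The arbitrary "one-year rule": dementia beginning more than a year
        # after parkinsonism points to Parkinson's disease dementia; otherwise DLB.
        if dementia_onset_year - parkinsonism_onset_year > 1:
            return "Parkinson's disease dementia"
        return "dementia with Lewy bodies"

Under these rules, dementia with a history of RBD (one core feature) plus REM sleep without atonia on polysomnography (one indicative biomarker), that is classify_dlb(True, 1, 1), returns "probable DLB", consistent with the sleep-study rule noted below under Clinical history and testing.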
DLB is listed in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) as major or mild neurocognitive disorder with Lewy bodies. The differences between the DSM and DLB Consortium diagnostic criteria are: 1) the DSM does not include low dopamine transporter uptake as a supportive feature, and 2) unclear diagnostic weight is assigned to biomarkers in the DSM. Lewy body dementias are classified by the World Health Organization in its ICD-11, the International Statistical Classification of Diseases and Related Health Problems, in chapter 06, as neurodevelopmental disorders, code 6D82.
Clinical history and testing
Diagnostic tests can be used to establish some features of the condition and distinguish them from symptoms of other conditions. Diagnosis may include taking the person's medical history, a physical exam, assessment of neurological function, brain imaging, neuropsychological testing to assess cognitive function, sleep studies, myocardial scintigraphy, or laboratory testing to rule out conditions that may cause symptoms similar to dementia, such as abnormal thyroid function, syphilis, HIV, and vitamin deficiencies.
Typical dementia screening tests used are the mini-mental state examination (MMSE) and the Montreal Cognitive Assessment (MoCA). The pattern of cognitive impairment in DLB is distinct from other dementias, such as AD; the MMSE mainly tests for the memory and language impairments more commonly seen in those other dementias and may be less suited for assessing cognition in the Lewy body dementias, where testing of visuospatial and executive function is indicated. The MoCA may be better suited to assessing cognitive function in DLB, and the Clinician Assessment of Fluctuation scale and the Mayo Fluctuation Composite Score may help understand cognitive decline relative to fluctuations in DLB. For tests of attention, digit span, serial sevens, and spatial span can be used for simple screening, and the Revised Digit Symbol Subtest of the Wechsler Adult Intelligence Scale may show defects in attention that are characteristic of DLB. The Frontal Assessment Battery, Stroop test and Wisconsin Card Sorting Test are used for evaluation of executive function, and there are many other screening instruments available.
If DLB is suspected when parkinsonism and dementia are the only presenting features, PET or SPECT imaging may show reduced dopamine transporter activity. A DLB diagnosis may be warranted if other conditions with reduced dopamine transporter uptake can be ruled out.
RBD is diagnosed either by sleep study recording or, when sleep studies cannot be performed, by medical history and validated questionnaires. In individuals with dementia and a history of RBD, a probable DLB diagnosis can be justified (even with no other core feature or biomarker) based on a sleep study showing REM sleep without atonia because it is so highly predictive. Conditions similar to RBD, like severe sleep apnea and periodic limb movement disorder, must be ruled out. Prompt evaluation and treatment of RBD is indicated when a prior history of violence or injury is present as it may increase the likelihood of future violent dream enactment behaviors. Individuals with RBD may not be able to provide a history of dream enactment behavior, so bed partners are also consulted. The REM Sleep Behavior Disorder Single-Question Screen offers diagnostic sensitivity and specificity in the absence of polysomnography with one question: "Have you ever been told, or suspected yourself, that you seem to 'act out your dreams' while asleep (for example, punching, flailing your arms in the air, making running movements, etc.)?" Because some individuals with DLB do not have RBD, normal findings from a sleep study cannot rule out DLB.
Since 2001, 123iodine-metaiodobenzylguanidine myocardial scintigraphy has been used diagnostically in East Asia (principally Japan), but not in the US; studies validating its use in differential diagnoses are lacking as of 2022. MIBG is taken up by sympathetic nerve endings, such as those that innervate the heart, and is labeled for scintigraphy with radioactive 123iodine. Autonomic dysfunction resulting from damage to nerves in the heart in patients with DLB is associated with lower cardiac uptake of 123iodine-MIBG.
There is no genetic test to determine if an individual will develop DLB and, according to the Lewy Body Dementia Association, genetic testing is not routinely recommended because there are only rare instances of hereditary DLB.
Differential
Many neurodegenerative conditions share cognitive and motor symptoms with dementia with Lewy bodies. The differential diagnosis includes Alzheimer's disease; such synucleinopathies as Parkinson's disease dementia, Parkinson's disease, and multiple system atrophy; vascular dementia; and progressive supranuclear palsy, corticobasal degeneration, and corticobasal syndrome.
The symptoms of DLB are easily confused with delirium, or more rarely with psychosis; prodromal subtypes of delirium-onset DLB and psychiatric-onset DLB have been proposed. Mismanagement of delirium is a particular concern because of the risks to people with DLB associated with antipsychotics. A careful examination for features of DLB is warranted in individuals with unexplained delirium. PET or SPECT imaging showing reduced dopamine transporter uptake can help distinguish DLB from delirium.
Lewy pathology affects the peripheral autonomic nervous system; autonomic dysfunction is observed less often in AD, frontotemporal, or vascular dementias, so its presence can help differentiate them. MRI scans almost always show abnormalities in the brains of people with vascular dementia, which can begin suddenly.
Alzheimer's disease
DLB is distinguishable from AD even in the prodromal phase. Short-term memory impairment is seen early in AD and is a prominent feature, while fluctuating attention is uncommon; impairment in DLB is more often seen first as fluctuating cognition. In contrast to AD—in which the hippocampus is among the first brain structures affected, and episodic memory loss related to encoding of memories is typically the earliest symptom—memory impairment occurs later in DLB. People with amnestic mild cognitive impairment (in which memory loss is the main symptom) may progress to AD, whereas those with non-amnestic mild cognitive impairment (which has more prominent impairments in language, visuospatial, and executive domains) are more likely to progress towards DLB. Memory loss in DLB has a different progression from AD because frontal structures are involved earlier, with later involvement of temporoparietal brain structures. Verbal memory is not as severely affected as in AD.
While deficits in planning and organization were found in 74% of people with autopsy-confirmed DLB, they appeared in only 45% of people with AD. Visuospatial processing deficits are present in most individuals with DLB, and they show up earlier and are more pronounced than in AD. Hallucinations typically occur early in the course of DLB, are less common in early AD, but usually occur later in AD. AD pathology frequently co-occurs in DLB and is associated with more rapid decline; cerebrospinal fluid (CSF) testing may reveal an "Alzheimer's pattern" of higher tau and lower amyloid beta.
PET or SPECT imaging can be used to detect reduced dopamine transporter uptake and distinguish AD from DLB. Severe atrophy of the hippocampus is more typical of AD than DLB. Before dementia develops (during the mild cognitive impairment phase), MRI scans show normal hippocampal volume. After dementia develops, MRI shows more atrophy among individuals with AD, and a slower reduction in volume over time among people with DLB than those with AD. Compared to people with AD, FDG-PET brain scans in people with DLB often show a cingulate island sign.
In East Asia, particularly Japan, 123iodine-MIBG myocardial scintigraphy is used in the differential diagnosis of DLB and AD, because reduced labeling of cardiac nerves is seen only in Lewy body disorders. Other indicative and supportive biomarkers are useful in distinguishing DLB and AD (preservation of medial temporal lobe structures, reduced occipital activity, and slow-wave EEG activity).
Synucleinopathies
Dementia with Lewy bodies and Parkinson's disease dementia are clinically similar after dementia occurs in Parkinson's disease. Delusions in Parkinson's disease dementia are less common than in DLB, and persons with Parkinson's disease are typically less caught up in their visual hallucinations than those with DLB. There is a lower incidence of tremor at rest in DLB than in Parkinson's disease, and signs of parkinsonism in DLB are more symmetrical. In multiple system atrophy, autonomic dysfunction appears earlier and is more severe, and is accompanied by uncoordinated movements, while visual hallucinations and fluctuating cognition are less common than in DLB. Urinary difficulty is one of the earliest symptoms with multiple system atrophy, and is often severe.
Frontotemporal dementias
Corticobasal syndrome, corticobasal degeneration and progressive supranuclear palsy are frontotemporal dementias with features of parkinsonism and impaired cognition. Similar to DLB, imaging may show reduced dopamine transporter uptake. Corticobasal syndrome and degeneration, and progressive supranuclear palsy, are usually distinguished from DLB by history and examination. Motor movements in corticobasal syndrome are asymmetrical. There are differences in posture, gaze and facial expressions in the most common variants of progressive supranuclear palsy, and falling backwards is more common relative to DLB. Visual hallucinations and fluctuating cognition are unusual in corticobasal degeneration and progressive supranuclear palsy.
Management
Palliative care is offered to ameliorate symptoms, but there are no medications that can slow, stop, or improve the relentless progression of the disease. No medications for DLB are approved by the US Food and Drug Administration (FDA) as of 2023, although donepezil is licensed in Japan and the Philippines for the treatment of DLB. As of 2020, there has been little study on the best management for non-motor symptoms such as sleep disorders and autonomic dysfunction; most information on management of autonomic dysfunction in DLB is based on studies of people with Parkinson's disease.
Management can be challenging because of the need to balance treatment of different symptoms: cognitive dysfunction, neuropsychiatric features, impairments related to the motor system, and other nonmotor symptoms. Individuals with DLB have widely different symptoms that fluctuate over time, and treating one symptom can worsen another; suboptimal care can result from a lack of coordination among the physicians treating different symptoms. A multidisciplinary approach—going beyond early and accurate diagnosis to include educating and supporting the caregivers—is favored.
Medication
Pharmacological management of DLB is complex because of adverse effects of medications and the wide range of symptoms to be treated (cognitive, motor, neuropsychiatric, autonomic, and sleep). Anticholinergic and dopaminergic agents can have adverse effects or result in psychosis in individuals with DLB, and a medication that addresses one feature might worsen another. For example, acetylcholinesterase inhibitors (AChEIs) for cognitive symptoms can lead to complications in dysautonomia features; treatment of movement symptoms with dopamine agonists may worsen neuropsychiatric symptoms; and treatment of hallucinations and psychosis with antipsychotics may worsen other symptoms or lead to a potentially fatal reaction.
Extreme caution is required in the use of antipsychotic medication in people with DLB because of their sensitivity to these agents. Severe and life-threatening reactions occur in almost half of people with DLB, and can be fatal after a single dose. Antipsychotics with D2 dopamine receptor-blocking properties are used only with great caution. According to Boot (2013), "electing not to use neuroleptics is often the best course of action". People with Lewy body dementias who take neuroleptics are at risk for neuroleptic malignant syndrome, a life-threatening illness. There is no evidence to support the use of antipsychotics to treat the Lewy body dementias, and they carry the additional risk of stroke when used in the elderly with dementia.
Medications (including tricyclic antidepressants and treatments for urinary incontinence) with anticholinergic properties that cross the blood–brain barrier can cause memory loss. The antihistamine medication diphenhydramine (Benadryl), sleep medications like zolpidem, and benzodiazepines may worsen confusion or neuropsychiatric symptoms. Some general anesthetics may cause confusion or delirium upon waking in persons with Lewy body dementias, and may result in permanent decline.
Cognitive symptoms
There is strong evidence for the use of AChEIs to treat cognitive problems; these medications include rivastigmine and donepezil. Both are first-line treatments in the UK. Even when the AChEIs do not lead to improvement in cognitive symptoms, people taking them may have less deterioration overall, although there may be adverse gastrointestinal effects. The use of these medications can reduce the burden on caregivers and improve activities of daily living for the individual with DLB. The AChEIs are initiated carefully as they may aggravate autonomic dysfunction or sleep behaviors. There is less evidence for the efficacy of memantine in DLB, but it may be used alone or with an AChEI because of its low side effect profile. Anticholinergic drugs are avoided because they worsen cognitive symptoms.
To improve daytime alertness, there is mixed evidence for the use of stimulants such as methylphenidate and dextroamphetamine; although worsening of neuropsychiatric symptoms is not common, they can increase the risk of psychosis. Modafinil and armodafinil may be effective for daytime sleepiness.
Motor symptoms
Motor symptoms in DLB appear to respond somewhat less to medications used to treat Parkinson's disease, like levodopa, and these medications can increase neuropsychiatric symptoms. Almost one out of every three individuals with DLB develops psychotic symptoms from levodopa. If such medications are needed for motor symptoms, cautious introduction with slow increases to the lowest possible dose may help avoid psychosis.
The anticonvulsant zonisamide has been approved in Japan since 2009 for treating Parkinson's disease and since 2018 to treat parkinsonism in DLB. There is high certainty according to the GRADE certainty rating approach that it is effective for treating motor symptoms in DLB.
Neuropsychiatric symptoms
Neuropsychiatric symptoms of DLB (aggression, anxiety, apathy, delusions, depression and hallucinations) do not always require treatment. The first line of defense in decreasing visual hallucinations is to reduce the use of dopaminergic drugs, which can worsen hallucinations. If new neuropsychiatric symptoms appear, the use of medications (such as anticholinergics, tricyclic antidepressants, benzodiazepines and opioids) that might be contributing to these symptoms is reviewed.
Among the AChEIs, donepezil and rivastigmine can help reduce neuropsychiatric symptoms and improve the frequency and severity of hallucinations in the less severe stages of DLB. For treating psychosis and agitation in DLB, there is low evidence for memantine, olanzapine and aripiprazole, and very low evidence for the efficacy of quetiapine. Although clozapine has been shown effective in Parkinson's disease, there is very low evidence for its use to treat visual hallucinations in DLB, and its use requires regular blood monitoring.
Apathy may be treated with AChEIs, and they may also reduce hallucinations, delusions, anxiety and agitation. Most medications to treat anxiety and depression have not been adequately investigated for DLB. Antidepressants may affect sleep and worsen RBD. Mirtazapine and SSRIs can be used to treat depression, depending on how well they are tolerated, and guided by general advice for the use of antidepressants in dementia. Antidepressants with anticholinergic properties may worsen hallucinations and delusions. People with Capgras syndrome may not tolerate AChEIs.
Sleep disorders
The first steps in managing sleep disorders are to evaluate the use of medications that impact sleep and provide education about sleep hygiene. Sleep medications are carefully evaluated for each individual as they carry increased risk of falls, increased daytime sleepiness, and worsening cognition.
Injurious dream enactment behaviors are a treatment priority. Frequency and severity of RBD may be lessened by treating sleep apnea, if it is present. RBD may be treated with melatonin or clonazepam. Melatonin may be more helpful in preventing injuries, and it offers a safer alternative, because clonazepam can produce deteriorating cognition, and worsen sleep apnea.
Memantine is useful for some people. Modafinil may be used for hypersomnia, but no trials support its use in DLB. Antidepressants (SSRIs, SNRIs, tricyclics, and MAOIs), AChEIs, beta blockers, caffeine, and tramadol may worsen RBD.
Autonomic symptoms
Decreasing the dosage of dopaminergic or atypical antipsychotic drugs may be needed with orthostatic hypotension, and high blood pressure drugs can sometimes be stopped. When non-pharmacological treatments for orthostatic hypotension have been exhausted, fludrocortisone, droxidopa, or midodrine are options, but these drugs have not been specifically studied for DLB as of 2020. Delayed gastric emptying can be worsened by dopaminergic medications, and constipation can be worsened by opiates and anticholinergic medications. Muscarinic antagonists used for urinary symptoms might worsen cognitive impairment in people with Lewy body dementias.
Other
There is no high-quality evidence for non-pharmacological management of DLB, but some interventions have been shown effective for addressing similar symptoms that occur in other dementias. For example, organized activities, music therapy, physical activity and occupational therapy may help with psychosis or agitation, while exercise and gait training can help with motor symptoms. Cognitive behavioral therapy can be tried for depression or hallucinations, although there is no evidence for its use in DLB. Cues can be used to help with memory retrieval.
For autonomic dysfunction, several non-medication strategies may be helpful. Dietary changes include avoiding meals high in fat and sugary foods, eating smaller and more frequent meals, after-meal walks, and increasing fluids or dietary fiber to treat constipation. Stool softeners and exercise also help with constipation. Excess sweating can be helped by avoiding alcohol and spicy foods, and using cotton bedding and loose fitting clothing.
Physical exercise in a sitting or recumbent position, and exercise in a pool, can help maintain conditioning. Compression stockings and elevating the head of the bed may also help, and increasing fluid intake or table salt can be tried to reduce orthostatic hypotension. To lessen the risk of fractures in individuals at risk for falls, bone mineral density screening and testing of vitamin D levels are used, and caregivers are educated on the importance of preventing falls. Physiotherapy has been shown helpful for Parkinson's disease dementia, but as of 2020, there is no evidence to support physical therapy in people with DLB.
Caregiving
Demands placed on caregivers are higher than in AD because of the neuropsychiatric symptoms associated with DLB. Contributing factors to the caregiver burden in DLB are emotional fluctuations, apathy, psychosis, aggression, agitation, and night-time behaviors such as parasomnias, that lead to a loss of independence earlier than in AD. Caregivers may experience depression and exhaustion, and they may need support from other people. Other family members who are not present in the daily caregiving may not observe the fluctuating behaviors or recognize the stress on the caregiver, and conflict can result when family members are not supportive.
Teaching caregivers how to manage neuropsychiatric symptoms (such as agitation and psychosis) is recommended, although education for caregivers has not been studied as thoroughly as in AD or Parkinson's disease. Caregiver education reduces not only distress for the caregiver, but symptoms for the individual with dementia. Caregiver training, watchful waiting, identifying sources of pain, and increasing social interaction can help minimize agitation. Individuals with dementia may not be able to communicate that they are in pain, and pain is a common trigger of agitation.
Visual hallucinations associated with DLB create a particular burden on caregivers. Caregivers can be educated to distract or change the subject when confronted with hallucinations, and that this is more effective than arguing over the reality of the hallucination. Coping strategies may help and are worth trying, even though there is no evidence for their efficacy. These strategies include having the person with DLB look away or look at something else, focus on or try to touch the hallucination, wait for it to go away on its own, and speak with others about the visualization. Delusions and hallucinations may be reduced by increasing lighting in the evening, and making sure there is no light at night when the individual with DLB is sleeping.
With the increased risk of side effects from antipsychotics for people with DLB, educated caregivers are able to act as advocates for the person with DLB. If evaluation or treatment in an emergency room is needed, the caregiver may be able to explain the risks associated with neuroleptic use for persons with DLB. Medical alert bracelets or notices about medication sensitivity are available and can save lives.
Individuals and their caregivers can be counselled about the need to improve bedroom safety for RBD symptoms. Sleep-related injuries from falling or jumping out of bed can be avoided by lowering the height of the bed, placing a mattress next to the bed to soften the impact of a fall, and removing sharp objects from around the bed. Sharp surfaces near the bed can be padded, bed alarm systems may help with sleepwalking, and bed partners may find it safer to sleep in another room. According to St Louis and Boeve, firearms should be locked away, out of the bedroom.
A home safety assessment can be done when there is risk of falling. Handrails and shower chairs can help avoid falls. Driving ability may be impaired early in DLB because of visual hallucinations, movement issues related to parkinsonism, and fluctuations in cognitive ability, and at some point it becomes unsafe for the person to drive. Driving ability is assessed as part of management, and family members generally determine when driving privileges are removed.
Prognosis
As of 2021, no cure is known for DLB. The prognosis for DLB has not been well studied; early studies had methodological limitations, such as small sample size and selection bias. Relative to AD and other dementias, DLB generally leads to higher rates of disability, hospitalization and institutionalization, and lower life expectancy and quality of life, with increased costs of care. Depression, apathy, and visual hallucinations contribute to the reduced quality of life. Decline may be more rapid when the APOE ε4 allele is present, or when AD—or its biomarkers—is also present. The severity of orthostatic hypotension also predicts a worse prognosis. Visuospatial deficits early in the course of DLB were thought to be a predictor of rapid decline, but more recent studies did not find an association.
The trajectory of cognitive decline in DLB is difficult to establish because of the high rate of missed diagnoses; the typical delay of a year in the US, and 1.2 years in the UK, for diagnosis of DLB means that a baseline from which deterioration can be measured is often absent. Compared to AD, which is better studied, memory is thought to be retained longer, while verbal fluency may be lost faster, but the most common tools used to assess cognition may miss the most common cognitive deficits in DLB, and better studies are needed.
There are more neuropsychiatric symptoms in DLB than AD, and they may emerge earlier, so those with DLB may have a less favorable prognosis, with more rapid cognitive decline, more admissions to residential care, and a lower life expectancy. An increased rate of hospitalization compared to AD is most commonly related to hallucinations and confusion, followed by falls and infection.
Life expectancy is difficult to predict, and limited study data are available. Survival may be defined from the point of disease onset, or from the point of diagnosis. There is wide variability in survival times, as DLB may be rapidly or slowly progressing. A 2019 meta-analysis found an average survival time after diagnosis of 4.1 years—indicating survival in DLB 1.6 years less than after a diagnosis of Alzheimer's. A 2017 review found survival from disease onset between 5.5 and 7.7 years, and survival from diagnosis between 1.9 and 6.3 years. The difference in survival between AD and DLB could be because DLB is harder to diagnose, and may be diagnosed later in the course of the disease. An online survey with 658 respondents found that, after diagnosis, more than 10% died within a year, 10% lived more than 7 years, and some lived more than 10 years; some people with Lewy body dementias live for 20 years. Shorter life expectancy is more likely when visual hallucinations, abnormal gait, and variable cognition are present early on.
Fear and anxiety feature strongly for both people with Lewy body dementia and their caregivers; a range of emotional responses to living with Lewy bodies includes fear of hallucinations, fear of falls and frightening nightmares as a result of RBD, and being fearful of the effects of tiredness and fatigue. The symptoms of fluctuations, depression, delirium and violence are also experienced as frightening. An immense amount of physical support from friends and family is often required to maintain social and supporting relationships. Individuals with Lewy body dementias describe feeling a burden in the wider social context, as they reduce attending social events due to their increasing physical needs. Frequently reported burden dimensions include personal strain and interference with personal life, which can lead to relationship dissatisfaction and resentment.
In the late phase of the disease, people may be unable to care for themselves. Falls—caused by many factors including parkinsonism, dysautonomia, and frailness—increase morbidity and mortality. Failure to thrive and aspiration pneumonia, a complication of dysphagia (difficulty swallowing) that results from dysautonomia, commonly cause death among people with the Lewy body dementias. Cardiovascular disease and sepsis are also common causes of death.
Epidemiology
As of 2021, the Lewy body dementias are, as a group, the second most common form of neurodegenerative dementia after AD. DLB itself is one of the three most common types of dementia, along with AD and vascular dementia.
The diagnostic criteria for DLB before 2017 were highly specific, but not very sensitive, so that more than half of cases were missed historically. Dementia with Lewy bodies was under-recognized as of 2021, and there is little data on its epidemiology. The incidence and prevalence of DLB are not known accurately, but estimates are increasing with better recognition of the condition since 2017.
About 0.4% of those over the age of 65 are affected with DLB, and on the order of one per 1,000 people develop the condition each year. Symptoms usually appear between the ages of 50 and 80 (median 76), and it is not uncommon for it to be diagnosed before the age of 65.
DLB is thought to be slightly more common in men than women, but this finding has been challenged and is inconsistent across studies. Women may be over-represented in community samples and under-represented in clinical populations, where RBD is more frequently diagnosed in men; the diagnosis appears to have a higher prevalence for men in those under 75, while women appear to be diagnosed later and with greater cognitive impairment. Studies in Japan, France and Britain show a more equal prevalence between men and women than in the US.
An estimated 10 to 15% of diagnosed dementias are Lewy body type, but estimates range as high as 23% for those in clinical studies. A French study found an incidence among persons 65 years and older almost four times higher than that of a US study, but the US study may have excluded people with only mild or no parkinsonism, while the French study screened for parkinsonism. Neither of the studies assessed systematically for RBD, so DLB may have been underdiagnosed in both studies. A door-to-door study in Japan found a prevalence of 0.53% for persons 65 and older, and a Spanish study found similar results.
History
Frederic Lewy (1885–1950) was the first to discover the abnormal protein deposits in the early 1900s. In 1912, studying Parkinson's disease (paralysis agitans), he described findings of these inclusion bodies in the vagus nerve, the nucleus basalis of Meynert and other brain regions. In 1923, he published a book, The Study on Muscle Tone and Movement. Including Systematic Investigations on the Clinic, Physiology, Pathology, and Pathogenesis of Paralysis agitans, and, except for one brief paper a year later, never mentioned his findings again.
In 1961, Okazaki et al. published an account of diffuse Lewy-type inclusions associated with dementia in two autopsied cases. Dementia with Lewy bodies was fully described in an autopsied case by Japanese psychiatrist and neuropathologist Kenji Kosaka in 1976. Kosaka first proposed the term Lewy body disease four years later, based on 20 autopsied cases. DLB was thought to be rare until it became easier to diagnose in the 1980s after the discovery of alpha-synuclein immunostaining that highlighted Lewy bodies in post mortem brains. Kosaka and colleagues described thirty-four more cases in 1984, which were mentioned along with four UK cases in 1987 in the journal Brain, bringing attention to the Japanese work in the Western world. A year later, the first general description of diffuse Lewy body disease was published.
In the 1990s, with Japanese, UK, and US researchers finding that DLB was a common dementia, there were still no available diagnostic guidelines, with each group using different terminology. The different groups of researchers began to realize that a collaborative approach was needed if research was to advance. The DLB Consortium was established, and, in 1996, the term dementia with Lewy bodies was agreed upon, and the first criteria for diagnosing DLB were elaborated.
Two 1997 discoveries highlighted the importance of Lewy body inclusions in neurodegenerative processes: a mutation in the SNCA gene that encodes the alpha-synuclein protein was found in kindreds with Parkinson's disease, and Lewy bodies and neurites were found to be immunoreactive for alpha-synuclein. Thus, alpha-synuclein aggregation was established as the primary building block of the synucleinopathies.
Between 1995 and 2005, the DLB Consortium issued three consensus reports on DLB. DLB was included in the fourth text revision of the DSM (DSM-IV-TR, published in 2000) under "Dementia due to other general medical conditions". In the 2010s, the possibility of a genetic basis for LBD began to emerge. The Fourth Consensus Report was issued in 2017, giving increased diagnostic weighting to RBD and myocardial scintigraphy.
Society and culture
The British author and poet Mervyn Peake died in 1968 and was diagnosed posthumously as a probable case of DLB in a 2003 study published in JAMA Neurology. Based on signs of progressive deterioration, fluctuating cognitive decline, deterioration in visuospatial function, declining attention span, and visual hallucinations and delusions evident in his work and letters, his may be the earliest known case where DLB was found to have been the likely cause of death.
At the time of his suicide on August 11, 2014, Robin Williams, the American actor and comedian, had been diagnosed with Parkinson's disease. According to his widow, Williams had experienced depression, anxiety, and increasing paranoia. His widow said that his autopsy found diffuse Lewy body disease, while the autopsy used the term diffuse Lewy body dementia. Dennis Dickson, a spokesperson for the Lewy Body Dementia Association, clarified the distinction by stating that diffuse Lewy body dementia is more commonly called diffuse Lewy body disease and refers to the underlying disease process. According to Dickson, "Lewy bodies are generally limited in distribution, but in DLB, the Lewy bodies are spread widely throughout the brain, as was the case with Robin Williams." Ian G. McKeith, professor and researcher of Lewy body dementias, commented that Williams' symptoms and autopsy findings were explained by DLB.
Research directions
The identification of prodromal biomarkers for DLB will enable treatments to begin sooner, improve the ability to select subjects and measure efficacy in clinical trials, and help families and clinicians plan for early interventions and awareness of potential adverse effects from the use of antipsychotics. Criteria were established in 2020 to help researchers better recognize DLB in the pre-dementia phase. Three syndromes of prodromal DLB have been proposed: 1) mild cognitive impairment with Lewy bodies (MCI-LB); 2) delirium-onset DLB; and 3) psychiatric-onset DLB. The three early syndromes may overlap. As of 2020, the DLB Diagnostic Study Group's position is that criteria for MCI-LB can be recommended, but that it remains difficult to distinguish delirium-onset and psychiatric-onset DLB without better biomarkers. Nonetheless, severe late-onset psychiatric disorders can be an indication to consider Lewy body dementia, and unexplained delirium raises the possibility of prodromal DLB.
The diagnosis of DLB is made using the DLB Consortium criteria, but a 2017 study of skin samples from 18 people with DLB found that all of them had deposits of phosphorylated alpha-synuclein, while none of the controls did, suggesting that skin samples offer diagnostic potential. Other potential biomarkers under investigation are quantitative electroencephalography, imaging examination of brain structures, and measures of synucleinopathy in CSF. While commercial skin biopsy tests for DLB are available in the US, and the FDA has given a 'breakthrough device' authorization for CSF testing, these tests are not widely available and their role in clinical practice has not been established as of 2022. Other tests to detect alpha-synuclein with blood tests are under study as of 2021.
Cognitive training, deep brain stimulation and transcranial direct-current stimulation have been studied more in Parkinson's and Alzheimer's disease than they have in dementia with Lewy bodies, and all are potential therapies for DLB. Four clinical trials for treating parkinsonian symptoms in DLB have been completed as of 2021, but more studies are needed to assess risk vs. benefits, adverse effects, and longer-term therapeutic protocols.
Strategies for future interventions involve modifying the course of the disease using immunotherapy, gene therapy, and stem cell therapy, and reducing amyloid beta and alpha-synuclein accumulation. Therapies under study as of 2019 aim to reduce brain levels of alpha-synuclein (with the pharmaceuticals ambroxol, NPT200-11, and E2027), or to use immunotherapy to reduce widespread neuroinflammation resulting from alpha-synuclein deposits.
Notes
References
Works cited
Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
Further reading
– lists of typical and atypical antipsychotics
External links
Aging-associated diseases
Ailments of unknown cause
Geriatrics
Lewy body dementia
Cognitive disorders | Dementia with Lewy bodies | Biology | 12,977 |
13,857,654 | https://en.wikipedia.org/wiki/Eutely | Eutelic organisms have a fixed number of somatic cells when they reach maturity, the exact number being relatively constant for any one species. This phenomenon is also referred to as cell constancy. Development proceeds by cell division until maturity; further growth occurs via cell enlargement only. This growth is known as auxetic growth. It is shown by members of the now obsolete phylum Aschelminthes. In some cases, individual organs show eutelic properties while the organism itself does not.
Background
In 1909, Eric Martini coined the term eutely to describe the idea of cell constancy and to give the literature a term for organisms with a fixed number and arrangement of cells and tissues. Since its introduction in the early 1900s, textbooks and theories of cytology and ontogeny have not used the term consistently. Advances in the study of eutely have been made chiefly by morphologists.
The study of eutelic organisms has proved challenging, as most eutelic organisms are microscopic. There is also potential for error in cell counting (often done with an automated cell counter) and in observation when larger organisms have numerous cells. In organisms of small size, errors in the examination and interpretation of units may entirely invalidate reconstructions and deductions. Investigation of most eutelic organisms is therefore carried out with intense scrutiny and review.
There are two distinct classes of organisms which display eutely:
Eutelic organisms whose somatic cells show a fixed, or complete pattern of cell and tissue number and arrangement
Eutelic organisms whose somatic cells show a limited, or incomplete pattern of cell and tissue number and arrangement
Examples
Eutely has been confirmed to varying degrees across diverse branches of the tree of life. Examples include rotifers, many species of nematodes (including Ascaris and Caenorhabditis elegans, whose male individuals have 1,033 cells), tardigrades, larvaceans and dicyemida. Cell constancy has also been seen among arthropods, specifically within sensory and nervous organs. Instances of partial cell constancy have been discovered in various insects and larvae. Annelids provide evidence of constancy in the number and arrangement of cells in the larvae of various species and in certain nervous cells of leeches. The phyla Rotifera and Gastrotricha are thought to show absolute cell constancy.
Preliminary studies of nematodes led scientists to believe that only single organs of nematodes showed eutely. However, later evidence established complete cell constancy for the tissues of multiple nematode forms. Within the phylum Acanthocephala, various degrees of constancy have been studied, and constancy of cell number and arrangement has been seen in at least one family. Within the Platyhelminthes, the subphylum Turbellaria displays evidence of constancy. Organisms within the class Trematoda display constant gland cell numbers, as do the epithelial cells of a few miracidia.
Phylogeny
Since eutelic organisms display such wide variety and diversity in lineage and ancestry, no attempt has yet been made to establish a phylogenetic relationship among them. Previous researchers have made efforts to determine the relationship between trematodes and rotifers, as their complete constancy suggests a close relatedness. Among the most primitive protozoan species, close to the point where animals differentiated from plants, certain microscopic flagellates may provide clues to how cell constancy in animals developed. These organisms are able to establish colonies with fixed numbers of cells, and this quality, the development of constant cell numbers, is assumed to have been passed on to all subsequent metazoan groups. However, the trait is thought to have been lost when new conditions or more influential genes were introduced to the developmental program.
Hydatina senta
Until 2001, the roundworm species Caenorhabditis elegans was considered to be the model organism for complete cell constancy. However, research has revealed that in the epidermis of these organisms, as the mean cell number increased, so did the variance in cell number within the species. These studies revealed that variability in most taxa assumed to be eutelic is not abnormally low. Across a variety of multicellular eutelic taxa, the relationship between mean cell number and cell number variation was found to follow a power law with an exponent of 2.
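On the usual reading of a variance-mean power law with exponent 2, the relation takes the form (an interpretive sketch, not stated explicitly in the studies cited):

\operatorname{Var}(N) \approx a\,\langle N \rangle^{2}, \qquad a > 0,

so that the coefficient of variation, \sqrt{\operatorname{Var}(N)}/\langle N \rangle \approx \sqrt{a}, stays roughly constant rather than shrinking as cell number grows.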
Hydatina senta (phylum Rotifera, order Bdelloidea) is a species of rotifer which demonstrates the most complete cell constancy of any species studied before 1912. Studies have revealed 958 somatic cells in female Hydatina. Hydatina somatic cell nuclei occupy distinct spatial zones and are easily counted and compared with other counts from the same species. The cells of the gastric glands and the vitellarium of Hydatina were examined, counted, and statistically analyzed; these cells were chosen because their prominent nuclei aided counting. All gastric glands contained six nuclei with no variation from that value, and of 770 vitellaria studied, 767 showed eight nuclei, two showed ten nuclei, and one showed twelve. It was concluded, however, that these variant cells were in a phase of senescence before returning to the original complete cell count.
Incomplete constancy
Most examples of eutelic organisms provide no definite proof of absolute constancy or inconstancy of cell number and arrangement. In many species, the number of cells differs slightly between individuals.
Eutelic response to injury
In all organisms displaying cell constancy, mature cells do not undergo nuclear division by mitosis. Embryonic cells are able to undergo mitosis; however, this capacity is lost when cells become differentiated. There is no evidence that the ability is ever regained, even after injury, which normally triggers mitosis and cell regeneration. In 1927, a scientist named Jurczik observed that when the arms of the rotifer Stephanoceros were removed, they did not regenerate, and he attributed this to the failure of the cells to divide mitotically. A 1922 study of Hydatina senta and Acanthocephala by histologist Harley J. Van Cleave at the University of Illinois revealed physiological and morphological adjustments of nucleus-cytoplasm intracellular protein interactions. Some of the nuclei studied showed abnormal, elongate shapes. Van Cleave concluded that the change in nuclear shape and form reflects morphological readjustments of nuclear surface proteins that compensate for physiological changes leading to a phase of senescence. This nuclear surface change has been proposed to result from mechanical division or fragmentation of the vitellaria's original cells by microscopic mechanisms yet to be discovered. The state of senescence is presumed to be a readjustment stage on the organism's way back to its absolute constancy state or a prelude to cell death.
References
External links
Listing of eutelic animals and other eutelic websites
Cell biology | Eutely | Biology | 1,468 |
2,692,641 | https://en.wikipedia.org/wiki/Psi%20Tauri | Psi Tauri, which is Latinized from ψ Tauri, is a solitary star in the zodiac constellation of Taurus. It has a yellow-white hue and is visible to the naked eye with an apparent visual magnitude of +5.22. The distance to this system, as determined from its annual parallax shift as seen from the Earth, is 90 light years. It is drifting further away with a radial velocity of +9 km/s.
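For orientation, the standard parallax-distance relation applied to the quoted 90 light-years (a derived consistency check, not a measured catalogue value) gives:

d \approx 90\,\mathrm{ly} \times \frac{1\,\mathrm{pc}}{3.26\,\mathrm{ly}} \approx 27.6\,\mathrm{pc},
\qquad
p \approx \frac{1''}{d/\mathrm{pc}} \approx \frac{1''}{27.6} \approx 0.036'' \approx 36\,\mathrm{mas}.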
This object is an F-type main sequence star with a stellar classification of F1 V, which indicates it is undergoing core hydrogen fusion. It is about 1.4 billion years old and is spinning with a projected rotational velocity of 45 km/s. The star has 1.6 times the mass and radius of the Sun. It is radiating 4.8 times the Sun's luminosity from its photosphere at an effective temperature of 7,088 K.
References
F-type main-sequence stars
Tauri, Psi
Taurus (constellation)
Durchmusterung objects
Tauri, 042
025867
019205
1269 | Psi Tauri | Astronomy | 228 |
1,845,113 | https://en.wikipedia.org/wiki/Locally%20compact%20quantum%20group | In mathematics and theoretical physics, a locally compact quantum group is a relatively new C*-algebraic approach toward quantum groups that generalizes the Kac algebra, compact-quantum-group and Hopf-algebra approaches. Earlier attempts at a unifying definition of quantum groups using, for example, multiplicative unitaries have enjoyed some success but have also encountered several technical problems.
One of the main features distinguishing this new approach from its predecessors is the axiomatic existence of left and right invariant weights. This gives a noncommutative analogue of left and right Haar measures on a locally compact Hausdorff group.
Definitions
Before we can even begin to properly define a locally compact quantum group, we first need to define a number of preliminary concepts and also state a few theorems.
Definition (weight). Let A be a C*-algebra, and let A⁺ denote the set of positive elements of A. A weight on A is a function φ : A⁺ → [0, ∞] such that
φ(x + y) = φ(x) + φ(y) for all x, y ∈ A⁺, and
φ(r x) = r φ(x) for all r ∈ [0, ∞) and x ∈ A⁺ (with the convention 0 · ∞ = 0).
Some notation for weights. Let φ be a weight on a C*-algebra A. We use the following notation:
M⁺_φ = { x ∈ A⁺ : φ(x) < ∞ }, which is called the set of all positive φ-integrable elements of A.
N_φ = { x ∈ A : φ(x* x) < ∞ }, which is called the set of all φ-square-integrable elements of A.
M_φ = span { y* x : x, y ∈ N_φ }, which is called the set of all φ-integrable elements of A.
Types of weights. Let φ be a weight on a C*-algebra A.
We say that φ is faithful if and only if φ(x) ≠ 0 for each non-zero x ∈ A⁺.
We say that φ is lower semi-continuous if and only if the set { x ∈ A⁺ : φ(x) ≤ λ } is a closed subset of A⁺ for every λ ∈ [0, ∞).
We say that φ is densely defined if and only if M⁺_φ is a dense subset of A⁺, or equivalently, if and only if either N_φ or M_φ is a dense subset of A.
We say that is proper if and only if it is non-zero, lower semi-continuous and densely defined.
Definition (one-parameter group). Let A be a C*-algebra. A one-parameter group on A is a family α = (α_t)_{t ∈ ℝ} of *-automorphisms of A that satisfies α_s ∘ α_t = α_{s+t} for all s, t ∈ ℝ. We say that α is norm-continuous if and only if for every x ∈ A, the mapping ℝ → A defined by t ↦ α_t(x) is continuous.
Definition (analytic extension of a one-parameter group). Given a norm-continuous one-parameter group α on a C*-algebra A, we are going to define an analytic extension of α. For each z ∈ ℂ, let
S(z) = { y ∈ ℂ : Im(y) lies between 0 and Im(z) },
which is a horizontal strip in the complex plane. We call a function f : S(z) → A norm-regular if and only if the following conditions hold:
It is analytic on the interior of S(z), i.e., for each y₀ in the interior of S(z), the limit lim_{y → y₀} (f(y) − f(y₀))/(y − y₀) exists with respect to the norm topology on A.
It is norm-bounded on S(z).
It is norm-continuous on S(z).
Suppose now that z ∈ ℂ, and let
D(α_z) = { x ∈ A : there exists a norm-regular f : S(z) → A with f(t) = α_t(x) for all t ∈ ℝ }.
Define α_z : D(α_z) → A by α_z(x) = f(z). The function f is uniquely determined (by the theory of complex-analytic functions), so α_z is well-defined indeed. The family (α_z)_{z ∈ ℂ} is then called the analytic extension of α.
Theorem 1. The set ⋂_{z ∈ ℂ} D(α_z), called the set of analytic elements of A, is a dense subset of A.
Definition (K.M.S. weight). Let A be a C*-algebra and φ a weight on A. We say that φ is a K.M.S. weight ('K.M.S.' stands for 'Kubo-Martin-Schwinger') on A if and only if φ is a proper weight on A and there exists a norm-continuous one-parameter group σ on A such that
φ is invariant under σ, i.e., φ ∘ σ_t = φ for all t ∈ ℝ, and
for every x ∈ D(σ_{i/2}), we have φ(x* x) = φ(σ_{i/2}(x) σ_{i/2}(x)*).
We denote by M(A) the multiplier algebra of A.
Theorem 2. If A and B are C*-algebras and π : A → M(B) is a non-degenerate *-homomorphism (i.e., π(A) B is a dense subset of B), then we can uniquely extend π to a *-homomorphism π̄ : M(A) → M(B).
Theorem 3. If ω : A → ℂ is a state (i.e., a positive linear functional of norm 1) on A, then we can uniquely extend ω to a state ω̄ on M(A).
Definition (Locally compact quantum group). A (C*-algebraic) locally compact quantum group is an ordered pair G = (A, Δ), where A is a C*-algebra and Δ : A → M(A ⊗ A) is a non-degenerate *-homomorphism called the co-multiplication, that satisfies the following four conditions:
The co-multiplication is co-associative, i.e., (Δ ⊗ id) ∘ Δ = (id ⊗ Δ) ∘ Δ.
The sets Δ(A)(A ⊗ 1) = { Δ(x)(y ⊗ 1) : x, y ∈ A } and Δ(A)(1 ⊗ A) = { Δ(x)(1 ⊗ y) : x, y ∈ A } are linearly dense subsets of A ⊗ A.
There exists a faithful K.M.S. weight φ on A that is left-invariant, i.e., φ((ω ⊗ id)(Δ(x))) = ω(1) φ(x) for all positive ω ∈ A* and x ∈ M⁺_φ.
There exists a K.M.S. weight ψ on A that is right-invariant, i.e., ψ((id ⊗ ω)(Δ(x))) = ω(1) ψ(x) for all positive ω ∈ A* and x ∈ M⁺_ψ.
From the definition of a locally compact quantum group, it can be shown that the right-invariant K.M.S. weight ψ is automatically faithful. Therefore, faithfulness of ψ is a redundant condition and does not need to be postulated.
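For orientation, here is the standard commutative example, sketched under the assumption that G is a locally compact Hausdorff group (the motivating case behind the axioms):

A = C_0(G), \qquad (\Delta f)(s, t) = f(st) \quad \text{for } f \in A,\ s, t \in G,

\varphi(f) = \int_G f \, d\mu, \qquad \psi(f) = \int_G f \, d\nu,

where Δf is viewed as an element of M(A ⊗ A) ≅ C_b(G × G), and μ and ν are left and right Haar measures on G. Co-associativity then encodes associativity of the group law, since ((Δ ⊗ id)Δf)(r, s, t) = f(rst) = ((id ⊗ Δ)Δf)(r, s, t).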
Duality
The category of locally compact quantum groups allows for a dual construction with which one can prove that the bi-dual of a locally compact quantum group is isomorphic to the original one. This result gives a far-reaching generalization of Pontryagin duality for locally compact Hausdorff abelian groups.
Alternative formulations
The theory has an equivalent formulation in terms of von Neumann algebras.
See also
Locally compact space
Locally compact field
Locally compact group
References
Johan Kustermans & Stefaan Vaes. "Locally Compact Quantum Groups." Annales Scientifiques de l’École Normale Supérieure. Vol. 33, No. 6 (2000), pp. 837–934.
Thomas Timmermann. "An Invitation to Quantum Groups and Duality – From Hopf Algebras to Multiplicative Unitaries and Beyond." EMS Textbooks in Mathematics, European Mathematical Society (2008).
C*-algebras
Functional analysis
Quantum groups
Harmonic analysis
Representation theory | Locally compact quantum group | Mathematics | 1,201 |
59,495,493 | https://en.wikipedia.org/wiki/NGC%206621 | NGC 6621 is an interacting spiral galaxy in the constellation Draco. It lies at a distance of about 260 million light-years. NGC 6621 interacts with NGC 6622, with their closest approach having taken place about 100 million years ago. The pair was discovered by Edward D. Swift and Lewis A. Swift on June 2, 1885. Originally NGC 6621 was assigned to the southeast galaxy, but now it refers to the northern one. NGC 6621 and NGC 6622 are included in the Atlas of Peculiar Galaxies as Arp 81 in the category "spiral galaxies with large high surface brightness companions".
NGC 6621 is the larger of the two, and is a very disturbed spiral galaxy. The encounter has pulled a long tail out of NGC 6621 that now wraps north behind its body. The collision has also triggered extensive star formation between the two galaxies. The most intense star formation takes place in the region between the two nuclei, where a large population of luminous clusters, also known as super star clusters, has been observed; this region experiences the greatest tidal stress. Many large clusters are also observed in the tail and the nucleus of NGC 6621. The brightest and bluest clusters are less than 100 million years old, with the youngest being less than 10 million years old. The side of the galaxy further from the companion shows noticeably less star-formation activity.
NGC 6621 is characterised as a luminous infrared galaxy, with an infrared luminosity of 10^11.24 solar luminosities. NGC 6621 contributes nearly all of the radio and far-infrared flux of the pair. When observed in H-alpha, the centre of the galaxy shows two bright sources separated by 3 arcseconds; the southwestern one is brighter, while the northeastern one coincides with the nucleus of NGC 6621.
Two supernovae have been detected in NGC 6621. The first, SN 2010hi (type unknown, mag. 18), was discovered on 1 September 2010, lying 30" east and 4" north of the center of the galaxy. The second, SN 2019hsx (type Ic-BL, mag. 18.6), was discovered on 2 June 2019.
See also
List of NGC objects (6001–7000)
References
External links
Spiral galaxies
Peculiar galaxies
Luminous infrared galaxies
Draco (constellation)
6621
11175
081
61582
Interacting galaxies | NGC 6621 | Astronomy | 484 |
1,568,608 | https://en.wikipedia.org/wiki/Half-integer | In mathematics, a half-integer is a number of the form
n + 1/2,
where n is an integer. For example,
1/2, 3/2, −5/2 and 9/2
are all half-integers. The name "half-integer" is perhaps misleading, as each integer n is itself half of the integer 2n. A name such as "integer-plus-half" may be more accurate, but while not literally true, "half integer" is the conventional term. Half-integers occur frequently enough in mathematics and in quantum mechanics that a distinct term is convenient.
Note that halving an integer does not always produce a half-integer; this is only true for odd integers. For this reason, half-integers are also sometimes called half-odd-integers. Half-integers are a subset of the dyadic rationals (numbers produced by dividing an integer by a power of two).
Notation and algebraic structure
The set of all half-integers is often denoted ℤ + 1/2.
The integers and half-integers together form a group under the addition operation, which may be denoted (1/2)ℤ.
However, these numbers do not form a ring, because the product of two half-integers is not a half-integer; e.g. (1/2) · (1/2) = 1/4. The smallest ring containing them is ℤ[1/2], the ring of dyadic rationals.
Properties
The sum of n half-integers is a half-integer if and only if n is odd. This includes n = 0, since the empty sum 0 is not a half-integer. (Both properties are checked in the sketch after this list.)
The negative of a half-integer is a half-integer.
The cardinality of the set of half-integers is equal to that of the integers. This is due to the existence of a bijection f from the integers to the half-integers: f(n) = n + 1/2, where n is an integer.
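A minimal Python sketch checking the closure and bijection properties above (the helper names are illustrative, not from any source):

import random
from fractions import Fraction

def is_half_integer(q):
    # numbers of the form n + 1/2 reduce to denominator 2
    return Fraction(q).denominator == 2

# A sum of k half-integers is a half-integer exactly when k is odd
# (k = 0 gives the empty sum 0, which is not a half-integer).
for k in range(7):
    sample = [Fraction(2 * random.randint(-9, 9) + 1, 2) for _ in range(k)]
    total = sum(sample, Fraction(0))
    assert is_half_integer(total) == (k % 2 == 1)

# The bijection f(n) = n + 1/2 from the integers to the half-integers.
f = lambda n: Fraction(n) + Fraction(1, 2)
assert all(is_half_integer(f(n)) for n in range(-100, 100))
assert len({f(n) for n in range(-100, 100)}) == 200  # distinct images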
Uses
Sphere packing
The densest lattice packing of unit spheres in four dimensions (called the D4 lattice) places a sphere at every point whose coordinates are either all integers or all half-integers. This packing is closely related to the Hurwitz integers: quaternions whose real coefficients are either all integers or all half-integers.
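A short Python enumeration of the packing just described (a sketch; the coordinate window is arbitrary), confirming that the origin has 24 nearest lattice neighbours, the kissing number of the D4 lattice:

from itertools import product

pts = [p for p in product(range(-2, 3), repeat=4)]                      # all-integer points
pts += [p for p in product([i + 0.5 for i in range(-2, 2)], repeat=4)]  # all-half-integer points

def dist2(p):
    # squared distance from the origin
    return sum(c * c for c in p)

nonzero = [p for p in pts if dist2(p) > 0]
d2min = min(dist2(p) for p in nonzero)
neighbours = sum(1 for p in nonzero if dist2(p) == d2min)
print(d2min, neighbours)  # 1.0 and 24: 8 integer + 16 half-integer neighbours

The minimal distance comes out as 1, so spheres of radius 1/2 centred on these points touch without overlapping; rescaling by a factor of 2 gives the packing of unit spheres.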
Physics
In physics, the Pauli exclusion principle results from definition of fermions as particles which have spins that are half-integers.
The energy levels of the quantum harmonic oscillator occur at half-integer multiples of ℏω, E_n = (n + 1/2) ℏω, and thus its lowest energy, ℏω/2, is not zero.
Sphere volume
Although the factorial function is defined only for integer arguments, it can be extended to fractional arguments using the gamma function. The gamma function for half-integers is an important part of the formula for the volume of an n-dimensional ball of radius R:
V_n(R) = π^(n/2) R^n / Γ(n/2 + 1).
The values of the gamma function on half-integers are rational multiples of the square root of pi:
Γ(1/2 + n) = ((2n − 1)!!/2^n) √π,
where (2n − 1)!! denotes the double factorial.
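A quick worked check (these values follow directly from Γ(1/2) = √π and the recurrence Γ(x + 1) = x Γ(x)):

\Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi}, \qquad
\Gamma\!\left(\tfrac{3}{2}\right) = \tfrac{1}{2}\sqrt{\pi}, \qquad
\Gamma\!\left(\tfrac{5}{2}\right) = \tfrac{3}{4}\sqrt{\pi},

so, for example,

V_3(R) = \frac{\pi^{3/2}}{\Gamma(5/2)} R^3 = \frac{\pi^{3/2}}{\tfrac{3}{4}\sqrt{\pi}} R^3 = \frac{4}{3}\pi R^3,

recovering the familiar volume of the three-dimensional ball.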
References
Rational numbers
Elementary number theory
Parity (mathematics) | Half-integer | Mathematics | 543 |
38,663,008 | https://en.wikipedia.org/wiki/24%20Ursae%20Majoris | 24 Ursae Majoris is a variable star in the northern circumpolar constellation of Ursa Major, located 101.5 light-years from the Sun. It has the variable star designation DK Ursae Majoris and the Bayer designation d Ursae Majoris; 24 Ursae Majoris is the Flamsteed designation. This object is visible to the naked eye as a faint, yellow-hued star with an apparent visual magnitude of 4.54. It is moving closer to the Earth with a heliocentric radial velocity of −27 km/s, and is expected to make its closest approach to the Sun in around 879,000 years.
Description
24 Ursae Majoris has a stellar classification of G4 III-IV, which, at the estimated age of about one billion years, matches the spectrum of an aging giant star blended with features of a subgiant luminosity class. Based upon its position on the H–R diagram, this star has just passed through the Hertzsprung gap and is ready to begin its first ascent along the red-giant branch. It is a suspected RS Canum Venaticorum variable that changes in brightness by up to 0.058 in magnitude. Periods of 22.08 and 2.115 hours have been reported. It is an X-ray source.
This star has 1.9 times the mass of the Sun and has expanded to 4.6 times the Sun's radius. It is spinning with a rotation period of 10 days. The star is radiating 14.9 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 5,335 K.
Nomenclature
With π1, π2, σ1, σ2, ρ and 2 Ursae Majoris, it composed the Arabic asterism Al Ṭhibā᾽, the Gazelle. According to the catalogue of stars in the Technical Memorandum 33-507 - A Reduced Star Catalog Containing 537 Named Stars, Al Ṭhibā was the title for seven stars: 2 Ursae Majoris as Althiba I, π1 as Althiba II, π2 as Althiba III, ρ as Althiba IV, σ1 as Althiba V, σ2 as Althiba VI, and this star (d) as Althiba VII.
References
G-type giants
RS Canum Venaticorum variables
Ursa Major
Ursae Majoris, d
Durchmusterung objects
Ursae Majoris, 24
082210
046977
3771
Ursae Majoris, DK
Althiba VII | 24 Ursae Majoris | Astronomy | 544 |
8,072,034 | https://en.wikipedia.org/wiki/Zoia%20Ceau%C8%99escu | Zoia Ceaușescu (; 28 February 1949 – 20 November 2006) was a Romanian mathematician, the daughter of Communist leader Nicolae Ceaușescu and his wife, Elena. She was also known as Tovarășa Zoia (comrade Zoia).
Biography
Zoia Ceaușescu studied at High School nr. 24 in Bucharest and graduated in 1966. She then continued her studies at the Faculty of Mathematics, University of Bucharest. She received her Ph.D. in 1977 with thesis On Intertwining Dilations written under the direction of Ciprian Foias. Ceaușescu worked as a researcher at the Institute of Mathematics of the Romanian Academy in Bucharest starting in 1974. Her field of specialization was functional analysis. Allegedly, her parents were unhappy with their daughter's choice of doing research in mathematics, so the Institute was disbanded in 1975. She moved on to work for Institutul pentru Creație Științifică și Tehnică (INCREST, Institute for Scientific and Technical Creativity), where she eventually started and headed a new department of mathematics. In 1976, Ceaușescu received the Simion Stoilow Prize for her outstanding contributions to the mathematical sciences.
She was married in 1980 to Mircea Oprean, an engineer and professor at the Polytechnic University of Bucharest.
During the Romanian Revolution, on 24 December 1989, she was arrested for "undermining the Romanian economy", and released eight months later, on 18 August 1990. After she was freed, she tried unsuccessfully to return to her former job at INCREST, then gave up and retired. After the revolution, some newspapers reported that she had lived a wild life, having numerous lovers and often being drunk.
After her parents were executed, the new government confiscated the house where she and her husband lived (the house was used as proof of allegedly stolen wealth), so she and her husband had to live with friends and relatives. She made efforts to return to INCREST (from where she had been fired), but permission was denied. Zoia sued the institute, but the lawsuit was not completed; in the end, she decided to stop fighting and retire.
After the revolution that ousted her parents, Zoia reported that during her parents' time in power her mother had asked the Securitate to keep an eye on the Ceaușescu children, perhaps, she felt, out of a "sense of love". The Securitate "could not touch" the children, she said, but the information they provided created a lot of problems for the children. She also remarked that power had a "destructive effect" on her father and that he "lost his sense of judgement".
Zoia Ceaușescu believed that her parents were not buried in Ghencea Cemetery; she attempted to have their remains exhumed, but a military court refused her request. The bodies were exhumed for identification and confirmed to be of her parents in 2010, after her death.
Zoia was a chain smoker. She died of lung cancer in 2006, at the age of 57, and her remains were incinerated.
Selected publications
Zoia Ceaușescu published 22 scientific papers between 1976 and 1988.
References
External links
Zoia Ceausescu
Children of presidents
20th-century Romanian mathematicians
Mathematical analysts
People of the Romanian revolution
University of Bucharest alumni
Deaths from lung cancer in Romania
1949 births
2006 deaths
20th-century women mathematicians
Scientists from Bucharest | Zoia Ceaușescu | Mathematics | 703 |
56,104,974 | https://en.wikipedia.org/wiki/Relacorilant | Relacorilant (developmental code name CORT-125134) is an antiglucocorticoid which is under development by Corcept Therapeutics for the treatment of Cushing's syndrome. It is also under development for the treatment of solid tumors and alcoholism. The drug is a nonsteroidal compound and acts as an antagonist of the glucocorticoid receptor. As of December 2017, it is in phase II clinical trials for Cushing's syndrome and phase I/II clinical studies for solid tumors, while the clinical phase for alcoholism is unknown.
References
External links
Relacorilant - AdisInsight
Antiglucocorticoids
Experimental drugs
4-Fluorophenyl compounds
Isoquinolines
Ketones
Pyrazoles
Pyridines
Sulfonamides
Trifluoromethyl compounds | Relacorilant | Chemistry | 177 |
19,163,391 | https://en.wikipedia.org/wiki/Ocean%20Prediction%20Center | The Ocean Prediction Center (OPC), established in 1995, is one of the National Centers for Environmental Prediction's (NCEP's) original six service centers. Until 2003, the name of the organization was the Marine Prediction Center. Its origins are traced back to the sinking of the RMS Titanic in 1912. The OPC issues forecasts up to five days in advance for ocean areas north of 31° north latitude and west of 35° west longitude in the Atlantic, and across the northeast Pacific north of 30° north latitude and east of 160° east longitude. Until recently, the OPC provided forecast points for tropical cyclones north of 20° north latitude and east of 60° west longitude to the National Hurricane Center. OPC is composed of two branches: the Ocean Forecast Branch and the Ocean Applications Branch.
History
The first attempt as a marine weather program within the United States was initiated in New Orleans, Louisiana, by the United States Army Signal Corps. A January 23, 1873, memo directed the New Orleans Signal Observer to transcribe meteorological data from the ship logs of those arriving in port. Marine forecasting responsibility transferred from the United States Navy to the Weather Bureau in 1904, which enabled the receipt of timely observations from ships at sea. The basis for OPC's mission can be traced back to the sinking of the Titanic in April 1912. In response to that tragedy, an international commission was formed to determine requirements for safer ocean voyages. In 1914, the commission's work resulted in the International Convention for the Safety of Life at Sea, of which the United States is one of the original signatories.
In 1957, in order to help address marine issues, the United States Weather Bureau started to publish the Mariners Weather Log bi-monthly publication to report past weather conditions primarily over Northern Hemisphere oceans, information regarding the globe's tropical cyclone seasons, to publish monthly climatologies for use of those at sea, and to encourage voluntary ship observations from vessels at sea. From 1957 through 1966, the United States Weather Bureau's Office of Climatology published the Log. From 1966 through the summer of 1995, the Environmental Data Service, which became the National Environmental Satellite, Data, and Information Service, published the magazine.
Within the United States National Weather Service (NWS), forecast weather maps began to be published by offices in New York City, San Francisco, and Honolulu for public use. North Atlantic forecasts were shifted from a closed United States Navy endeavor to a National Weather Service product suite via radiofacsimile in 1971, while northeast Pacific forecasts became publicly available by the same method in 1972. Between 1986 and 1989, the portion of the National Meteorological Center (NMC) known as the Ocean Products Center (OPC) was responsible for marine weather forecasting guidance within the NWS. Between August 1989 and 1995, the unit named the Marine Forecast Branch also was involved in providing objective analysis and forecast products for marine and oceanographic variables.
When the National Centers for Environmental Prediction was created, the Marine Prediction Center (MPC) was organized to assume the U.S. obligation to issue warnings and forecasts for portions of the North Atlantic and North Pacific oceans. MPC was expected to be moved from Camp Springs, Maryland, to Monterey, California, but this did not occur. The Center was renamed the Ocean Prediction Center (OPC) on January 12, 2003.
Products
OPC's Ocean Forecast Branch issues warnings and forecasts in print and graphical formats for up to five days into the future. Over 100 forecast products are issued daily. They cover the North Atlantic Ocean from the west coast of Europe to the U.S. and Canadian east coasts, and the North Pacific Ocean from the U.S. and Canadian west coast to the east coast of Asia. OPC weather forecasts and warnings for these areas primarily ensure the safety of ocean-crossing commercial ships and other vessels on the high seas. Embedded in these high seas areas are smaller offshore zones off the Atlantic and Pacific coasts. These zones extend from near the coast seaward to just beyond the U.S. Exclusive Economic Zones. OPC services ensure the safety of the extensive commercial and recreational fishing, boating, and shipping activities in these offshore waters.
OPC began to produce experimental gridded significant wave height forecasts in 2006, a first step toward digital marine service for high seas and offshore areas. Additional gridded products such as surface pressure and winds are under development. Recently, OPC began to use the NWS operational extratropical storm surge model output to provide experimental extratropical storm surge guidance for coastal weather forecast offices to assist them in coastal flood warning and forecast operations.
Role in the unified surface analysis
The OPC provides an important role in the production of the National Weather Service Unified Surface Analysis. After the Weather Prediction Center, or WPC, sends out their analysis for the synoptic hour, OPC cuts and stitches the WPC analysis to its area. The National Hurricane Center, or NHC, stitches the analysis from the Honolulu Forecast Office onto their map, before it is sent up to OPC. The OPC analysts then stitch together the entire analysis, and send it to the world through their website. The analysis covers much of the Northern Hemisphere, except for eastern Europe and the western half of Asia.
Quality control of marine observations
In 1994, OPC began to quality control global surface marine observations. Using an automated algorithm and interactive system, forecasters examine the latest observations from voluntary observing ships and drifting and moored platforms and compare them against short-projection model runs. Worldwide surface marine observations come to OPC via the World Meteorological Organization's Global Telecommunications System in real time. These quality control measures remove spurious data before the data are ingested into models to initialize forecasts. Several hundred of these observations are interactively examined daily. In addition, the quality controlled data are used by OPC forecasters to determine if gale, storm, or hurricane-force wind warnings are warranted.
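The paragraph above describes a background check of observations against short-projection model runs. The Python sketch below illustrates the idea only; the record fields, the constant first-guess value, and the tolerance are hypothetical stand-ins, and the operational procedure also includes interactive forecaster review:

from dataclasses import dataclass

@dataclass
class Ob:
    ship_id: str
    lat: float
    lon: float
    pressure_hpa: float  # observed sea-level pressure

def first_guess(lat, lon):
    # stand-in for a short-projection model field interpolated to the ob
    return 1012.0

def quality_control(obs, tolerance_hpa=8.0):
    # withhold observations that differ too much from the model background
    accepted, rejected = [], []
    for ob in obs:
        if abs(ob.pressure_hpa - first_guess(ob.lat, ob.lon)) <= tolerance_hpa:
            accepted.append(ob)
        else:
            rejected.append(ob)
    return accepted, rejected

obs = [Ob("SHIP1", 44.0, -40.0, 1009.3), Ob("SHIP2", 47.5, -38.2, 921.0)]
good, bad = quality_control(obs)
print([o.ship_id for o in bad])  # SHIP2 is implausible against the background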
Ocean Applications Branch
The Ocean Applications Branch plays an important role in enhancing OPC operations and services. One example is the adaptation of ocean surface winds observed from the QuikSCAT satellite in early 2000. Prior to the QuikSCAT launch, there was no ability to observe, verify, and warn of hurricane-force wind conditions, which are often associated with strong winter ocean storms. With QuikSCAT data routinely available in 2000, OPC began to issue hurricane-force wind warnings. In the 2006-2007 winter storm season, over 100 hurricane-force wind warnings were issued for the North Pacific and North Atlantic oceans to warn ships of these most severe weather hazard conditions over major shipping routes. Preliminary results from a recent study estimate that in the absence of good information about extra-tropical ocean storms, the annual loss to container and dry bulk shipping would be on the order of more than $500 million. Operational marine warnings and forecasts reduce the above estimated annual loss by nearly one half.
OPC has a number of ongoing research-to-operations transition efforts that will lead to a suite of new oceanographic analysis and forecast products, such as ocean temperatures and currents based on real time observations and advanced global and basin scale ocean forecasting models. Global ocean sea surface temperatures and currents are now available on the OPC website.
See also
Environmental Modeling Center
MAFOR
Shipping Forecast
Space Weather Prediction Center
Storm Prediction Center
References
External links
Official website
Experimental OPC Facebook Page
National Weather Service
Maritime organizations
Weather prediction | Ocean Prediction Center | Physics | 1,515 |
9,818,988 | https://en.wikipedia.org/wiki/2007%20United%20Kingdom%20petrol%20contamination | The 2007 United Kingdom petrol contamination problem arose on 28 February 2007, when motorists in South East England reported that their cars were breaking down. This was caused by silicon contaminated unleaded petrol, sold by several supermarkets, that damaged the lambda sensors in engine management systems.
The problem
On 28 February 2007, motorists in South East England reported that their cars were breaking down. Motorists blamed supermarket petrol with most of the fuel sold by the supermarkets in the South East coming from the Vopak terminal in the Thames Estuary. Harvest Energy, which supplies Asda, shares tank facilities at the depot in West Thurrock with another oil company, Greenergy, which is part owned by Tesco and supplies both Tesco and Morrisons.
Then, on the evening of 2 March, scientists who had been testing the petrol reported finding traces of silicon in the fuel which were subsequently tracked down to four storage tanks by Harvest Energy.
Morrisons immediately announced that it was to stop selling unleaded petrol at 41 outlets supplied by the Vopak terminal, while Tesco was emptying and refilling tanks at 150 petrol stations, but was not suspending sales. The following day, Asda said it was replacing unleaded petrol at the 30 forecourts in the South East as a "precautionary measure".
By Sunday, 4 March, tests from Tesco and Asda had found contamination of silicon in unleaded fuel.
The cause
The Society of Motor Manufacturers and Traders (SMMT) said it believed suspect fuel might have damaged lambda sensors in some cars' systems, leading them to cut power to prevent damage to the engine. Silicone-based products are used as anti-foaming agents in diesel fuel, but even very small quantities of silicon can cause serious problems in petrol engines.
Consequences
Tesco was criticised with claims that they had been alerted to the problem as early as 12 February. Affected motorists faced bills of several hundred pounds to repair their cars and, with up to 10,000 cars needing repair, the suppliers could be liable for compensation claims of up to £10 million. On 4 March, it was announced that a class action, on behalf of affected motorists, would be mounted.
Trading Standards officials advised motorists to keep petrol receipts, take a sample of the fuel, obtain quotes from garages for repair costs and approach the petrol station where the fuel was purchased. Then, on 6 March, Morrisons and Tesco offered to pay for any damage caused by the faulty petrol, and Tesco printed full page apologies in many national newspapers.
A further consequence was a rise in petrol prices charged by traditional petrol companies, of up to 4p per litre, across the country. An AA spokesman said "Putting up prices to make a fast buck is completely unjustified. Sometimes garages increase the price to protect stocks from a sudden run. But if anyone has upped the price outside the areas where the problems have been then they are milking the system." However, the UK Petroleum Industry Association replied "There is no profiteering. Oil products are priced to the market. Pump prices are usually linked to the price of crude oil or the wholesale price of petrol."
Further contamination
Since the initial problem in 2007, many people have reported problems with their cars after filling up with petrol and diesel at Tesco filling stations. These problems normally caused the engine management light to come on, and ranged from problems with the fuel system to damaged lambda sensors. They may have been caused by silicon or even water found in the underground fuel storage tanks. In December 2015, Tesco in Faversham had a water leak in one of its tanks causing problems for some customers.
References
Uk Petrol Contamination, 2007
Petroleum products
Supermarkets of the United Kingdom | 2007 United Kingdom petrol contamination | Chemistry | 758 |
9,477,975 | https://en.wikipedia.org/wiki/Completion%20of%20a%20ring | In abstract algebra, a completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have a simpler structure than general ones, and Hensel's lemma applies to them. In algebraic geometry, a completion of a ring of functions R on a space X concentrates on a formal neighborhood of a point of X: heuristically, this is a neighborhood so small that all Taylor series centered at the point are convergent. An algebraic completion is constructed in a manner analogous to completion of a metric space with Cauchy sequences, and agrees with it in the case when R has a metric given by a non-Archimedean absolute value.
General construction
Suppose that E is an abelian group with a descending filtration
E = F⁰E ⊇ F¹E ⊇ F²E ⊇ ⋯
of subgroups. One then defines the completion (with respect to the filtration) as the inverse limit:
Ê = lim← E/FⁿE.
This is again an abelian group. Usually E is an additive abelian group. If E has additional algebraic structure compatible with the filtration, for instance E is a filtered ring, a filtered module, or a filtered vector space, then its completion is again an object with the same structure that is complete in the topology determined by the filtration. This construction may be applied both to commutative and noncommutative rings. As may be expected, when the intersection of the FⁿE equals zero, this produces a complete topological ring.
Krull topology
In commutative algebra, the filtration on a commutative ring R by the powers of a proper ideal I determines the Krull (after Wolfgang Krull) or I-adic topology on R. The case of a maximal ideal is especially important, for example the distinguished maximal ideal of a valuation ring. The basis of open neighbourhoods of 0 in R is given by the powers Iⁿ, which are nested and form a descending filtration on R:
R ⊇ I ⊇ I² ⊇ I³ ⊇ ⋯
(Open neighborhoods of any r ∈ R are given by cosets r + Iⁿ.) The (I-adic) completion is the inverse limit of the factor rings,
R̂ = lim← R/Iⁿ,
pronounced "R I hat". The kernel of the canonical map π : R → R̂ from the ring to its completion is the intersection of the powers of I. Thus π is injective if and only if this intersection reduces to the zero element of the ring; by the Krull intersection theorem, this is the case for any commutative Noetherian ring which is an integral domain or a local ring.
There is a related topology on R-modules, also called the Krull or I-adic topology. A basis of open neighborhoods of x in an R-module M is given by the sets of the form
x + IⁿM.
The I-adic completion of an R-module M is the inverse limit of the quotients
M̂ = lim← M/IⁿM.
If the ideal I is finitely generated, this procedure converts any module over R into a complete topological module over R̂ (in general, the I-adic completion of a module need not be complete).
Examples
The ring of p-adic integers ℤ_p is obtained by completing the ring of integers ℤ at the ideal (p); a computational sketch follows at the end of this Examples section.
Let R = K[x₁,...,x_n] be the polynomial ring in n variables over a field K and 𝔪 = (x₁,...,x_n) be the maximal ideal generated by the variables. Then the completion is the ring K[[x₁,...,x_n]] of formal power series in n variables over K.
Given a Noetherian ring R and an ideal I = (f₁, …, f_n), the I-adic completion of R is an image of a formal power series ring, specifically, the image of the surjection
R[[x₁, …, x_n]] → R̂, x_i ↦ f_i.
The kernel is the ideal
(x₁ − f₁, …, x_n − f_n).
Completions can also be used to analyze the local structure of singularities of a scheme. For example, the affine schemes associated to ℂ[x, y]/(xy) and the nodal cubic plane curve ℂ[x, y]/(y² − x²(1 + x)) have similar-looking singularities at the origin when viewing their graphs (both look like a plus sign). Notice that in the second case, any Zariski neighborhood of the origin is still an irreducible curve. If we use completions, then we are looking at a "small enough" neighborhood where the node has two components. Taking the localizations of these rings along the ideal (x, y) and completing gives ℂ[[x, y]]/(xy) and ℂ[[x, y]]/((y + u)(y − u)) respectively, where u is the formal square root of x²(1 + x) in ℂ[[x, y]]. More explicitly, the power series:
u = x √(1 + x) = x (1 + x/2 − x²/8 + x³/16 − ⋯).
Since both rings are given by the intersection of two ideals generated by a homogeneous degree 1 polynomial, we can see algebraically that the singularities "look" the same. This is because such a scheme is the union of two non-equal linear subspaces of the affine plane.
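Returning to the first example above, a small Python sketch (helper names are illustrative) of the inverse-limit description of the p-adic integers: an element of the completion is a coherent sequence of residues r_n in ℤ/pⁿ with r_{n+1} ≡ r_n (mod pⁿ), and every ordinary integer determines one.

p = 5

def residues(x, levels=6):
    # image of the integer x in Z/p^n for n = 1..levels
    return [x % p**n for n in range(1, levels + 1)]

def is_coherent(seq):
    # inverse-limit condition: each residue refines the previous one
    return all(seq[n + 1] % p**(n + 1) == seq[n] for n in range(len(seq) - 1))

r = residues(-1)  # -1 = ...4444 in base 5
print(r)          # [4, 24, 124, 624, 3124, 15624]
assert is_coherent(r)
assert not is_coherent([1, 2, 3, 4, 5, 6])  # not a 5-adic integer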
Properties
The completion of a Noetherian ring with respect to some ideal is a Noetherian ring.
The completion of a Noetherian local ring with respect to the unique maximal ideal is a Noetherian local ring.
The completion is a functorial operation: a continuous map f: R → S of topological rings gives rise to a map of their completions,
f̂ : R̂ → Ŝ.
Moreover, if M and N are two modules over the same topological ring R and f: M → N is a continuous module map then f uniquely extends to the map of the completions:
f̂ : M̂ → N̂,
where M̂ and N̂ are modules over R̂.
The completion of a Noetherian ring R is a flat module over R.
The completion of a finitely generated module M over a Noetherian ring R can be obtained by extension of scalars:
M̂ = M ⊗_R R̂.
Together with the previous property, this implies that the functor of completion on finitely generated R-modules is exact: it preserves short exact sequences. In particular, taking quotients of rings commutes with completion, meaning that for any quotient R-algebra R/J, there is an isomorphism between the completion of R/J and R̂/JR̂.
Cohen structure theorem (equicharacteristic case). Let R be a complete local Noetherian commutative ring with maximal ideal 𝔪 and residue field K. If R contains a field, then
R ≅ K[[x₁, …, x_n]]/I
for some n and some ideal I (Eisenbud, Theorem 7.7).
See also
Formal scheme
Profinite integer
Locally compact field
Zariski ring
Linear topology
Quasi-unmixed ring
Citations
References
David Eisenbud, Commutative algebra. With a view toward algebraic geometry. Graduate Texts in Mathematics, 150. Springer-Verlag, New York, 1995. xvi+785 pp. ;
Commutative algebra
Topological algebra | Completion of a ring | Mathematics | 1,302 |
26,059,863 | https://en.wikipedia.org/wiki/Arterolane | Arterolane, also known as OZ277 or RBx 11160, is an antimalarial compound marketed by Ranbaxy Laboratories. It was discovered by US and European scientists coordinated by the Medicines for Malaria Venture (MMV). Its molecular structure is uncommon for pharmacological compounds in that it has both an ozonide (trioxolane) group and an adamantane substituent.
Initial results were disappointing, and in 2007 MMV withdrew support, after having invested $20M in the research; Ranbaxy said at the time that it intended to continue developing arterolane in combination with another drug. In 2009, Ranbaxy started a Phase II clinical trial of arterolane in combination with piperaquine, and it was published in 2015.
In 2012, Ranbaxy obtained approval to market an arterolane/piperaquine combination drug in India, under the brand name Synriam. In 2014, the product was also approved in Nigeria, Uganda, Senegal, Cameroon, Guinea, Kenya and Ivory Coast.
References
Antimalarial agents
Adamantanes
Organic peroxides
Spiro compounds
Carboxamides
Amines
Cyclohexanes | Arterolane | Chemistry | 247 |
63,711,218 | https://en.wikipedia.org/wiki/European%20Bank%20for%20induced%20pluripotent%20Stem%20Cells | The European Bank for induced pluripotent Stem Cells (EBiSC) is a non-profit induced pluripotent stem cell (iPSC) biorepository and service provider with central facilities in Germany and the United Kingdom.
EBiSC was set up between 2014 and 2017 by a consortium that represented researchers, clinicians and industry stakeholders. A second phase of the project runs between 2019 and 2022 with the aim of consolidating EBiSC as a not-for-profit, self-sustainable iPSC bank and service provider. The initiative is funded by the European Commission and the European Federation of Pharmaceutical Industries and Associations under the Innovative Medicines Initiative.
The European Bank for induced pluripotent Stem Cells performs collection, banking, quality control and distribution of iPSC lines for research purposes. EBiSC's stated goal is to supply academic, non-profit and commercial researchers with quality-controlled, disease-relevant iPSC lines, data and other services. It also seeks to promote the international standardisation of iPSC banking practices and to act as a central hub that ensures the sustainability and accessibility of iPSC lines generated by different research organisations. IPSC lines generated externally can be deposited into EBiSC for storage, banking, quality control and distribution.
Catalogue and facilities
In February 2020, the EBiSC catalogue contained iPSC lines representing diseases and conditions such as Alzheimer's disease, Frontotemporal Dementia, Parkinson's disease, Huntington's disease, Dravet syndrome, Bardet-Biedl syndrome, depression and pain, diabetes mellitus, eye diseases and heart disease. These iPSC lines have been deposited into EBiSC by academic institutions and non-profit and commercial organisations internationally. This includes lines generated within research projects such as StemBANCC, HipSci, IMI-ADAPTED, CRACK IT BadIPS and CRACK IT UnTangle.
The EBiSC Bank is run by two central facilities: the main distributor of EBiSC cell lines, the European Collection of Authenticated Cell Cultures in the UK, and the 'mirror bank' storing duplicates of all deposited lines long-term, established by the Fraunhofer Institute for Biomedical Engineering (IBMT) in Germany.
All EBiSC lines are distributed by the European Collection of Authenticated Cell Cultures operated by Public Health England.
References
Stem cell research
South Cambridgeshire District
Science and technology in Cambridgeshire
Wellcome Trust
Biorepositories | European Bank for induced pluripotent Stem Cells | Chemistry,Biology | 500 |
6,141,973 | https://en.wikipedia.org/wiki/Zygomatic%20branches%20of%20the%20facial%20nerve | The zygomatic branches of the facial nerve (malar branches) are nerves of the face. They run across the zygomatic bone to the lateral angle of the orbit. Here, they supply the orbicularis oculi muscle, and join with filaments from the lacrimal nerve and the zygomaticofacial branch of the maxillary nerve (CN V2).
Structure
The zygomatic branches of the facial nerve are branches of the facial nerve (CN VII). They run across the zygomatic bone to the lateral angle of the orbit. This is deep to zygomaticus major muscle. They send fibres to orbicularis oculi muscle.
Connections
The zygomatic branches of the facial nerve have many nerve connections. Along their course, there may be connections with the buccal branches of the facial nerve. They join with filaments from the lacrimal nerve and the zygomaticofacial nerve from the maxillary nerve (CN V2). They also join with the inferior palpebral nerve and the superior labial nerve, both from the infraorbital nerve.
Function
The zygomatic branches of the facial nerve supply part of the orbicularis oculi muscle. This is used to close the eyelid.
Clinical significance
Testing
To test the zygomatic branches of the facial nerve, a patient is asked to close their eyes tightly. This uses orbicularis oculi muscle. The zygomatic branches of the facial nerve may be recorded and stimulated with an electrode.
Surgical damage
Rarely, the zygomatic branches of the facial nerve may be damaged during surgery on the temporomandibular joint (TMJ).
Additional images
See also
Zygomatic nerve
Zygomaticus major muscle
Zygomaticus minor muscle
References
External links
- "Branches of Facial Nerve (CN VII)"
http://www.dartmouth.edu/~humananatomy/figures/chapter_47/47-5.HTM
Facial nerve
Otorhinolaryngology
Nervous system
Neurology
Nerves of the head and neck | Zygomatic branches of the facial nerve | Biology | 456 |
7,188,879 | https://en.wikipedia.org/wiki/Immortal%20DNA%20strand%20hypothesis | The immortal DNA strand hypothesis was proposed in 1975 by John Cairns as a mechanism for adult stem cells to minimize mutations in their genomes. This hypothesis proposes that instead of segregating their DNA during mitosis in a random manner, adult stem cells divide their DNA asymmetrically, and retain a distinct template set of DNA strands (parental strands) in each division. By retaining the same set of template DNA strands, adult stem cells would pass mutations arising from errors in DNA replication on to non-stem cell daughters that soon terminally differentiate (end mitotic divisions and become a functional cell). Passing on these replication errors would allow adult stem cells to reduce their rate of accumulation of mutations that could lead to serious genetic disorders such as cancer.
Although evidence for this mechanism exists, whether it is a mechanism acting in adult stem cells in vivo is still controversial.
Methods
Two main assays are used to detect immortal DNA strand segregation: label-retention and label-release pulse/chase assays.
In the label-retention assay, the goal is to mark 'immortal' or parental DNA strands with a DNA label such as tritiated thymidine or bromodeoxyuridine (BrdU). These types of DNA labels will incorporate into the newly synthesized DNA of dividing cells during S phase. A pulse of DNA label is given to adult stem cells under conditions where they have not yet delineated an immortal DNA strand. During these conditions, the adult stem cells are either dividing symmetrically (thus with each division a new 'immortal' strand is determined and in at least one of the stem cells the immortal DNA strand will be marked with DNA label), or the adult stem cells have not yet been determined (thus their precursors are dividing symmetrically, and once they differentiate into adult stem cells and choose an 'immortal' strand, the 'immortal strand' will already have been marked). Experimentally, adult stem cells are undergoing symmetric divisions during growth and after wound healing, and are not yet determined at neonatal stages. Once the immortal DNA strand is labelled and the adult stem cell has begun or resumed asymmetric divisions, the DNA label is chased out. In symmetric divisions (most mitotic cells), DNA is segregating randomly and the DNA label will be diluted out to levels below detection after five divisions. If, however, cells are using an immortal DNA strand mechanism, then all the labeled DNA will continue to co-segregate with the adult stem cell, and after five (or more) divisions will still be detected within the adult stem cell. These cells are sometimes called label-retaining cells (LRCs).
In the label-release assay, the goal is to mark the newly synthesized DNA that is normally passed on to the daughter (non-stem) cell. A pulse of DNA label is given to adult stem cells under conditions where they are dividing asymmetrically. Under conditions of homeostasis, adult stem cells should be dividing asymmetrically so that the same number of adult stem cells is maintained in the tissue compartment. After pulsing for long enough to label all the newly replicated DNA, the DNA label is chased out (each DNA replication now incorporates unlabeled nucleotides) and the adult stem cells are assayed for loss of the DNA label after two cell divisions. If cells are using a random segregation mechanism, then enough DNA label should remain in the cell to be detected. If, however, the adult stem cells are using an immortal DNA strand mechanism, they are obligated to retain the unlabeled 'immortal' DNA, and will release all the newly synthesized labeled DNA to their differentiating daughter cells in two divisions.
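The dilution arithmetic behind both assays can be made concrete with a small simulation. The sketch below is illustrative only, assuming one stem cell whose chromosomes each carry label on both strands after a saturating pulse, semiconservative replication pairing each old strand with a new unlabeled strand, and a stem daughter that keeps either a random chromatid (random segregation) or always the chromatid carrying a designated template strand (immortal-strand segregation):

import random

N = 46  # chromosome count; any value works

def simulate(mode, divisions=6):
    # each chromosome = (template strand labeled?, other strand labeled?)
    cell = [(True, True)] * N
    history = []
    for _ in range(divisions):
        next_cell = []
        for template, other in cell:
            chromatid_a = (template, False)  # retains the template strand
            chromatid_b = (other, False)     # retains the other old strand
            kept = chromatid_a if mode == "immortal" else random.choice([chromatid_a, chromatid_b])
            next_cell.append(kept)
        cell = next_cell
        history.append(sum(1 for t, o in cell if t or o) / N)
    return history

print(simulate("immortal"))  # stays at 1.0: the stem cell is label-retaining
print(simulate("random"))    # 1.0, ~0.5, ~0.25, ...: label dilutes away

Under random segregation the labeled fraction falls roughly as (1/2)^(d−1) over d divisions, dropping to a few percent by the fifth division, which matches the detection-limit argument above; under immortal-strand segregation the stem cell remains fully label-retaining.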
Some scientists have combined the two approaches, by first using one DNA label to mark the immortal strands, allowing the adult stem cells to begin dividing asymmetrically, and then using a different DNA label to mark the newly synthesized DNA. Thus, the adult stem cells will retain one DNA label and release the other within two divisions.
Evidence
Evidence for the immortal DNA strand hypothesis has been found in various systems. One of the earliest studies, by Karl Lark et al., demonstrated co-segregation of DNA in the cells of plant root tips. Plant root tips labeled with tritiated thymidine tended to segregate their labeled DNA to the same daughter cell. Though not all the labeled DNA segregated to the same daughter, the amount of thymidine-labeled DNA seen in the daughter with less label corresponded to the amount that would have arisen from sister-chromatid exchange. Later studies by Christopher Potten et al. (2002), using pulse/chase experiments with tritiated thymidine, found long-term label-retaining cells in the small intestinal crypts of neonatal mice. These researchers hypothesized that long-term incorporation of tritiated thymidine occurred because neonatal mice have undeveloped small intestines, and that pulsing tritiated thymidine soon after the birth of the mice allowed the 'immortal' DNA of adult stem cells to be labeled during their formation. These long-term label-retaining cells were shown to be actively cycling, as demonstrated by their incorporation and subsequent release of BrdU.
Since these cells were cycling but continued to contain the tritiated thymidine label in their DNA, the researchers reasoned that they must be segregating their DNA using an immortal DNA strand mechanism. Joshua Merok et al. from the lab of James Sherley engineered mammalian cells with an inducible p53 gene that controls asymmetric divisions. BrdU pulse/chase experiments with these cells demonstrated that chromosomes segregated non-randomly only when the cells were induced to divide asymmetrically like adult stem cells. These asymmetrically dividing cells provide an in vitro model for the demonstration and investigation of immortal strand mechanisms.
Scientists have strived to demonstrate that this immortal DNA strand mechanism exists in vivo in other types of adult stem cells. In 1996 Nik Zeps published the first paper demonstrating that label-retaining cells were present in the mouse mammary gland, and this was confirmed in 2005 by Gilbert Smith, who also published evidence that a subset of mouse mammary epithelial cells could both retain and release DNA label in a manner consistent with the immortal DNA strand mechanism. Soon after, scientists from the laboratory of Derek van der Kooy showed that mice have neural stem cells that are BrdU-retaining and continue to be mitotically active. Asymmetric segregation of DNA was shown using real-time imaging of cells in culture. In 2006, scientists in the lab of Shahragim Tajbakhsh presented evidence that muscle satellite cells, which are proposed to be adult stem cells of the skeletal muscle compartment, exhibited asymmetric segregation of BrdU-labelled DNA when put into culture. They also presented evidence that BrdU release kinetics consistent with an immortal DNA strand mechanism were operating in vivo, using juvenile mice and mice with muscle regeneration induced by freezing.
These experiments supporting the immortal strand hypothesis, however, are not conclusive. While the Lark experiments demonstrated co-segregation, the co-segregation may have been an artifact of radiation from the tritium. Although Potten identified the cycling, label-retaining cells as adult stem cells, such cells are difficult to identify unequivocally as adult stem cells. While the engineered cells provide an elegant model for co-segregation of chromosomes, those studies were done in vitro, and some features of the cells' behavior may differ in vivo. In May 2007, evidence in support of the immortal DNA strand hypothesis was reported by Michael Conboy et al., using the muscle stem/satellite cell model during tissue regeneration, where there is tremendous cell division during a relatively brief period of time. Using two BrdU analogs to label template and newly synthesized DNA strands, they saw that about half of the dividing cells in regenerating muscle sort the older "immortal" DNA to one daughter cell and the younger DNA to the other. In keeping with the stem cell hypothesis, the more undifferentiated daughter typically inherited the chromatids with the older DNA, while the more differentiated daughter inherited the younger DNA.
Experimental evidence against the immortal strand hypothesis is sparse. In one study, researchers incorporated tritiated thymidine into dividing murine epidermal basal cells. They followed the release of tritiated thymidine after various chase periods, but the pattern of release was not consistent with the immortal strand hypothesis. Although they found label-retaining cells, they were not within the putative stem cell compartment. With increasing lengths of time for the chase periods, these label-retaining cells were located farther from the putative stem cell compartment, suggesting that the label-retaining cells had moved. However, finding conclusive evidence against the immortal strand hypothesis has proven difficult.
DNA template strand segregation was studied in the developing zebrafish. During larval development there was rapid depletion of older DNA template strands from stem cell niches in the retina, brain and intestine. Using high resolution microscopy, no evidence of asymmetric template strand segregation (in over 100 cell pairs) was found, making it improbable that in developing zebrafish asymmetric DNA segregation avoids mutational burden as proposed by the immortal strand hypothesis.
Further models
Since Cairns first proposed the immortal DNA strand mechanism, the hypothesis has undergone several refinements.
In 2002, he proposed that, in addition to using immortal DNA strand mechanisms to segregate DNA, adult stem cells whose immortal DNA strands become damaged choose to die (apoptose) rather than use the DNA repair mechanisms that are normally employed in non-stem cells.
Emmanuel David Tannenbaum and James Sherley developed a quantitative model describing how repair of point mutations might differ in adult stem cells. They found that in adult stem cells, repair was most efficient if the cells used an immortal DNA strand mechanism for segregating DNA rather than a random segregation mechanism. Such a mechanism would be beneficial because it avoids a replication error being erroneously "repaired" into both DNA strands and thereby permanently propagated.
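The intuition behind such quantitative models can be shown with a toy calculation. The sketch below is not the published Tannenbaum–Sherley model; the mutation rate and division count are assumed, illustrative values, and it merely counts how many replication errors a stem cell's template accumulates under each segregation mode.

```python
import random

MUTATION_RATE = 0.01  # assumed replication errors per division (illustrative)
DIVISIONS = 10_000

def errors_fixed_in_stem_cell(immortal):
    """Count replication errors that become part of the stem cell's template.

    Each asymmetric division copies the template; any new error lies on the
    newly synthesized strand. Under the immortal-strand mechanism the stem
    cell always keeps the old template, exporting every error to the
    differentiating daughter. Under random segregation it keeps the mutant
    strand half the time, after which the error is carried forever.
    """
    fixed = 0
    for _ in range(DIVISIONS):
        if random.random() < MUTATION_RATE:  # a replication error occurred
            if (not immortal) and random.random() < 0.5:
                fixed += 1  # stem cell kept the mutant strand
    return fixed

random.seed(1)
print("random segregation:", errors_fixed_in_stem_cell(False), "errors retained")
print("immortal strand   :", errors_fixed_in_stem_cell(True), "errors retained")
```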
Mechanisms
Full acceptance of a hypothesis generally requires a plausible mechanism that could mediate the effect. Although controversial, one suggestion is that this could be provided by the dynein motor; the paper proposing this mechanism is accompanied by a commentary summarizing its findings and background.
However, this work counts highly respected biologists among its detractors, as exemplified by a further comment on a 2006 paper by the same authors. The authors have rebutted the criticism.
See also
Telomere
References
DNA
Stem cells
Cell biology
Developmental biology | Immortal DNA strand hypothesis | Biology | 2,156 |
13,252,599 | https://en.wikipedia.org/wiki/Distribution%20amplifier | In electronics, a distribution amplifier, or simply distribution amp or DA, is a device that accepts a single input signal and provides this same signal to multiple isolated outputs.
These devices allow a signal to be distributed to multiple destinations without ground loops or signal degradation. They are used for a number of common engineering tasks, including feeding multiple amplifiers, cable television distribution, splitting monitor and front-of-house mixes, and "tapping" a signal prior to sending it through effects units to preserve a "dry" signal for later experimentation.
Audio distribution amplifier
An audio distribution amplifier, also known as a press feed, pool feed, media feed, press box, or ADA, takes a single audio feed, usually a line input but sometimes a microphone input, and outputs multiple line or microphone outputs. This can be done using a passive feed, where the signal is simply split among the outputs, or an active feed, where the outputs are amplified. The primary use of an audio distribution amplifier is to share a single audio feed with multiple members of the press pool.
Video distribution amplifier
A video distribution amplifier (also known as a distribution amp or VDA) takes a video signal as an input, amplifies it, and delivers the amplified signal to two or more outputs. It is primarily used to supply a single video signal to multiple pieces of video equipment, adjusting the amplitude of the signal to compensate for losses in the distribution system; extending the reach of the video signal is the main purpose of the VDA. VDAs are built for all common video formats, including NTSC, ATSC, QAM16, QAM32, QAM64, composite video, and component video.
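The gain requirement can be made concrete with a small calculation: an ideal lossless N-way split delivers 1/N of the input power to each output, a shortfall of 10·log10(N) dB that the amplifier must restore. The sketch below assumes an ideal splitter and ignores the cable and connector losses that a real VDA must also make up.

```python
import math

def ideal_split_loss_db(n_outputs: int) -> float:
    """Per-output power loss of an ideal lossless N-way splitter, in dB."""
    return 10 * math.log10(n_outputs)

# The amplifier must supply at least this much gain for each output to sit
# at the input level.
for n in (2, 4, 8):
    loss = ideal_split_loss_db(n)
    print(f"{n}-way split: each output {loss:.1f} dB down -> amp gain >= {loss:.1f} dB")
```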
Some VDAs are simple in construction and capability: they accept an input signal, amplify it, and output it. Others are more sophisticated, allowing remote control from a control station, adjustment of gain and equalization, and status reporting of the input and output signals over Ethernet networks.
See also
Microphone splitter
References
External links
Info about some very common distribution amplifier types.
Guide to HDMI Distribution Amp, switches and matrix
Press feeds from distributor Whirlwind USA.
Consumer electronics
Video hardware
Television technology | Distribution amplifier | Technology,Engineering | 443 |
1,109,484 | https://en.wikipedia.org/wiki/Telemachy | The Telemachy (from Greek Τηλεμάχεια, Tēlemacheia) is a term traditionally applied to the first four books of Homer's epic poem the Odyssey. They are named so because, just as the Odyssey tells the story of Odysseus, they tell the story of Odysseus's son Telemachus as he journeys from home for the first time in search of news about his missing father.
The Telemachy as an introduction to the Odyssey
The Odyssey is a nostos that recalls the story of Odysseus' journey home to Ithaca, finally completed twenty years after the Trojan War began. Odysseus, however, does not directly appear in the narrative until Book 5. Instead, the Telemachy's subject is the effect of Odysseus' absence on his family, Telemachus in particular. The first four books of the Odyssey give the reader a glimpse of the goings-on at the palace in Ithaca. There are a multitude of suitors vying for Penelope's hand in marriage, consuming the absent king's estate. They have been a terrible drain on the family's wealth, as they have been nearly permanent houseguests while Penelope put off her choice for three to four years. A brooding Telemachus wants to eject the suitors, and in fact announces his intention to do so; but he is not strong enough to act on the threat. Homer thus provides Telemachus with a motive for leaving Ithaca, and the reader with this portrait of Ithaca to place Odysseus' homecoming in context and to underscore the urgency of his journey.
Telemachus' Rites of Passage
Homeric scholarship generally recognizes the Telemachy as the story of its eponymous hero's journey from boyhood to manhood. It is only after having gone through this journey that Telemachus will be equipped to help Odysseus kill the suitors in Book 22. His first step toward Homer's ideal of manhood is a figurative one: in Book 1 Penelope tries to dictate what songs a bard should sing for the suitors. Telemachus (345ff.) admonishes her, and directs her to go back to her room; this signals the first time that Telemachus asserts himself as the head of the household in the Odyssey.
In Book 2 Telemachus further tries to assert his authority when he calls an Assembly and demands that the suitors leave his estate. But since Telemachus is, in his own words (61-2), "a weakling knowing nothing of valor," the suitors refuse, blaming Penelope for their staying so long. Telemachus then announces his intention to visit Sparta and Pylos in search of news about his father. This first journey away from home is an important part of the figurative journey from boyhood to manhood.
In Book 3 Telemachus is schooled in the ancient Greek social contract between hosts and their houseguests. The concept, called xenia, is simple: the host should offer the houseguest anything he wants, and the houseguest should not abuse this generosity, for he might find himself playing the part of host in the future. Nestor, the king of Pylos, exemplifies this social contract. Furthermore, Nestor's storytelling allows Homer to relate myths that fall outside of the Odyssey's purview. He reflects on the Trojan War, praising Odysseus for his cunning. Telemachus begins to learn and appreciate what kind of man his father was. The story Nestor tells of Orestes in particular serves as a model for Telemachus to emulate: just as Orestes killed the overbearing suitor who occupied his father Agamemnon's estate, so should Telemachus kill the suitors and reclaim his own father's estate.
In Book 4 Telemachus visits Menelaus in Sparta. Through the story-telling of Menelaus, Homer further narrates myths of the Trojan War that are not strictly the Odyssey's purview. Menelaus tells Telemachus of his own detour in Egypt on his way home from the Trojan War, during which he learned that Odysseus is still alive, a virtual captive of the nymph Calypso. His wife Helen recalls one of Odysseus' exploits during the war, which prompts Menelaus to tell his own story about Odysseus' heroism in the war. These tales of bravery and cunning both further educate Telemachus about his father, and serve as further examples of heroism to which he should aspire. The story of Orestes is revisited, again, to inspire Telemachus to take action against the suitors. Telemachus takes his own steps toward manhood when he leaves Sparta. Whereas he arrived at Pylos afraid to even speak to Nestor, upon leaving Menelaus he has enough confidence in himself to ask for a gift more appropriate for an inhabitant of rocky Ithaca. Menelaus obliges, and exchanges the chariot and team of horses he had given him for a wine bowl made by Hephaestus. Telemachus then begins his journey back home. But in Ithaca, the suitors have decided to ambush and kill Telemachus before he reaches his (669) "measure of manhood" and begins making trouble for them: in Book 2 Telemachus is considered a boy who poses no threat; by the end of Book 4 they fear his becoming a man who could stand up to them. The Telemachy draws abruptly to a close on this cliffhanger, with the suitors lying in wait for Telemachus at a harbour.
Typically, in the hero's journey he will receive occasional aid from a mentor figure. In the Odyssey, Athena serves as mentor to both Odysseus and Telemachus. In Book 1 she visits Telemachus disguised as the mortal Mentes to spur the young man to action. She alternately advises Telemachus in the guise of a man actually named Mentor—hence the word "mentor" in English.
Capstone to Telemachy
Near the end of the Odyssey, Telemachus demonstrates his decisiveness and independence by hanging the disloyal women slaves, instead of killing them by sword, for the sake of his honor.
Foreshadowing in the Telemachy
The Orestes paradigm treated above is perhaps the most overt example of foreshadowing events in the Odyssey's later books. The stories told about Odysseus serve a similar purpose. In the Telemachy, both Nestor and Menelaus praise Odysseus for his cunning. In telling of his own detour in Egypt, Menelaus emphasizes how the use of cunning and subterfuge were instrumental in his return to Sparta. It was only by hiding under a seal skin that he was able to ambush and capture Proteus, the only one who can direct Menelaus how to reach home. Although the scheme was not of Menelaus' devising, it does demonstrate that while the battlefield inspires bravery from its heroes, wily cunning also has its place when the situation demands. These recollections of stealth and subterfuge point to the tactics that Odysseus will eventually employ upon his return to Ithaca.
Notes
Odyssey
Nestor (mythology)
Components of intellectual works | Telemachy | Technology | 1,531 |
44,443,349 | https://en.wikipedia.org/wiki/Americium%28IV%29%20fluoride | Americium(IV) fluoride is the inorganic compound with the formula AmF4. It is a tan solid. In terms of its structure, solid AmF4 features 8-coordinate Am centers interconnected by doubly bridging fluoride ligands.
References
Americium compounds
Fluorides
Actinide halides | Americium(IV) fluoride | Chemistry | 72 |
9,315,395 | https://en.wikipedia.org/wiki/Parametricity | In programming language theory, parametricity is an abstract uniformity property enjoyed by parametrically polymorphic functions, which captures the intuition that all instances of a polymorphic function act the same way.
Idea
Consider this example, based on a set X and the type T(X) = [X → X] of functions from X to itself. The higher-order function twiceX : T(X) → T(X) given by twiceX(f) = f ∘ f is intuitively independent of the set X. The family of all such functions twiceX, parametrized by sets X, is called a "parametrically polymorphic function". We simply write twice for the entire family of these functions and write its type as ∀X. T(X) → T(X). The individual functions twiceX are called the components or instances of the polymorphic function. Notice that all the component functions twiceX act "the same way" because they are given by the same rule. Other families of functions obtained by picking one arbitrary function from each T(X) → T(X) would not have such uniformity. They are called "ad hoc polymorphic functions". Parametricity is the abstract property enjoyed by the uniformly acting families such as twice, which distinguishes them from ad hoc families. With an adequate formalization of parametricity, it is possible to prove that the parametrically polymorphic functions of type ∀X. T(X) → T(X) are in one-to-one correspondence with the natural numbers. The function corresponding to the natural number n is given by the rule f ↦ fⁿ, i.e., the polymorphic Church numeral for n. In contrast, the collection of all ad hoc families would be too large to be a set.
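The uniform rule can be illustrated in code. The sketch below uses Python, which is dynamically typed and therefore does not enforce parametricity; it only shows how a single definition yields every instance twiceX, and how twice coincides with the Church numeral for 2.

```python
def twice(f):
    """A single uniform rule; every instance twiceX is this same definition."""
    return lambda x: f(f(x))

def church(n):
    """The polymorphic function corresponding to n: the rule f |-> f^n."""
    def numeral(f):
        def apply_n(x):
            for _ in range(n):
                x = f(x)
            return x
        return apply_n
    return numeral

print(twice(lambda s: s + "!")("hi"))  # 'hi!!'  -- the instance at X = str
print(twice(lambda k: 3 * k)(2))       # 18      -- the instance at X = int
print(church(2)(lambda k: 3 * k)(2))   # 18      -- twice is the Church numeral 2
```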
History
The parametricity theorem was originally stated by John C. Reynolds, who called it the abstraction theorem. In his paper "Theorems for free!", Philip Wadler described an application of parametricity to derive theorems about parametrically polymorphic functions based on their types.
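As an informal illustration of such a "free theorem" (a sketch, not Wadler's formal derivation), parametricity implies that any polymorphic function of type [a] → [a] can only rearrange, drop, or duplicate list elements, and so must commute with map. The Python check below verifies one instance of this law; the function and values are illustrative.

```python
# Check one instance of the law  map(f) . r == r . map(f)  for r = reverse,
# a uniformly defined function of type [a] -> [a].

def r(xs):
    return xs[::-1]  # one uniform rule, usable at every element type

f = lambda n: n * n
xs = [1, 2, 3, 4]

lhs = list(map(f, r(xs)))  # apply r first, then map f
rhs = r(list(map(f, xs)))  # map f first, then apply r
assert lhs == rhs == [16, 9, 4, 1]
print("free theorem instance holds:", lhs)
```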
Programming language implementation
Parametricity is the basis for many program transformations implemented in compilers for the Haskell programming language. These transformations were traditionally thought to be correct in Haskell because of Haskell's non-strict semantics. Despite being a lazy programming language, Haskell does support certain primitive operations—such as the operator seq—that enable so-called "selective strictness", allowing the programmer to force the evaluation of certain expressions. In their paper "Free theorems in the presence of seq", Patricia Johann and Janis Voigtlaender showed that because of the presence of these operations, the general parametricity theorem does not hold for Haskell programs; thus, these transformations are unsound in general.
Dependent types
See also
Parametric polymorphism
Non-strict programming language
References
External links
Wadler: Parametricity
Programming language topics
Type theory
Polymorphism (computer science) | Parametricity | Mathematics,Engineering | 608 |
6,782,080 | https://en.wikipedia.org/wiki/Aerial%20rigging | Aerial rigging is a specialty within the field of rigging that deals specifically with human loads. Aerial rigging is the process of setting up equipment used to make humans fly, specifically aerial circus and aerial dance equipment.
Aerial rigging is commonly practiced to different degrees by specialty fabricators, professional riggers, professional aerial artists, as well as amateur aerial artists. Most aerial circus equipment is built by fabricators around the world that build equipment specifically for the circus industry.
Aerial artists, both professional and amateur, often become riggers out of necessity. They generally learn to rig what they need.
WLA (Weak Link Analysis) is the process of systematically analyzing aerial rigging for the weakest link or links in the system. WLA is the most common process used by aerial riggers to assess and improve rigging. However, it is not the only system used.
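As a minimal sketch of the idea, a WLA can be reduced to dividing each component's minimum breaking strength by a design factor and taking the minimum over the system. The component names, ratings, and design factor below are assumed, illustrative values; a real WLA must use manufacturer data and qualified engineering judgment.

```python
# Illustrative weak-link analysis: find the component that limits the system.

DESIGN_FACTOR = 10  # assumed factor for dynamic human loads; varies by use

components_mbs_kn = {  # minimum breaking strength (MBS) in kN, assumed values
    "beam clamp": 20.0,
    "steel carabiner": 24.0,
    "round sling": 22.0,
    "swivel": 36.0,
    "aerial fabric": 13.0,
}

working_limits = {name: mbs / DESIGN_FACTOR
                  for name, mbs in components_mbs_kn.items()}
weakest = min(working_limits, key=working_limits.get)

for name, wll in sorted(working_limits.items(), key=lambda kv: kv[1]):
    print(f"{name:15s} working load limit = {wll:.1f} kN")
print(f"weakest link: {weakest}; system limit = {working_limits[weakest]:.1f} kN")
```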
See also
Fly system, theatrical rigging
References
Introduction to Rigging: Lyras and Trapeze Bars
Introduction to Rigging: Aerial Fabrics
Rigging Math Made Simple
Allard-Buffet, Véronique. The Accommodating Showman. Diss. Carleton University, 2012.
Brunsdale, Maureen, and Mark Schmitt. The Bloomington-Normal Circus Legacy: The Golden Age of Aerialists. The History Press, 2013.
Cossin, M. & Bergeron-Parenteau, A. & Ross, A., (2022) “Maximal dynamic forces exerted by acrobats on nine circus apparatuses”, Circus: Arts, Life, and Sciences 1(1). doi: https://doi.org/10.3998/circus.2776
External links
Basic Circus Arts Instruction Manual: Chapter 8 – "Manual for Safety and Rigging." [PDF, 3.3 MB] European Federation of Professional Circus Schools (FEDEC), 2008.
Silk rigging tutorial and example
Aerial Rigging Fundamentals Classes
The Flying Trapeze Resource Page
AERISC – Association Européen pour la Recherche, l'Innovation et la Sécurité du Cirque (European Association for the Research, Innovation, and Safety of the Circus Arts)
High Performance Rigging for Aerial Performance
Circus skills
Special effects | Aerial rigging | Engineering | 456 |
9,279,449 | https://en.wikipedia.org/wiki/Pressurisation%20ductwork | Pressurisation ductwork is a passive fire protection system. It is used to supply fresh air to any area of refuge, designated emergency evacuation or egress route.
Purpose
The purpose of pressurisation ductwork is to maintain positive pressure in building spaces to prevent smoke from entering from other spaces in which a fire is occurring. It is typically used in exit stairways, corridors, and lobbies.
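To give a sense of the sizing involved, the sketch below estimates the supply air needed to hold a pressure differential across a leaky enclosure, using a commonly cited smoke-control design relationship for leakage through door gaps (Q ≈ 0.83·A·√ΔP). The leakage area and design pressure are assumed, illustrative values rather than requirements of any particular standard.

```python
# Rough supply-air estimate for a pressurised escape route (a sketch only).

LEAKAGE_AREA_M2 = 0.03  # assumed total effective leakage area around closed doors
PRESSURE_PA = 50.0      # assumed design pressure differential

def leakage_airflow_m3s(area_m2: float, dp_pa: float) -> float:
    """Airflow (m^3/s) needed to hold dp_pa across the given leakage area."""
    return 0.83 * area_m2 * dp_pa ** 0.5

q = leakage_airflow_m3s(LEAKAGE_AREA_M2, PRESSURE_PA)
print(f"required supply air ~= {q:.2f} m^3/s to hold {PRESSURE_PA:.0f} Pa")
```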
Requirements
Pressurisation ductwork is certified on the basis of fire testing such as ISO 6944.
Systems
There are two means of providing fire-resistance rated ductwork:
Inherently fire-resistant ducts (proprietary, factory-assembled), made of sheet metal shells filled with mixtures of rockwool, fiber, and silicon dioxide
Sheet metal duct with exterior fireproofing materials such as blanket rockwool, ceramic fiber, or intumescent paint.
See also
Heat and smoke vent
Fire protection
Smoke exhaust ductwork
Emergency evacuation
External links
ISO 6944-1:2008 Fire containment -- Elements of building construction -- Part 1: Ventilation ducts
Active fire protection
Pressure
Heating, ventilation, and air conditioning | Pressurisation ductwork | Physics | 223 |
199,051 | https://en.wikipedia.org/wiki/Antisocial%20personality%20disorder | Antisocial personality disorder (ASPD) is a personality disorder defined by a chronic pattern of behavior that disregards the rights and well-being of others. People with ASPD often exhibit behavior that conflicts with social norms, leading to issues with interpersonal relationships, employment, and legal matters. The condition generally manifests in childhood or early adolescence, with a high rate of associated conduct problems and a tendency for symptoms to peak in late adolescence and early adulthood.
The prognosis for ASPD is complex, with high variability in outcomes. Individuals with severe ASPD symptoms may have difficulty forming stable relationships, maintaining employment, and avoiding criminal behavior, resulting in higher rates of divorce, unemployment, homelessness, and incarceration. In extreme cases, ASPD may lead to violent or criminal behaviors, often escalating in early adulthood. Research indicates that individuals with ASPD have an elevated risk of suicide, particularly those who also engage in substance misuse or have a history of incarceration. Additionally, children raised by parents with ASPD may be at greater risk of delinquency and mental health issues themselves.
Although ASPD is a persistent and often lifelong condition, symptoms may diminish over time, particularly after age 40, though only a small percentage of individuals experience significant improvement. Many individuals with ASPD have co-occurring issues such as substance use disorders, mood disorders, or other personality disorders. Research on pharmacological treatment for ASPD is limited, with no medications approved specifically for the disorder. However, certain psychiatric medications, including antipsychotics, antidepressants, and mood stabilizers, may help manage symptoms like aggression and impulsivity in some cases, or treat co-occurring disorders.
The diagnostic criteria and understanding of ASPD have evolved significantly over time. Early diagnostic manuals, such as the DSM-I in 1952, described "sociopathic personality disturbance" as involving a range of antisocial behaviors linked to societal and environmental factors. Subsequent editions of the DSM have refined the diagnosis, eventually distinguishing ASPD in the DSM-III (1980) with a more structured checklist of observable behaviors. Current definitions in the DSM-5 align with the clinical description of ASPD as a pattern of disregard for the rights of others, with potential overlap in traits associated with psychopathy and sociopathy.
Symptoms and behaviors
Due to tendencies toward recklessness and impulsivity, patients with ASPD are at a higher risk of drug and alcohol abuse. ASPD is the personality disorder most likely to be associated with addiction. As a consequence of this tendency towards addiction, individuals with ASPD are at a higher risk of illegal drug use, blood-borne diseases, HIV, shorter periods of abstinence, misuse of orally administered medications, and compulsive gambling. In addition, sufferers are more likely to abuse substances or develop an addiction at a young age.
Because ASPD is associated with higher levels of impulsivity, suicidality, and irresponsible behavior, the condition is correlated with heightened levels of aggressive behavior, domestic violence, illegal drug use, pervasive anger, and violent crime. This behavior typically has negative effects on education, relationships, and employment. Risky sexual behaviors, such as having multiple sexual partners in a short period of time, seeing prostitutes, inconsistent condom use, trading sex for drugs, and frequent unprotected sex, are also common.
Patients with ASPD have been documented to describe emotions with ambivalence and experience heightened states of emotional coldness and detachment. Individuals with ASPD, or who display antisocial behavior, may often experience chronic boredom. They may experience emotions such as happiness and fear less clearly than others. It is also possible that they may experience emotions such as anger and frustration more frequently and clearly than other emotions.
People with ASPD may have a limited capacity for empathy and can be more interested in benefiting themselves than avoiding harm to others. They may have no regard for morals, social norms, or the rights of others. People with ASPD can have difficulty beginning or sustaining relationships. It is common for the interpersonal relationships of someone with ASPD to revolve around the exploitation and abuse of others. People with ASPD may display arrogance, think lowly and negatively of others, have limited remorse for their harmful actions, and have a callous attitude toward those they have harmed.
People with ASPD can have difficulty mentalizing, or interpreting the mental states of others. Alternatively, they may display a perfectly intact theory of mind, or the ability to attribute mental states to oneself and others, but have an impaired ability to understand how another individual may be affected by an aggressive action. These factors might contribute to aggressive and criminal behavior as well as empathy deficits. Despite this, they may be adept at social cognition, or the ability to process and store information about other people, which can contribute to an increased ability to manipulate others.
ASPD is highly prevalent among prisoners. People with ASPD tend to be convicted more, receive longer sentences, and are more likely to be charged with almost any crime, with assault and other violent crimes being the most common charges. Those who have committed violent crimes tend to have higher levels of testosterone than the average person, also contributing to the higher likelihood for men to be diagnosed with ASPD. The effect of testosterone is counteracted by cortisol, which facilitates the cognitive control of impulsive tendencies.
Arson and the destruction of others' property are also behaviors commonly associated with ASPD. Alongside other conduct problems, many people with ASPD had conduct disorder in their youth, characterized by a pervasive pattern of violent, criminal, defiant, and anti-social behavior.
Although behaviors vary by degree, individuals with this personality disorder have been known to exploit others in harmful ways for their own gain or pleasure, and frequently manipulate and deceive other people. While some do so with a façade of superficial charm, others do so through intimidation and violence. Individuals with antisocial personality disorder may deliberately show irresponsibility, have difficulty acknowledging their faults and/or attempt to redirect attention away from harmful behaviors.
Comorbidity
ASPD presents high comorbidity rates with various psychiatric conditions, particularly substance use and mood disorders. Individuals diagnosed with ASPD are significantly more prone to develop substance use disorders (SUDs), with studies showing that they are approximately 13 times more likely to be diagnosed with a SUD than those without ASPD. This population also faces increased risks for mood disorders, including a fourfold likelihood of experiencing major depressive disorder, as well as heightened risks for suicidal ideation and behaviors. Anxiety disorders, particularly post-traumatic stress disorder (PTSD) and social anxiety disorder, are also common comorbidities, affecting up to 50% of individuals with ASPD. These comorbidities often exacerbate the problems of those with ASPD, leading to more severe symptoms, complex treatment needs, and poorer clinical outcomes.
When ASPD is combined with alcoholism, people may show deficits in frontal brain function on neuropsychological tests greater than those associated with either condition alone. The co-occurring alcohol use disorder is likely driven by the poor impulse and behavioral control exhibited by patients with antisocial personality disorder.
Causes
Personality disorders are generally believed to be caused by a combination and interaction of genetic and environmental influences. People with an antisocial or alcoholic parent are considered to be at higher risk of developing ASPD. Fire-setting and cruelty to animals during childhood are also linked to the development of antisocial personality disorder, which is more common in males and among incarcerated populations. Although the factors listed correlate with the risk of developing ASPD, no single factor alone is likely to cause the disorder, and exposure to any one of them does not mean that a person has, or will develop, ASPD.
According to professor Emily Simonoff of the Institute of Psychiatry, Psychology and Neuroscience, there are many variables that are consistently connected to ASPD, such as childhood hyperactivity and conduct disorder, criminality in adulthood, lower IQ scores, and reading problems. Additionally, children who grow up with a predisposition to ASPD and interact with other delinquent children are more likely to later be diagnosed with ASPD.
Genetic
Research into genetic associations in antisocial personality disorder suggests that ASPD has some or even a strong genetic basis. The prevalence of ASPD is higher in people related to someone with the disorder. Twin studies, which are designed to discern between genetic and environmental effects, have reported significant genetic influences on antisocial behavior and conduct disorder.
In the specific genes that may be involved, one gene that has shown particular promise in its correlation with ASPD is the gene that encodes for monoamine oxidase A (MAO-A), an enzyme that breaks down monoamine neurotransmitters such as serotonin and norepinephrine. Various studies examining the gene's relationship to behavior have suggested that variants of the gene resulting in less MAO-A being produced (such as the 2R and 3R alleles of the promoter region) have associations with aggressive behavior in men.
This association is also influenced by negative experiences early in life, with children possessing a low-activity variant (MAOA-L) who have experienced negative circumstances being more likely to develop antisocial behavior than those with the high-activity variant (MAOA-H). Even when environmental interactions (e.g., emotional abuse) are taken out of the equation, a small association between MAOA-L and aggressive and antisocial behavior remains.
The gene that encodes for the serotonin transporter (SLC6A4), a gene that is heavily researched for its associations with other mental disorders, is another gene of interest in antisocial behavior and personality traits. Genetic association's studies have suggested that the short "S" allele is associated with impulsive antisocial behavior and ASPD in the inmate population.
However, research into psychopathy finds that the long "L" allele is associated with the Factor 1 traits of psychopathy, which describe its core affective (e.g., lack of empathy, fearlessness) and interpersonal (e.g., grandiosity, manipulativeness) personality disturbances. This is suggestive of two different forms of the disorder: one associated more with impulsive behavior and emotional dysregulation, and the other with predatory aggression and affective disturbance.
Various other gene candidates for ASPD have been identified by a genome-wide association study published in 2016. Several of these gene candidates are shared with attention-deficit hyperactivity disorder, with which ASPD is often comorbid. The study found that those who carry four mutations on chromosome 6 are 50% more likely to develop antisocial personality disorder than those who do not.
Physiological
Hormones and neurotransmitters
Traumatic events can lead to a disruption of the standard development of the central nervous system, which can generate a release of hormones that can change normal patterns of development.
One of the neurotransmitters that has been discussed in individuals with ASPD is serotonin, also known as 5-HT. A meta-analysis of 20 studies found significantly lower 5-HIAA levels (indicating lower serotonin levels), especially in those who are younger than 30 years of age.
While it has been shown that lower levels of serotonin may be associated with ASPD, there has also been evidence that decreased serotonin function is highly correlated with impulsiveness and aggression across a number of different experimental paradigms. Impulsivity is not only linked with irregularities in 5-HT metabolism but may be the most essential psychopathological aspect linked with such dysfunction. Correspondingly, the DSM classifies "impulsivity or failure to plan ahead" and "irritability and aggressiveness" as two of seven sub-criteria in category A of the diagnostic criteria of ASPD.
Some studies have found a relationship between monoamine oxidase A and antisocial behavior, including conduct disorder and symptoms of adult ASPD, in maltreated children.
Neurological
Antisocial behavior may be related to a number of neurological insults, such as head trauma. It is associated with decreased grey matter in the right lentiform nucleus, left insula, and frontopolar cortex. Increased volumes of grey matter have been observed in the right fusiform gyrus, inferior parietal cortex, right cingulate gyrus, and post-central cortex.
Intellectual and cognitive ability is often found to be impaired or reduced in the ASPD population. Contrary to stereotypes in popular culture of the "psychopathic genius", antisocial personality disorder is associated with reduced overall intelligence and specific reductions in individual aspects of cognitive ability. These deficits also occur in general-population samples of people with antisocial traits and in children with the precursors to antisocial personality disorder.
People who exhibit antisocial behavior tend to demonstrate decreased activity in the prefrontal cortex, an effect that is more apparent in functional than in structural neuroimaging. Some investigators have questioned whether the reduced volume in prefrontal regions is associated with antisocial personality disorder itself, or whether it results from co-morbid conditions, such as substance use disorder or childhood maltreatment. It is still considered an open question whether the anatomical abnormality causes the psychological and behavioral abnormality, or vice versa.
Antisocial behavior is also associated with structural brain differences. Some of the major areas involved are areas of the prefrontal cortex, such as the right frontal and temporal cortices, the ventromedial prefrontal cortex, and the middle and orbitofrontal cortices. In these areas, a reduction in gray matter is seen in individuals with antisocial personality disorder, suggesting these structural differences may play a role in their behavior. Reduced gray matter volumes in these areas are in fact associated with a lack of emotional regulation, a lack of behavioral and response inhibition, and poor decision making, among other effects. Additionally, those with ASPD have shown decreased gray matter volumes in other brain areas such as the amygdala and insula, suggesting possible issues with emotional reactions to certain stimuli.
Cavum septi pellucidi (CSP) is a marker for limbic neural maldevelopment, and its presence has been loosely associated with certain mental disorders, such as schizophrenia and post-traumatic stress disorder. One study found that those with CSP had significantly higher levels of antisocial personality, psychopathy, arrests and convictions compared with controls.
Environmental
Family environment
Many studies suggest that the social and home environment contributes to the development of ASPD. Parents of children with ASPD may display antisocial behavior themselves, which is then adopted by their children. A lack of parental stimulation and affection during early development can lead to high levels of cortisol in the absence of balancing hormones such as oxytocin.
This disrupts and overloads the child's stress response systems, which is thought to lead to underdevelopment of the part of the child's brain that deals with emotion, empathy, and the ability to connect to other humans on an emotional level. According to Dr. Bruce Perry in his book The Boy Who Was Raised as a Dog, "the infant's developing brain needs patterned, repetitive stimuli to develop properly. Spastic, unpredictable relief from fear, loneliness, discomfort, and hunger keeps a baby's stress system on high alert. An environment of intermittent care punctuated by total abandonment may be the worst of all worlds for a child."
Parenting styles
Some researchers hypothesize that parenting styles affect how children experience and develop in their youth, and may influence a child's risk of developing ASPD.
Childhood trauma
ASPD is highly comorbid with emotional and physical abuse in childhood. Physical neglect also has a significant correlation to ASPD. The way a child bonds with its parents early in life is important: poor parental bonding due to abuse or neglect puts children at greater risk of developing antisocial personality disorder. There is also a significant correlation between parental overprotection and the later development of ASPD.
Those with ASPD may have experienced any of the following forms of childhood trauma or abuse: physical or sexual abuse, neglect, coercion, abandonment or separation from caregivers, violence in a community, acts of terror, bullying, or life-threatening incidents. Some symptoms can mimic other forms of mental illness, such as:
post-traumatic stress disorder (symptoms of upsetting/terrifying memories of traumatic events)
reactive attachment disorder (little to no response regarding emotional triggers)
disinhibited social engagement disorder (willingness to go off with unfamiliar people without informing caregivers)
dissociative identity disorder (disconnection from self or environment)
The comorbidity rates of the previously listed disorders with ASPD tend to be much higher than in the general population.
Cultural influences
The sociocultural perspective of clinical psychology views disorders as influenced by cultural aspects; since cultural norms differ significantly, mental disorders (such as ASPD) are viewed differently. Robert D. Hare suggested that the rise in ASPD that has been reported in the United States may be linked to changes in cultural norms, serving to validate the behavioral tendencies of many individuals with ASPD. While the rise reported may be in part a byproduct of the widening use (and abuse) of diagnostic techniques, given Eric Berne's division between individuals with active and latent ASPD – the latter keeping themselves in check by attachment to an external source of control like the law, traditional standards, or religion – it has been suggested that the erosion of collective standards may serve to release the individual with latent ASPD from their previously prosocial behavior.
There is also a continuing debate as to the extent to which the legal system should be involved in the identification and admittance of patients with preliminary symptoms of ASPD. Controversial clinical psychiatrist Pierre-Édouard Carbonneau suggested that the problem with legally forced admittance is the rate of diagnostic failure for ASPD. He contends that misdiagnosing someone with ASPD and coercing them into taking prescribed medication could be potentially disastrous, but that the possibility of not diagnosing ASPD, and seeing a patient go untreated because of a lack of sufficient evidence of cultural or environmental influences, is something a psychiatrist cannot ignore; in his words, one must "play it safe".
Conduct disorder
While antisocial personality disorder is a mental disorder diagnosed in adulthood, it has its precedent in childhood. The DSM-5's criteria for ASPD require that the individual have conduct problems evident by the age of 15. Persistent antisocial behavior, as well as a lack of regard for others in childhood and adolescence, is known as conduct disorder and is the precursor of ASPD. About 25–40% of youths with conduct disorder will be diagnosed with ASPD in adulthood.
Conduct disorder (CD) is a disorder diagnosed in childhood that parallels the characteristics found in ASPD. It is characterized by a repetitive and persistent pattern of behavior in which the basic rights of others or major age-appropriate norms are violated by the child. Children with the disorder often display impulsive and aggressive behavior, may be callous and deceitful, may repeatedly engage in petty crime (such as stealing or vandalism), or get into fights with other children and adults.
This behavior is typically persistent and may be difficult to deter with either threat or punishment. Attention deficit hyperactivity disorder (ADHD) is common in this population, and children with the disorder may also engage in substance use. CD is distinct from oppositional defiant disorder (ODD) in that children with ODD do not commit aggressive or antisocial acts against other people, animals, or property, though many children diagnosed with ODD are subsequently re-diagnosed with CD.
Two developmental courses for CD have been identified based on the age at which the symptoms become present. The first course is known as the "childhood-onset type" and occurs when conduct disorder symptoms are present before the age of 10. This course is often linked to a more persistent life course and more pervasive behaviors, and children in this group express greater levels of ADHD symptoms, neuropsychological deficits, more academic problems, increased family dysfunction, and higher likelihood of aggression and violence.
The second course is known as the "adolescent-onset type" and occurs when conduct disorder develops after the age of 10 years. Compared to the childhood-onset type, less impairment in various cognitive and emotional functions are present, and the adolescent-onset variety may remit by adulthood. In addition to this differentiation, the DSM-5 provides a specifier for a callous and unemotional interpersonal style, which reflects characteristics seen in psychopathy and are believed to be a childhood precursor to this disorder. Compared to the adolescent-onset subtype, the childhood-onset subtype tends to have a worse treatment outcome, especially if callous and unemotional traits are present.
Diagnosis
DSM-5
Section II
The main text of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) defines antisocial personality disorder as being characterized by at least three of the following traits:
Failure to conform to social norms and laws, indicated by repeatedly engaging in illegal activities.
Deceitfulness, indicated by continuously lying, using aliases, or conning others for personal gain and pleasure.
Exhibiting impulsivity or failing to plan ahead.
Irritability and aggressiveness, indicated by repeatedly getting into fights or physically assaulting others.
Reckless behaviors that disregard the safety of others.
Irresponsibility, indicated by repeatedly failing to consistently work or honor financial obligations.
Lack of remorse after hurting or mistreating another person.
In order to be diagnosed with antisocial personality disorder under the DSM-5, one must be at least 18 years old, show evidence of onset of conduct disorder before age 15, and antisocial behavior cannot be explained by schizophrenia or bipolar disorder.
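Expressed as a bare decision rule, the Section II criteria combine a threshold over the seven traits with three gating conditions. The sketch below paraphrases the trait names from the list above and is purely illustrative; diagnosis requires clinical judgment, not a checklist script.

```python
# The trait names are paraphrased from the list above; this encodes only the
# bare decision rule and is in no way a diagnostic instrument.

SECTION_II_TRAITS = {
    "failure to conform to social norms and laws",
    "deceitfulness",
    "impulsivity or failure to plan ahead",
    "irritability and aggressiveness",
    "reckless disregard for the safety of others",
    "irresponsibility",
    "lack of remorse",
}

def meets_section_ii(traits_present, age, conduct_disorder_before_15,
                     explained_by_schizophrenia_or_bipolar):
    """At least three traits, adult age, CD onset before 15, no better explanation."""
    assert traits_present <= SECTION_II_TRAITS
    return (len(traits_present) >= 3
            and age >= 18
            and conduct_disorder_before_15
            and not explained_by_schizophrenia_or_bipolar)

print(meets_section_ii({"deceitfulness", "impulsivity or failure to plan ahead",
                        "lack of remorse"},
                       age=25, conduct_disorder_before_15=True,
                       explained_by_schizophrenia_or_bipolar=False))  # True
```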
Section III (Alternative Model of Personality Disorders)
In response to criticisms of the extant (Section II/DSM-IV) criteria for personality disorders, including their discordance with current models in the scientific literature, high comorbidity rate, overuse of some categories, underuse of others, and overwhelming use of the personality disorder-not otherwise specified (PD-NOS) diagnosis, the DSM-5 Workgroup on personality disorders devised a dimensional model, wherein categoric personality diagnoses reflect extreme variations of normal personality traits.
In response to criticisms of the extant Section II/DSM-IV criteria for ASPD, namely its failure to capture the interpersonal and affective features of psychopathy, new criteria were proposed.
In addition to the new criteria, the individual must be at least 18 years old, the traits must cause dysfunction or distress, and the traits should not be better explained by another mental disorder, the pathophysiological effects of a substance, or the person's cultural or social background. Also included is a "with psychopathic traits" specifier, modelled after the Fearless Dominance scale of the Psychopathic Personality Inventory and defined by low Anxiousness and Withdrawal and high Attention-Seeking. Researchers have also proposed the inclusion of Grandiosity and Restricted Affectivity to better capture psychopathy.
Psychopathy
Psychopathy is commonly defined as a personality construct characterized partly by antisocial behavior, a diminished capacity for empathy and remorse, and poor behavioral controls. Psychopathic traits are assessed using various measurement tools, including Canadian researcher Robert D. Hare's Psychopathy Checklist, Revised (PCL-R). "Psychopathy" is not the official title of any diagnosis in the DSM or ICD; nor is it an official title used by any other major psychiatric organizations. The DSM and ICD, however, state that their antisocial diagnoses are at times referred to (or include what is referred to) as psychopathy or sociopathy.
American psychiatrist Hervey Cleckley's work on psychopathy formed the basis of the diagnostic criteria for ASPD, and the DSM states ASPD is often referred to as psychopathy. However, critics argue ASPD is not synonymous with psychopathy as the diagnostic criteria are not the same, since criteria relating to personality traits are emphasized relatively less in the former. These differences exist in part because it was believed such traits were difficult to measure reliably and it was "easier to agree on the behaviors that typify a disorder than on the reasons why they occur".
Although the diagnosis of ASPD covers two to three times as many prisoners as the diagnosis of psychopathy, Robert Hare believes the PCL-R is better able to predict future criminality, violence, and recidivism than a diagnosis of ASPD. He suggests there are differences between PCL-R-diagnosed psychopaths and non-psychopaths in the "processing and use of linguistic and emotional information", while such differences are potentially smaller between those diagnosed with ASPD and those without. Additionally, Hare argued that confusion regarding how to diagnose ASPD, confusion regarding the difference between ASPD and psychopathy, and the differing future prognoses regarding recidivism and treatability may have serious consequences in settings such as court cases, where psychopathy is often seen as aggravating the crime.
Nonetheless, psychopathy has been proposed as a specifier under an alternative model for ASPD. In the DSM-5, under "Alternative DSM-5 Model for Personality Disorders", ASPD with psychopathic features is described as characterized by "a lack of anxiety or fear and by a bold interpersonal style that may mask maladaptive behaviors (e.g., fraudulence)". Low levels of withdrawal and high levels of attention-seeking combined with low anxiety are associated with "social potency" and "stress immunity" in psychopathy. Under the specifier, affective and interpersonal characteristics are comparatively emphasized over behavioral components. Research suggests that, even without the "with psychopathic traits" specifier, these Section III criteria accurately capture the affective-interpersonal features of psychopathy, though the specifier increases coverage of the Interpersonal and Lifestyle facets of the PCL-R.
Millon's subtypes
Theodore Millon suggested five subtypes of ASPD. However, these constructs are not recognized in the DSM or ICD.
Elsewhere, Millon differentiates ten subtypes (partially overlapping with the above) – covetous, risk-taking, malevolent, tyrannical, malignant, disingenuous, explosive, and abrasive – but specifically stresses that "the number 10 is by no means special ... Taxonomies may be put forward at levels that are more coarse or more fine-grained."
Treatment
ASPD is considered to be among the most difficult personality disorders to treat. Rendering effective treatment is further complicated by the inability to compare studies of psychopathy and ASPD, owing to differing diagnostic criteria, differences in defining and measuring outcomes, and a focus on treating incarcerated patients rather than those in the community. Because of their very low or absent capacity for remorse, individuals with ASPD often lack sufficient motivation and fail to see the costs associated with antisocial acts. They may only simulate remorse rather than truly commit to change: they can be charming and dishonest, and may manipulate staff and fellow patients during treatment. Studies have shown that outpatient therapy is not likely to be successful, but the extent to which persons with ASPD are entirely unresponsive to treatment may have been exaggerated.
Most treatment done is for those in the criminal justice system to whom the treatment regimes are given as part of their imprisonment. Those with ASPD may stay in treatment only as required by an external source, such as parole conditions. Residential programs that provide a carefully controlled environment of structure and supervision along with peer confrontation have been recommended. There has been some research on the treatment of ASPD that indicated positive results for therapeutic interventions.
Psychotherapy, also known as "talk" therapy, has been found to help treat patients with ASPD. Schema therapy is also being investigated as a treatment for ASPD. A review by Charles M. Borduin highlights the strong influence of multisystemic therapy (MST), which could potentially improve outcomes; however, this treatment requires the complete cooperation and participation of all family members. Some studies have found that the presence of ASPD does not significantly interfere with treatment for other disorders, such as substance use, although others have reported contradictory findings.
Therapists working with individuals with ASPD may have considerable negative feelings toward patients with extensive histories of aggressive, exploitative, and abusive behaviors. Rather than attempt to develop a sense of conscience in these individuals, which is extremely difficult considering the nature of the disorder, therapeutic techniques are focused on rational and utilitarian arguments against repeating past mistakes. These approaches would focus on the tangible, material value of prosocial behavior and abstaining from antisocial behavior. However, the impulsive and aggressive nature of those with this disorder may limit the effectiveness of this form of therapy.
The use of medications in treating antisocial personality disorder is still poorly explored, and no medications have been approved by the FDA to specifically treat ASPD. A 2020 Cochrane review of studies that explored the use of pharmaceuticals in ASPD patients, of which eight studies met the selection criteria for review, concluded that the current body of evidence was inconclusive for recommendations concerning the use of pharmaceuticals in treating the various issues of ASPD. Nonetheless, psychiatric medications such as antipsychotics, antidepressants, and mood stabilizers can be used to control symptoms such as aggression and impulsivity, as well as treat disorders that may co-occur with ASPD for which medications are indicated.
Prognosis
Boys are almost twice as likely as girls to meet all of the diagnostic criteria for ASPD, and they often start showing symptoms of the disorder much earlier in life. Children who do not show symptoms of the disorder through age 15 will almost never develop ASPD later in life. If adults exhibit milder symptoms of ASPD, it is likely that they never met the criteria for the disorder in childhood and were consequently never diagnosed. Overall, symptoms of ASPD tend to peak in the late teens and early twenties, but often diminish or improve by age 40.
ASPD is ultimately a lifelong disorder that has chronic consequences, though some of these can be moderated over time. There is high variability in the long-term outlook of antisocial personality disorder. Treatment of the disorder can be successful, but it entails unique difficulties. Rapid change is unlikely, especially when the condition is severe. In fact, past studies revealed that remission rates were small, with only 27–31% of patients with ASPD seeing an improvement, "with the most violent and dangerous features remitting". As a result of the characteristics of ASPD (e.g., displaying charm in pursuit of personal gain, manipulation), patients seeking treatment (mandated or otherwise) may appear to be "cured" in order to get out of treatment. According to definitions found in the DSM-5, people with ASPD can be deceitful and intimidating in their relationships. When they are caught doing something wrong, they often appear to be unaffected and unemotional about the consequences. Over time, continual behavior that lacks empathy and concern may lead to someone with ASPD taking advantage of the kindness of others, including their therapist.
Without proper treatment, individuals with ASPD could lead a life that brings about harm to themselves or others. This can be detrimental to their families and careers. Those with ASPD lack interpersonal skills (e.g., lack of remorse, lack of empathy, lack of emotional-processing skills). As a result of the inability to create and maintain healthy relationships due to this lack of interpersonal skills, individuals with ASPD may find themselves in predicaments such as divorce, unemployment, homelessness, and even premature death by suicide. They also show higher rates of crime, peaking in their late teens, with those diagnosed at younger ages often committing more severe crimes. Comorbidity of other mental illnesses such as depression or substance use disorder is prevalent among patients with ASPD. People with ASPD are also more likely to commit homicides and other crimes. Those who are imprisoned for longer periods often see greater improvement in ASPD symptoms than those imprisoned for shorter periods.
According to one study, aggressive tendencies appear in about 72% of all male patients diagnosed with ASPD, and about 29% of the men studied also showed premeditated aggression. Based on the evidence in the study, the researchers concluded that aggression in patients with ASPD is mostly impulsive, though there is also some long-term evidence of premeditated aggression. Those with more pronounced psychopathic traits are more likely to direct premeditated aggression at the people around them. Over the course of life with ASPD, a patient may repeatedly exhibit this aggressive behavior and harm those close to them.
Additionally, many people (especially adults) who have been diagnosed with ASPD become burdens to their close relatives, peers, and caretakers. Harvard Medical School recommends that time and resources be spent treating people who have been harmed by someone with ASPD, because the patient with ASPD may not respond to the administered therapies. In fact, a patient with ASPD may only accept treatment when ordered by a court, which makes the course of treatment more difficult. Because of these challenges, the patient's family and close friends must take an active role in decisions about the therapies offered to the patient. Ultimately, a group effort is needed to manage the long-term effects of the disorder.
Epidemiology
The estimated lifetime prevalence of ASPD in the general population falls between 1% and 4%, with estimates of roughly 6% for men and 2% for women. The prevalence of ASPD is even higher in selected populations, such as prisons, where violent offenders are over-represented; among prisoners it has been found to be just under 50%. According to one study (n = 23,000), the prevalence of ASPD in prisoners is 47% in men and 21% in women. Combined with the 27–31% remission rates noted above, this would imply that roughly one third of male prisoners and about 15% of female prisoners have ASPD symptoms that are unlikely to remit. Similarly, the prevalence of ASPD is higher among patients in alcohol or other drug (AOD) use treatment programs than in the general population, suggesting a link between ASPD and AOD use and dependence. As part of the Epidemiological Catchment Area (ECA) study, men with ASPD were found to be three to five times more likely than men without ASPD to use alcohol and illicit substances excessively. Substance use was found to be more severe in women with ASPD: in a study of both men and women with ASPD, women were more likely to misuse substances than their male counterparts.
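A rough calculation makes the prisoner figures above explicit. It assumes, as the passage does, that the general 27–31% remission estimate (midpoint about 29%) applies directly to prison populations, which the underlying studies do not directly establish:

```latex
\underbrace{0.47 \times (1 - 0.29)}_{\text{men}} \approx 0.33,
\qquad
\underbrace{0.21 \times (1 - 0.29)}_{\text{women}} \approx 0.15
```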
Homelessness is also common among people with ASPD. A study of 31 homeless youths in San Francisco and 56 in Chicago found that 84% and 48%, respectively, met the diagnostic criteria for ASPD. Another study of homeless people found that 25% of participants had ASPD.
Individuals with ASPD are at an elevated risk for suicide. Some studies suggest this increase in suicidality is partly due to the association between suicide and symptoms or patterns within ASPD, such as criminality and substance use. Children of people with ASPD are also at risk. Some research suggests that negative or traumatic experiences in childhood, perhaps resulting from the choices a parent with ASPD might make, can be a predictor of delinquency later in the child's life. Additionally, with variability between situations, children of a parent with ASPD may face consequences of delinquency if they are raised in an environment in which crime and violence are common. Suicide is a leading cause of death among youth who display antisocial behavior, especially when mixed with delinquency. Incarceration, which can come as a consequence of the actions of a person with ASPD, is a predictor of suicidal ideation in youth.
History
The first version of the DSM in 1952 listed sociopathic personality disturbance. This category was for individuals who were considered "...ill primarily in terms of society and of conformity with the prevailing milieu, and not only in terms of personal discomfort and relations with other individuals." There were four subtypes, referred to as "reactions": antisocial, dyssocial, sexual, and addiction. The antisocial reaction was said to include people who were "always in trouble" and not learning from it, maintaining "no loyalties", frequently callous and lacking responsibility, with an ability to "rationalize" their behavior. The category was described as more specific and limited than the existing concepts of "constitutional psychopathic state" or "psychopathic personality" which had a very broad meaning; the narrower definition was in line with criteria advanced by Hervey M. Cleckley from 1941, while the term sociopathic had been advanced by George Partridge in 1928 when studying the early environmental influence on psychopaths. Partridge discovered the correlation between antisocial psychopathic disorder and parental rejection experienced in early childhood.
The DSM-II in 1968 rearranged the categories and "antisocial personality" was now listed as one of ten personality disorders but still described similarly, to be applied to individuals who are: "basically unsocialized", in repeated conflicts with society, incapable of significant loyalty, selfish, irresponsible, unable to feel guilt or learn from prior experiences, and who tend to blame others and rationalize. The manual preface contains "special instructions" including "Antisocial personality should always be specified as mild, moderate, or severe." The DSM-II warned that a history of legal or social offenses was not by itself enough to justify the diagnosis, and that a "group delinquent reaction" of childhood or adolescence or "social maladjustment without manifest psychiatric disorder" should be ruled out first. The dyssocial personality type was relegated in the DSM-II to "dyssocial behavior" for individuals who are predatory and follow more or less criminal pursuits, such as racketeers, dishonest gamblers, prostitutes, and dope peddlers (DSM-I classified this condition as sociopathic personality disorder, dyssocial type). It would later resurface as the name of a diagnosis in the ICD manual produced by the WHO, later spelled dissocial personality disorder and considered approximately equivalent to the ASPD diagnosis.
The DSM-III in 1980 included the full term antisocial personality disorder and, as with other disorders, there was now a full checklist of symptoms focused on observable behaviors to enhance consistency in diagnosis between different psychiatrists ('inter-rater reliability'). The ASPD symptom list was based on the Research Diagnostic Criteria developed from the so-called Feighner Criteria of 1972, in turn largely credited to influential research by sociologist Lee Robins published in 1966 as "Deviant Children Grown Up". However, Robins had previously clarified that while the new criteria concerning prior childhood conduct problems came from her work, she and co-researcher psychiatrist Patricia O'Neal got the diagnostic criteria they used from her husband, the psychiatrist Eli Robins, one of the authors of the Feighner criteria, who had been using them as part of diagnostic interviews.
The DSM-IV maintained the trend for behavioral antisocial symptoms while noting, "This pattern has also been referred to as psychopathy, sociopathy, or dyssocial personality disorder" and re-including in the 'Associated Features' text summary some of the underlying personality traits from the older diagnoses. The DSM-5 has the same diagnosis of antisocial personality disorder. The Pocket Guide to the DSM-5 Diagnostic Exam suggests that a person with ASPD may present "with psychopathic features" if he or she exhibits "a lack of anxiety or fear and a bold, efficacious interpersonal style".
See also
References
Further reading
External links
DSM-V-TR criteria + additional information
DSM-IV-TR Criteria for Antisocial personality disorder
Psychopathy and Antisocial Personality Disorder: A Case of Diagnostic Confusion
Anti-social behaviour
Behavioural sciences
Cluster B personality disorders
Criminology
Forensic psychology
Psychopathy | Antisocial personality disorder | Biology | 8,483 |
39,509,139 | https://en.wikipedia.org/wiki/Austropaxillus%20infundibuliformis | Austropaxillus infundibuliformis (formerly Paxillus infundibuliformis) is a species of fungus in the family Serpulaceae. A mycorrhizal species, it grows in the eucalypt forests of southeastern Australia. It is readily recognised by its tawny yellow colour, large size (relative to other Australian mushrooms) and forked decurrent gills.
Taxonomy
The species was first described in 1927 by Australian mycologist John Burton Cleland as Paxillus infundibuliformis. The initial specimens were found in Kuitpo Forest, Mount Lofty, Mount Sedgwick, and near Bendigo. Rolf Singer's 1945 Phylloporus infundibuliformis is a synonym. It was given its current name in 1999, when Andreas Bresinsky and colleagues, studying the genus Paxillus, determined that several Southern Hemisphere species belonged to a lineage most closely related to the brown rot genus Serpula.
Description
Austropaxillus infundibuliformis is readily identified by its large size (for an Australian mushroom), colour and gills. The cap is convex to flattened and features an inrolled margin when young; it grows to diameters of up to . As it matures, it develops a central depression and becomes funnel-shaped, and the margin becomes wavy and folded. The cap colour ranges from yellow brown to dark brown, while the surface is dry and felt-like, sometimes developing small cracks with age. The closely spaced, pale cream to pale yellow-brown gills are decurrent and interspersed with lamellulae (short gills). Gills are shallow (up to 4 mm deep), have a smooth edge, and are multiply forked. They can be readily removed from the flesh of the cap. The stipe measures up to long by thick. Yellowish with a lighter shading near its base, it bruises dark brown where it has been injured or handled. The flesh has no distinctive odour and a bitter taste. The spore print is brown, while the spores are somewhat fuse-shaped (fusiform), thick-walled, and measure 11–13 by 5–6 μm.
Habitat and distribution
Found across southern Australia, with records from Western Australia to Victoria and New South Wales, Austropaxillus infundibuliformis is a mycorrhizal species and fruits on the ground in eucalypt forests, such as those dominated by Eucalyptus obliqua.
Watling and colleagues examined a specimen regarded as A. infundibuliformis from Mount Field in Tasmania and found it to have smaller spores and paler gills.
References
External links
Boletales
Fungi described in 1927
Fungi native to Australia
Taxa named by John Burton Cleland
Fungus species | Austropaxillus infundibuliformis | Biology | 574 |
38,152,608 | https://en.wikipedia.org/wiki/Eta%20Pavonis | Eta Pavonis, a name latinized from η Pavonis, is a single star in the southern constellation of Pavo, positioned near the western constellation border next to Ara. It has an orange hue and is visible to the naked eye with an apparent visual magnitude of 3.61. Based on parallax, this object is located at a distance of approximately from the Sun. It has an absolute magnitude of −1.56, and is drifting closer with a radial velocity of −7.6 km/s.
This is an evolved bright giant star with a stellar classification K2II, between the classifications of giant and supergiant. Having exhausted the supply of hydrogen at its core, it has expanded to around 33.5 times the radius of the Sun. The star is radiating 469 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,642 K.
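These parameters are mutually consistent under the Stefan–Boltzmann law; a quick check, taking the solar effective temperature as approximately 5,772 K:

```latex
\frac{L}{L_\odot}
= \left(\frac{R}{R_\odot}\right)^{2}
  \left(\frac{T_{\mathrm{eff}}}{T_\odot}\right)^{4}
= (33.5)^{2} \left(\frac{4642}{5772}\right)^{4}
\approx 1122 \times 0.418 \approx 469
```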
References
K-type bright giants
Pavo (constellation)
Pavonis, Eta
Durchmusterung objects
160635
086929
6582 | Eta Pavonis | Astronomy | 221 |
13,200,967 | https://en.wikipedia.org/wiki/Entegris | Entegris, Inc. is a supplier of materials for the semiconductor and other high-tech industries. Entegris has approximately 8,000 employees throughout its global operations. It has manufacturing, customer service and/or research facilities in the United States, Canada, China, Germany, Israel, Japan, Malaysia, Singapore, South Korea, and Taiwan. The company’s corporate headquarters are in Billerica, Massachusetts.
The company seeks to help manufacturers increase their yields by improving contamination control in several key processes, including photolithography, wet etch and clean, chemical-mechanical planarization, thin-film deposition, bulk chemical processing, wafer and reticle handling and shipping, and testing, assembly and packaging. Approximately 80% of the company's products are used in the semiconductor industry.
Products
Entegris products include: filtration products that purify process gases and fluids, as well as the ambient environment; liquid systems and components that dispense, control, or transport process fluids; high-performance materials and specialty gas management solutions; wafer carriers and shippers that protect the semiconductor wafer from contamination and breakage; and specialized graphite, silicon carbide, and coatings.
History
The company was incorporated in 1999 as the combined entity of Fluoroware, Inc., which began operating in 1966, and EMPAK, Inc. The company went public in 2000.
In August 2005, Entegris merged with Mykrolis Corporation, a publicly held supplier of filtration products to the semiconductor industry. Mykrolis was spun-out of Millipore Corporation in 2000.
In August 2008, Entegris acquired Poco Graphite, Inc., a Decatur, Texas supplier of specialized graphite and silicon carbide products for use in semiconductor, EDM, glass bottling, biomedical, aerospace, and alternative energy applications.
On April 30, 2014, Entegris acquired Danbury, Connecticut-based ATMI, a publicly held company providing critical materials and materials-handling solutions to the semiconductor industry, in a $1.1 billion transaction.
In December 2020, Entegris announced an investment of US$500 million to build a state-of-the-art facility in the Kaohsiung Science Park in Taiwan, with the project expected to be completed in three years.
In July 2022, Entegris acquired CMC Materials Inc, another US semiconductor chemicals company, for $5.7 billion. CMC Materials, previously known as Cabot Microelectronics Corporation, had 2,200 employees.
References
Li, S., Shih, S., Yen, S., Yang, J.: "Case Study of Microcontamination Control." Aerosol and Air Quality Research, Vol. 7, No. 3, pp. 432–442, 2007
Manufacturing companies based in Massachusetts
Manufacturing companies established in 1966
Equipment semiconductor companies
1966 establishments in Massachusetts
Companies listed on the Nasdaq
2000 initial public offerings
2005 mergers and acquisitions
1999 mergers and acquisitions | Entegris | Engineering | 626 |
14,416,980 | https://en.wikipedia.org/wiki/Homotrimer | In molecular biology, a homotrimer () is a protein which is composed of three identical subunits of polypeptide.
Examples
Hemagglutinin (influenza)
Spike protein (coronavirus)
See also
Protein trimer
References
Peptides | Homotrimer | Chemistry | 52 |
2,472,880 | https://en.wikipedia.org/wiki/Biased%20graph | In mathematics, a biased graph is a graph with a list of distinguished circles (edge sets of simple cycles), such that if two circles in the list are contained in a theta graph, then the third circle of the theta graph is also in the list. A biased graph is a generalization of the combinatorial essentials of a gain graph and in particular of a signed graph.
Formally, a biased graph Ω is a pair (G, B) where B is a linear class of circles; this by definition is a class of circles that satisfies the theta-graph property mentioned above.
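Concretely, a theta graph consists of two vertices joined by three internally disjoint paths P1, P2, P3, whose pairwise unions form its three circles; each circle is then the symmetric difference of the other two. One way to write the defining condition on B:

```latex
C_1 = P_2 \cup P_3,\quad C_2 = P_1 \cup P_3,\quad C_3 = P_1 \cup P_2
\;\Longrightarrow\; C_3 = C_1 \,\triangle\, C_2,
\qquad
C_1, C_2 \in B \;\Longrightarrow\; C_3 \in B
```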
A subgraph or edge set whose circles are all in B (and which contains no half-edges) is called balanced. For instance, a circle belonging to B is balanced and one that does not belong to B is unbalanced.
Biased graphs are interesting mostly because of their matroids, but also because of their connection with multiary quasigroups. See below.
Technical notes
A biased graph may have half-edges (one endpoint) and loose edges (no endpoints). The edges with two endpoints are of two kinds: a link has two distinct endpoints, while a loop has two coinciding endpoints.
Linear classes of circles are a special case of linear subclasses of circuits in a matroid.
Examples
If every circle belongs to B, and there are no half-edges, Ω is balanced. A balanced biased graph is (for most purposes) essentially the same as an ordinary graph.
If B is empty, Ω is called contrabalanced. Contrabalanced biased graphs are related to bicircular matroids.
If B consists of the circles of even length, Ω is called antibalanced and is the biased graph obtained from an all-negative signed graph.
The linear class B is additive, that is, closed under repeated symmetric difference (when the result is a circle), if and only if B is the class of positive circles of a signed graph.
Ω may have an underlying graph that is a cycle of length n ≥ 3 with all edges doubled. Call this a biased 2Cn. Such biased graphs, in which no digon (circle of length 2) is balanced, lead to spikes and swirls (see Matroids, below).
Some kinds of biased graph are obtained from gain graphs or are generalizations of special kinds of gain graph. The latter include biased expansion graphs, which generalize group expansion graphs.
Minors
A minor of a biased graph Ω = (G, B) is the result of any sequence of taking subgraphs and contracting edge sets. For biased graphs, as for graphs, it suffices to take a subgraph (which may be the whole graph) and then contract an edge set (which may be the empty set).
A subgraph of Ω consists of a subgraph H of the underlying graph G, with balanced circle class consisting of those balanced circles that are in H. The deletion of an edge set S, written Ω − S, is the subgraph with all vertices and all edges except those of S.
Contraction of Ω is relatively complicated. To contract one edge e, the procedure depends on the kind of edge e is. If e is a link, contract it in G. A circle C in the contraction G/e is balanced if either C or C ∪ e is a balanced circle of G. If e is a balanced loop or a loose edge, it is simply deleted. If it is an unbalanced loop or a half-edge, it and its vertex v are deleted; each other edge with v as an endpoint loses that endpoint, so a link with v as one endpoint becomes a half-edge at its other endpoint, while a loop or half-edge at v becomes a loose edge.
In the contraction Ω/S by an arbitrary edge set S, the edge set is E − S. (We let G = (V, E).) The vertex set is the class of vertex sets of balanced components of the subgraph (V, S) of Ω. That is, if (V, S) has balanced components with vertex sets V1, ..., Vk, then Ω/S has k vertices V1, ..., Vk . An edge e of Ω, not in S, becomes an edge of Ω/S and each endpoint vi of e in Ω that belongs to some Vi becomes the endpoint Vi of e in Ω/S; thus, an endpoint of e that is not in a balanced component of (V, S) disappears. An edge with all endpoints in unbalanced components of (V, S) becomes a loose edge in the contraction. An edge with only one endpoint in a balanced component of (V, S) becomes a half-edge. An edge with two endpoints that belong to different balanced components becomes a link, and an edge with two endpoints that belong to the same balanced component becomes a loop.
Matroids
There are two kinds of matroid associated with a biased graph, both of which generalize the cycle matroid of a graph (Zaslavsky, 1991).
The frame matroid
The frame matroid (sometimes called bias matroid) of a biased graph, M(Ω), (Zaslavsky, 1989) has for its ground set the edge set E. An edge set is independent if each component contains either no circles or just one circle, which is unbalanced. (In matroid theory a half-edge acts like an unbalanced loop and a loose edge acts like a balanced loop.) M(Ω) is a frame matroid in the abstract sense, meaning that it is a submatroid of a matroid in which, for at least one basis, the set of lines generated by pairs of basis elements covers the whole matroid. Conversely, every abstract frame matroid is the frame matroid of some biased graph.
The circuits of the matroid are called frame circuits or bias circuits. There are four kinds. One is a balanced circle. Two other kinds are a pair of unbalanced circles together with a connecting simple path, such that the two circles are either disjoint (then the connecting path has one end in common with each circle and is otherwise disjoint from both) or share just a single common vertex (in this case the connecting path is that single vertex). The fourth kind of circuit is a theta graph in which every circle is unbalanced.
The rank of an edge set S is n − b, where n is the number of vertices of G and b is the number of balanced components of S, counting isolated vertices as balanced components.
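As a worked example of this rank formula: if S is the edge set of a single unbalanced circle on k vertices in a graph with n vertices, the balanced components of (V, S) are exactly the n − k isolated vertices, so

```latex
r(S) = n - b = n - (n - k) = k = |S|,
```

consistent with the definition above, under which an unbalanced circle is an independent set of the frame matroid.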
Minors of the frame matroid agree with minors of the biased graph; that is, M(Ω−S) = M(Ω)−S and M(Ω/S) = M(Ω)/S.
Frame matroids generalize the Dowling geometries associated with a group (Dowling, 1973). The frame matroid of a biased 2Cn (see Examples, above) which has no balanced digons is called a swirl. It is important in matroid structure theory.
The lift matroid
The extended lift matroid L0(Ω) has for its ground set the set E0, which is the union of E with an extra point e0. The lift matroid L(Ω) is the extended lift matroid restricted to E. The extra point acts exactly like an unbalanced loop or a half-edge, so we describe only the lift matroid.
An edge set is independent if it contains either no circles or just one circle, which is unbalanced.
A circuit is a balanced circle, a pair of unbalanced circles that are either disjoint or have just a common vertex, or a theta graph whose circles are all unbalanced.
The rank of an edge set S is n − c + ε, where c is the number of components of S, counting isolated vertices, and ε is 0 if S is balanced and 1 if it is not.
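Repeating the earlier example for the lift matroid: an unbalanced circle S on k vertices gives c = (n − k) + 1 components and ε = 1, while a balanced circle of the same size gives ε = 0, so

```latex
r_{\text{unbalanced}}(S) = n - (n - k + 1) + 1 = k,
\qquad
r_{\text{balanced}}(S) = n - (n - k + 1) + 0 = k - 1,
```

matching the independence rule: an unbalanced circle is independent, while a balanced circle is a circuit.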
Minors of the lift and extended lift matroids agree in part with minors of the biased graph. Deletions agree: L(Ω−S) = L(Ω)−S. Contractions agree only for balanced edge sets: L(Ω/S) = L(Ω)/S if S is balanced, but not if it is unbalanced. If S is unbalanced, L(Ω/S) = M(G)/S = M(G/S), where M of a graph denotes the ordinary graphic matroid.
The lift matroid of a 2Cn (see Examples, above) which has no balanced digons is called a spike. Spikes are quite important in matroid structure theory.
Multiary quasigroups
Just as a group expansion of a complete graph Kn encodes the group (see Dowling geometry), its combinatorial analog, expanding a simple cycle of length n + 1, encodes an n-ary (multiary) quasigroup. It is possible to prove theorems about multiary quasigroups by means of biased graphs (Zaslavsky, 2012).
References
T. A. Dowling (1973), A class of geometric lattices based on finite groups. Journal of Combinatorial Theory, Series B, Vol. 14, pp. 61–86.
Thomas Zaslavsky (1989), Biased graphs. I. Bias, balance, and gains. Journal of Combinatorial Theory, Series B, Vol. 47, pp. 32–52.
Thomas Zaslavsky (1991), Biased graphs. II. The three matroids. Journal of Combinatorial Theory, Series B, Vol. 51, pp. 46–72.
Thomas Zaslavsky (1999). A mathematical bibliography of signed and gain graphs and allied areas. 1999 edition: Electronic Journal of Combinatorics, Dynamic Surveys in Combinatorics, #DS8, archived. Current edition: Electronic Journal of Combinatorics, Dynamic Surveys in Combinatorics, #DS8.
Thomas Zaslavsky (2012), Associativity in multiary quasigroups: the way of biased expansions. Aequationes Mathematicae, Vol. 83, pp. 1–66.
Graph families
Matroid theory | Biased graph | Mathematics | 2,129 |
450,268 | https://en.wikipedia.org/wiki/Stationary%20engine | A stationary engine is an engine whose framework does not move. They are used to drive immobile equipment, such as pumps, generators, mills or factory machinery, or cable cars. The term usually refers to large immobile reciprocating engines, principally stationary steam engines and, to some extent, stationary internal combustion engines. Other large immobile power sources, such as steam turbines, gas turbines, and large electric motors, are categorized separately.
Stationary engines, especially stationary steam engines, were once widespread in the late Industrial Revolution. This was an era when each factory or mill generated its own power, and power transmission was mechanical (via line shafts, belts, gear trains, and clutches). Applications for stationary engines have declined since electrification became widespread; most industrial uses today draw electricity from an electrical grid and distribute it to various individual electric motors instead.
Engines that operate in one place, but can be moved to another place for later operation, are called portable engines. Although stationary engines and portable engines are both "stationary" (not moving) while running, preferred usage (for clarity's sake) reserves the term "stationary engine" to the permanently immobile type, and "portable engine" to the mobile type.
Types of stationary engine
There are many types of stationary engines. These include:
Stationary steam engine
Hit and miss engine
Hot bulb engine
Hot tube engine
Applications
Stationary engines had a wide range of applications but they were especially used by small companies and operations, requiring power in limited settings at specific sites.
Lead, tin, and copper mines
Cotton, woollen, and worsted mills
Flour mills and corn grinders
A flat belt could be used to connect an engine to a flour mill or corn grinder; such machines are popular exhibits at old engine shows. Corn grinders took corn off the cob and ground it into animal feed, while flour mills ground grain into flour.
Electricity generation
Before mains electricity and the formation of nationwide power grids, stationary engines were widely used for small-scale electricity generation. While large power stations in cities used steam turbines or high-speed reciprocating steam engines, in rural areas petrol/gasoline, paraffin/kerosene, and fuel oil-powered internal combustion engines were cheaper to buy, install, and operate, since they could be started and stopped quickly to meet demand, left running unattended for long periods of time, and did not require a large dedicated engineering staff to operate and maintain. Due to their simplicity and economy, hot bulb engines were popular for high-power applications until the diesel engine took their place from the 1920s. Smaller units were generally powered by spark-ignition engines, which were cheaper to buy and required less space to install.
Most engines of the late-19th and early-20th centuries ran at speeds too low to drive a dynamo or alternator directly. As with other equipment, the generator was driven off the engine's flywheel by a broad flat belt. The pulley on the generator was much smaller than the flywheel, providing the required 'gearing up' effect. Later spark-ignition engines developed from the 1920s could be directly coupled.
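The step-up follows from the belt moving at the same linear speed over both pulleys; the numbers below are purely illustrative, not from any particular installation:

```latex
N_{\text{generator}} = N_{\text{engine}} \times \frac{D_{\text{flywheel}}}{D_{\text{pulley}}},
\qquad
\text{e.g.}\quad 150\ \text{rpm} \times \frac{2.4\ \text{m}}{0.24\ \text{m}} = 1500\ \text{rpm}
```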
Up to the 1930s most rural houses in Europe and North America needed their own generating equipment if electric light was fitted. Engines would often be installed in a dedicated "engine house", which was usually an outbuilding separate from the main house to reduce the interference from the engine noise. The engine house would contain the engine, the generator, the necessary switchgear and fuses, as well as the engine's fuel supply and usually a dedicated workshop space with equipment to service and repair the engine. Wealthy households could afford to employ a dedicated engineer to maintain the equipment, but as the demand for electricity spread to smaller homes, manufacturers produced engines that required less maintenance and that did not need specialist training to operate.
Such generator sets were also used in industrial complexes and public buildings – anywhere where electricity was required but mains electricity was not available.
Most countries in the Western world completed large-scale rural electrification in the years following World War II, making individual generating plants obsolete for front-line use. However, even in countries with a reliable mains supply, many buildings are still fitted with modern diesel generators for emergency use, such as hospitals and pumping stations. This network of generators often forms a crucial part of the national electricity system's strategy for coping with periods of high demand.
Pumping stations
The development of water supply and sewage removal systems required the provision of many pumping stations. In these, some form of stationary engine (steam-powered for earlier installations) is used to drive one or more pumps, although electric motors are more conventionally used nowadays.
Canals
For canals, a distinct area of application concerned the powering of boat lifts and inclined planes. Where possible these would be arranged to utilise water and gravity in a balanced system, but in some cases additional power input was required from a stationary engine for the system to work. The vast majority of these were constructed (and in many cases, demolished again) before steam engines were supplanted by internal combustion alternatives.
Cable haulage railways
Industrial railways in quarries and mines made use of cable railways based on the inclined plane idea, and certain early passenger railways in the UK were planned with lengths of cable-haulage to overcome severe gradients.
For the first proper railway, the Liverpool and Manchester of 1830, it was not clear whether locomotive traction would work, and the railway was designed with its steep 1 in 100 gradients concentrated on either side of Rainhill, in case cable haulage proved necessary; had it been, inconvenient and time-consuming shunting would have been required to attach and detach the cables. The Rainhill gradients proved not to be a problem, and in the event, locomotive traction was determined to be a new technology with great potential for further development.
The steeper 1 in 50 grades from Liverpool down to the docks were operated by cable traction for several decades until locomotives improved. Cable haulage continued to be used where gradients were even steeper.
Cable haulage did prove viable where the gradients were exceptionally steep, such as the 1 in 8 gradients of the Cromford and High Peak Railway opened in 1830. Cable railways generally have two tracks with loaded wagons on one track partially balanced by empty wagons on the other, to minimize fuel costs for the stationary engine. Various kinds of rack railways were developed to overcome the lack of friction of conventional locomotives on steep gradients.
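The fuel saving from the balancing arrangement just described can be seen from the steady-state force the stationary engine must supply on a gradient of angle θ (friction neglected; the masses below are illustrative assumptions):

```latex
F \approx \left(m_{\text{loaded}} - m_{\text{empty}}\right) g \sin\theta,
\qquad
\text{e.g.}\quad (10\,000 - 4\,000)\ \text{kg} \times 9.81\ \text{m/s}^2 \times \tfrac{1}{8} \approx 7.4\ \text{kN}
```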
These early installations of stationary engines would all have been steam-powered initially.
Some manufacturers of stationary engines
Associated Manufacturers Company US
Blackstone & Co UK c.1882–1936
Briggs & Stratton US
Charter Gas Engine Company c.1883–1920
Cushman
Deere & Company / John Deere US
Electro-Motive US
Emerson-Brantingham US
Fairbanks-Morse US
Fuller and Johnson
Hercules Gas Engine Company 1912–1930
Hercules Motors Corporation 1915–1967, 1976–
Richard Hornsby & Sons UK
International Harvester US
Jacobson Machine Manufacturing Company
Kohler Company US
Lister Petter UK
R A Lister and Company UK
Petters Limited UK
Malkotsis Greece
National Gas Engine Company UK
New Holland Machine Company US
Olds Gasoline Engine Works (Pliny Olds, sons Wallace and Ransom) (1890–1910)
Otto Gas Engine Works
Palmer Brothers
Rider-Ericsson Engine Company
Russell & Company US
Stover Manufacturing and Engine Company
Van Duzen Gas and Gasoline Engine Company c.1891–1898
Waterloo Gasoline Engine Company US
Wärtsilä
Witte Engine Works
Preserved stationary engines
Many steam rallies, like the Great Dorset Steam Fair, include an exhibit section for internal combustion stationary engines for which purpose the definition is usually extended to include any engine that was not intended primarily for the propulsion of a vehicle. Thus many are in fact portable engines, either from new or having been converted by mounting on a wheeled trolley for ease of transport and may also include such things as marine or airborne auxiliary power units and engines removed from equipment such as motor mowers. These engines have been restored by private individuals and often are exhibited in operation, powering water pumps, electric generators, hand tools, and the like.
In the UK there are a few museums where visitors can see stationary engines in operation. Many museums have one or more engines, but only a few specialise in internal combustion stationary engines. Among these are the Internal Fire Museum of Power, in Wales, and the Anson Engine Museum in Cheshire. The Amberley Working Museum in West Sussex also has a number of engines, as does Kew Bridge Steam Museum in London.
See also
Canterbury and Whitstable Railway
Diesel generator, which may be stationary
Engine-generator, which may be stationary
Hillclimbing (railway)
Non-road engine
References
External links
Antique Stationary Engines
Internal Fire Museum of Power in Wales
Anson Engine Museum in Cheshire
Stationary engine website
Stationary engines in South Africa
Harry's Old Engine Home Page
Antique-engine.com
Engines
Engine technology | Stationary engine | Physics,Technology | 1,800 |
24,676,312 | https://en.wikipedia.org/wiki/Advanced%20Intrusion%20Detection%20Environment | The Advanced Intrusion Detection Environment (AIDE) was initially developed as a free replacement for Tripwire licensed under the terms of the GNU General Public License (GPL).
The primary developers are Rami Lehti and Pablo Virolainen, who are both associated with the Tampere University of Technology, along with Richard van den Berg, an independent Dutch security consultant. The project is used on many Unix-like systems as an inexpensive baseline control and rootkit detection system.
Functionality
AIDE takes a "snapshot" of the state of the system, registering hashes, modification times, and other data regarding the files defined by the administrator. This "snapshot" is used to build a database that is saved and may be stored on an external device for safekeeping.
When the administrator wants to run an integrity test, the administrator places the previously built database in an accessible place and commands AIDE to compare the database against the real status of the system. Should a change have happened to the computer between the snapshot creation and the test, AIDE will detect it and report it to the administrator. Alternatively, AIDE can be configured to run on a schedule and report changes daily using scheduling technologies such as cron, which is the default behavior of the Debian AIDE package.
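A minimal sketch of this init/check cycle on a typical Linux system follows. Paths, rule names, and available attributes vary by distribution and AIDE version; the `Full` rule name and the monitored directories here are arbitrary examples:

```
# /etc/aide.conf (fragment)
database=file:/var/lib/aide/aide.db           # baseline read during --check
database_out=file:/var/lib/aide/aide.db.new   # database written by --init

# p=permissions, i=inode, n=link count, u=user, g=group,
# s=size, m=mtime, c=ctime, plus a cryptographic checksum
Full = p+i+n+u+g+s+m+c+sha256

/etc   Full        # monitor these trees with the Full rule
/bin   Full

# Build the baseline, move it into place (ideally onto read-only media),
# then compare the live system against it:
#   aide --init  --config /etc/aide.conf
#   mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
#   aide --check --config /etc/aide.conf
```

On Debian-based systems, packaged wrapper scripts (such as `aideinit` and the daily cron job) automate the same cycle.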
This is mainly useful for security purposes, given that any malicious change which could have happened inside the system would be reported by AIDE.
See also
Host-based intrusion detection system comparison
References
External links
AIDE Github project
AIDE online manpage
AIDE reference in Ubuntu wiki
OpenSUSE Security Guide chapter on AIDE
Computer security software
Unix security software
Intrusion detection systems | Advanced Intrusion Detection Environment | Engineering | 330 |
31,889,797 | https://en.wikipedia.org/wiki/Cycles%20of%20Time | Cycles of Time: An Extraordinary New View of the Universe is a science book by mathematical physicist Roger Penrose published by The Bodley Head in 2010. The book outlines Penrose's Conformal Cyclic Cosmology (CCC) model, which is an extension of general relativity but opposed to the widely supported multidimensional string theories and cosmological inflation following the Big Bang.
Synopsis
Penrose examines implications of the Second Law of Thermodynamics and its inevitable march toward a maximum entropy state of the universe. He illustrates entropy in terms of an information-state phase space (with one dimension for every degree of freedom), in which systems drift over time from smaller into ever larger grains of the phase space due to random motion. He disagrees with Stephen Hawking's back-track over whether information is destroyed when matter enters black holes. Such information loss would non-trivially lower total entropy in the universe as the black holes wither away due to Hawking radiation, resulting in a loss of phase-space degrees of freedom.
Penrose goes on further to state that over enormous scales of time (beyond 10^100 years), distance ceases to be meaningful as all mass breaks down into extremely red-shifted photon energy, whereupon time has no influence, and the universe continues to expand without event. This period from Big Bang to infinite expansion Penrose defines as an aeon. The smooth "hairless" infinite oblivion of the previous aeon becomes the low-entropy Big Bang state of the next aeon cycle. Conformal geometry preserves the angles but not the distances of the previous aeon, allowing the new aeon universe to appear quite small at its inception as its phase space starts anew.
Penrose cites concentric rings found in the WMAP cosmic microwave background survey as preliminary evidence for his model, as he predicted black hole collisions from the previous aeon would leave such structures due to ripples of gravitational waves.
Reception
Most nonexpert critics (nonscientists) have found the book a challenge to fully comprehend; a few, such as Kirkus Reviews and Doug Johnstone for The Scotsman, appreciate the innovative, against-the-grain ideas Penrose puts forth. Manjit Kumar, reviewing for The Guardian, admires the Russian-doll geometry play of the CCC concept, framing it as an idea of which M. C. Escher "would have approved". Graham Storrs for the New York Journal of Books concedes that this is not a book that an unambitious lay person should plunge into. The American fiction writer Anthony Doerr in The Boston Globe writes "Penrose has never shied away from including mathematics in his texts, and kudos to his publisher for honoring that wish. That said, the second half of Cycles of Time offers some seriously hard sledding"; "If you'll forgive a skiing metaphor, Cycles of Time is a black diamond of a book."
References
2010 non-fiction books
Cosmology books
Information theory
Popular physics books
The Bodley Head books
Works by Roger Penrose | Cycles of Time | Mathematics,Technology,Engineering | 618 |
44,656,810 | https://en.wikipedia.org/wiki/Bicyclobutane | Bicyclo[1.1.0]butane is an organic compound with the formula C4H6. It is a bicyclic molecule consisting of two cis-fused cyclopropane rings, and is a colorless and easily condensed gas. Bicyclobutane is noted for being one of the most strained compounds that is isolatable on a large scale — its strain energy is estimated at 63.9 kcal mol−1. It is a nonplanar molecule, with a dihedral angle between the two cyclopropane rings of 123°.
The first reported bicyclobutane was the ethyl carboxylate derivative, C4H5CO2Et, which was prepared by dehydrohalogenation of the corresponding bromocyclobutanecarboxylate ester with sodium hydride. The parent hydrocarbon was prepared from 1-bromo-3-chlorocyclobutane by intramolecular Wurtz coupling using molten sodium. The intermediate 1-bromo-3-chlorocyclobutane can be prepared via a modified Hunsdiecker reaction from 3-chlorocyclobutanecarboxylic acid using mercuric oxide and bromine.
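The original source illustrated the Hunsdiecker step with a reaction scheme; schematically, the carboxyl group is replaced by bromine with loss of carbon dioxide (a simplified overall equation, omitting the mercury-containing by-products):

```latex
\text{3-chlorocyclobutanecarboxylic acid}
\;\xrightarrow{\ \mathrm{HgO},\ \mathrm{Br}_2\ }\;
\text{1-bromo-3-chlorocyclobutane} \;+\; \mathrm{CO}_2
```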
A synthetic approach to bicyclobutane derivatives involves ring closure of a suitably substituted 2-bromo-1-(chloromethyl)cyclopropane with magnesium in THF. Substituted bicyclo[1.1.0]butanes can also be prepared from the reaction of iodo-bicyclo[1.1.1]pentanes with amines, thiols, and sulfinate salts. Bicyclo[1.1.0]butanes are explored in medicinal chemistry as covalent reactive groups.
Stereochemical evidence indicates that bicyclobutane undergoes thermolysis to form 1,3-butadiene, with an activation energy of 41 kcal/mol, via a concerted pericyclic mechanism (cycloelimination, [σ2s+σ2a]).
Biological synthesis
Linolenic acid can be converted into its bicyclobutane derivative using a fusion protein produced by a strain of the cyanobacterium Anabaena sphaerica (strain PCC 7120). Another group reported a directed evolution approach, whereby an engineered heme protein was expressed in E. coli and optimized for the rate and yield of a substituted bicyclobutane derivative.
See also
Propalene (Bicyclobutadiene)
Bicyclopentane
1.1.1-Propellane
References
Cyclopropanes
Bicycloalkanes
Gases | Bicyclobutane | Physics,Chemistry | 608 |
20,443,048 | https://en.wikipedia.org/wiki/BSRIA | BSRIA (it takes its name from the initial letters of the Building Services Research and Information Association) is a UK-based testing, instrumentation, research and consultancy organisation, providing specialist services in construction and building services engineering. It is a not-for-profit, member-based association, with over 650 member companies; related services are delivered by a trading company, BSRIA Limited. Any profits made are invested in its research programme, producing best practice guidance.
BSRIA is a full member of the Construction Industry Council.
Structure
BSRIA had a turnover of £11.8 million in 2010/11. It employs over 180 people at its UK head office in Bracknell as well as regionally based engineers in the UK and offices in France, Spain, Germany, China, Japan, Brazil and North America.
BSRIA's mission is "to enable the building services and construction industries and their clients to enhance the value of the built environment, by improving the quality of their products and services, the efficiency of their provision and the effectiveness of their operation."
History of BSRIA
BSRIA was formed in 1955 as the Heating and Ventilating Research Council, later to become the Heating and Ventilating Research Association. As the industry became increasingly linked with other services so its research association and professional body saw the need to widen their remit. In 1975 the 'building services' scope was adopted, marked by the formation of the Building Services Research and Information Association, commonly shortened to BSRIA, and, in 1976, the formation of the Chartered Institution of Building Services, renamed the Chartered Institution of Building Services Engineers (CIBSE) in 1985.
As the Association's activities developed to meet the needs of an integrated construction industry and to provide more than just research and information, the full name became less relevant. When new government rules required it to split research and other activities into two companies, BSRIA started formal use of the abbreviation.
Trading activities, including research, are now managed through a trading company, BSRIA Limited, which is a wholly owned subsidiary of the Building Services Research and Information Association, which is a company limited by guarantee. Thus, members – largely companies active in designing and delivering building services – join the Building Services Research and Information Association, and services are provided by BSRIA Limited.
Timeline history
1955 (29 Dec) – The inaugural General Meeting of the Heating and Ventilating Research Council held at the Institution of Mechanical Engineers. The Chairman (C.S.K. Benham) summed up the proceedings with "Thank you, Gentlemen. Now we exist".
1956 – With almost 200 members, 3 research staff are appointed to work in rented premises at British Coal Utilisation Research Association, Leatherhead
1959 – The Heating and Ventilating Research Association (HVRA) was incorporated to take over the assets, liabilities and undertaking of the Heating and Ventilating Research Council, which was dissolved
1964 – Laboratories double in size.
1967 – Test Division established.
1973 – Establishment of Member Services and Technical Division to reflect 'customer / contractor ' ethos. The 'Application Guide' is launched to reflect members interest in the application of research.
1975 – 'Building Services Research and Information Association' name formally adopted. Instrument Hire service established with 120 hirings. First Statistics Bulletin published.
1986–87 – "Research Clubs" established to match DoE funds for projects. BEMS (Building Energy Management Systems) Centre established as autonomous centre of expertise. Air Infiltration Centre renamed to Air Infiltration and Ventilation Centre to reflect widening role.
1989 – EuroCentre established to help industry take advantage of the single European market.
1993 – Second site established at Crowthorne giving 50% more space. New radiator test room for testing to new European standard.
2000 – The Association establishes a trading company – BSRIA Ltd – to undertake all the trading activities, including research.
2003 – Acquisition of new offices adjacent to existing laboratories to accommodate staff relocated from Crowthorne.
2006 – Offices set up in France, Spain and Germany.
2008 – New subsidiary, BSRIA Construction Consulting (Beijing), established. Acquisition of market research business 'Proplan' to develop market intelligence in controls, fire protection and security.
2011 – New subsidiary, BSRIA Cert established to provide independent certification of products and services.
Founding members
The following companies were the founding members of BSRIA who remain as members now (original company names updated to current):
BRE
Chartered Institution of Building Services Engineers (CIBSE)
Comyn Ching & Co. (Solray) Ltd
Crown House Technologies Ltd
EMCOR Group (UK) Plc
Faber Maunsell (now AECOM's UK subsidiary)
Flakt Woods Ltd
Gratte Brothers Ltd
Haden Young (part of Balfour Beatty)
Harry Taylor
Hoare Lea
Honeywell Control Systems Ltd
Inviron Ltd
Jacobs Babtie
Lennox Europe
C H Lindsey & Son
London South Bank University
MJN Colston Ltd
Pearce Buckle (Design Engineers) Ltd
R W Gregory LLP
Roger Preston & Partners
Rosser & Russell Building Services Ltd
Skanska
TPS
WSP Group plc
BSRIA now has over 600 corporate members.
References
External links
BSRIA
Modern Building Services – related website
Heating and Ventilating – related website
Video clips
Air tightness testing
Bracknell
Building engineering organizations
Construction trade groups based in the United Kingdom
Engineering research institutes
Heating, ventilation, and air conditioning
Organisations based in Berkshire
Organizations established in 1955
Science and technology in Berkshire
1955 establishments in the United Kingdom | BSRIA | Engineering | 1,105 |
1,073,054 | https://en.wikipedia.org/wiki/Punji%20stick | The punji stick or punji stake is a type of booby trapped stake. It is a simple spike, made out of wood or bamboo, which is sharpened, heated, and usually set in a hole. Punji sticks are usually deployed in substantial numbers. The Oxford English Dictionary (third edition, 2007) lists less frequent, earlier spellings for "punji stake (or stick)": panja, panjee, panjie, panji, and punge.
Description
Punji sticks would be placed in areas likely to be passed through by enemy troops. The presence of punji sticks may be camouflaged by natural undergrowth, crops, grass, brush or similar materials. They were often incorporated into various types of traps; for example, a camouflaged pit into which a soldier might fall (it would then be a trou de loup).
Sometimes a pit would be dug with punji sticks in the sides pointing downward at an angle. A soldier stepping into the pit would find it impossible to remove their leg without doing severe damage, and injuries might be incurred by the simple act of falling forward while one's leg is in a narrow, vertical, stake-lined pit. Such pits would require time and care to dig the soldier's leg out, immobilizing the unit longer than if the foot were simply pierced, in which case the victim could be evacuated by stretcher or fireman's carry if necessary.
Other additional measures include coating the sticks in poison from plants, animal venom, or even human feces, causing infection or poisoning in the victim after being pierced by the sticks, even if the injury itself was not life-threatening.
Punji sticks were sometimes deployed in the preparation of an ambush. Soldiers lying in wait for the enemy to pass would place punji sticks in the areas where the surprised enemy might be expected to take cover, so that soldiers diving for cover could impale themselves.
The point of penetration was usually in the foot or lower leg area. Punji sticks were not necessarily meant to kill the person who stepped on them; rather, they were sometimes designed specifically to only wound the enemy and slow or halt their unit while the victim was evacuated to a medical facility.
Vietnam War
In the Vietnam War, this method was used to wound enemy soldiers, forcing them to be transported by helicopter to a medical hospital for treatment.
Punji sticks were also used in Vietnam to complement various defenses, such as barbed wire.
Etymology
The term first appeared in the English language in the 1870s, after the British Indian Army encountered the sticks in their border conflicts against the Kachins of northeast Burma (and it is from a Tibeto-Burman language that this word probably originated).
See also
Area denial weapon
NLF and PAVN strategy, organization and structure
NLF and PAVN logistics and equipment
NLF and PAVN battle tactics
References
Area denial weapons
Guerrilla warfare tactics | Punji stick | Engineering | 586 |
24,106,558 | https://en.wikipedia.org/wiki/Endurance%20running%20hypothesis | The endurance running hypothesis is a series of conjectures which presume humans evolved anatomical and physiological adaptations to run long distances
and, more strongly, that "running is the only known behavior that would account for the different body plans in Homo as opposed to apes or australopithecines".
The hypothesis posits a significant role of endurance running in facilitating early hominins' ability to obtain meat. Proponents of this hypothesis assert that endurance running served as a means for hominins to effectively engage in persistence hunting and carcass poaching, thus enhancing their competitive edge in acquiring prey. Consequently, these evolutionary pressures have led to the prominence of endurance running as a primary factor shaping many biomechanical characteristics of modern humans.
Evolutionary evidence
No primates other than humans are capable of endurance running, and in fact Australopithecus did not have structural adaptations for running. Instead, forensic anthropology suggests that anatomical features directly contributing to endurance running capabilities were heavily selected for within the genus Homo, dating back to 1.9 Ma. Consequently, selection for anatomical features that made endurance running possible radically transformed the hominid body. The general form of human locomotion is markedly distinct from that of all other animals observed in nature. The unique manner of human locomotion has been described in the Journal of Anatomy:
"… no animal walks or runs as we do. We keep the trunk erect; in walking, our knees are almost straight at mid-stance; the forces our feet exert on the ground are very markedly two-peaked when we walk fast; and in walking and usually in running, we strike the ground initially with the heel alone. No animal walks or runs like that."
More recent research has shown that so-called heelstrike, the tendency of runners to channel all of their weight through the heel as the leading foot touches the ground, is not universal. This may be an artefact of more comfortable shoes, those specifically designed for running. Runners who have only ever gone barefoot tend to land on the front of the foot, on the heads of the fourth and fifth metatarsal bones. When asked to run on a force plate that records the pressure experienced by the foot over the course of a stride, barefoot runners display a markedly flatter and less intense force curve, indicating a reduced impact on the bones of their feet. Researchers whose work has elucidated this detail, such as Harvard's Daniel Lieberman, conclude that this was likely the stride of humanity's earliest upright ancestors. Other authors hold that this "barefoot stride" is both less injurious and more efficient than the "shod stride" typical of runners who wear specialized shoes. Their case rests on the traditional grounds that underlie all arguments from human evolution: namely, that millions of years of natural selection have optimized the human body for one mode of locomotion, and that modern attempts to surpass this through technological interventions, such as engineered running shoes, cannot compete with human anatomy as delivered by evolution.
From the perspective of natural selection, scientists acknowledge that specialization in endurance running would not have helped early humans avoid faster predators over short distances. Instead, it could have allowed them to traverse shifting habitat zones more effectively in the African savannas during the Pliocene. Endurance running facilitated the timely scavenging of large animal carcasses and enabled the tracking and chasing of prey over long distances. This tactic of exhausting prey was especially advantageous for capturing large quadrupedal mammals struggling to thermoregulate in hot weather and over extended distances. Conversely, humans possess efficient means to dissipate heat, primarily through sweating. Specifically, evaporative heat dissipation from the scalp and face prevents hyperthermia and heat-induced encephalitis under extreme cardiovascular loads. Furthermore, as humans continued to develop, their posture became more upright and their limbs and torso elongated, effectively increasing the surface area for corporeal heat dissipation.
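The scale of this evaporative cooling can be illustrated with the latent heat of vaporization of water, roughly 2.4 MJ/kg at skin temperature; the 1 kW heat load is an assumed round number for sustained running rather than a sourced figure:

```latex
\dot{m}_{\text{sweat}} = \frac{\dot{Q}}{L_{\text{vap}}}
= \frac{1000\ \text{W}}{2.4 \times 10^{6}\ \text{J/kg}}
\approx 0.42\ \text{g/s} \approx 1.5\ \text{kg/h}
```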
In work exploring the evolution of the human head, paleontologist Daniel Lieberman suggests that certain adaptations to the Homo skull and neck are correlational evidence of traits selective to endurance running optimization. Specifically, he posits that adaptations such as a flattening face and the development of the nuchal ligament promote improved head balance for cranial stabilization during extended periods of running.
Compared to Australopithecus fossil skeletons, selection for walking by itself would not develop some of these proposed "endurance running" derived traits:
evaporative heat dissipation from the scalp and face prevents hyperthermia
flatter face makes the head more balanced
nuchal ligament helps counterbalance the head
shoulders and body can rotate without rotating the head
taller body has more skin surface for evaporative heat dissipation
torso can counter-rotate to balance the rotation of the hindlimbs
shorter forearms make it easier to counterbalance hindlimbs
shorter forearms cost less to keep flexed
backbones are wider, which will absorb more impact
stronger backbone pelvis connection will absorb more impact
compared to modern apes, human buttocks "are huge" and "critical for stabilization."
longer hindlimbs
Achilles tendon springs conserve energy
lighter tendons efficiently replace lower limb muscles
broader hindlimb joints will absorb more impact
foot bones create a stiff arch for efficient push off
broader heel bone will absorb more impact
shorter toes and an aligned big toe provide better push off
Academic discourse
The derived longer hindlimb was already present in Australopithecus, along with evidence for foot bones with a stiff arch. Walking and running in Australopithecus may have been the same as in early Homo. Small changes in joint morphology may indicate neutral evolutionary processes rather than selection.
The methodology by which the proposed derived traits were chosen and evaluated does not seem to have been stated, and there were immediate highly technical arguments "dismissing their validity and terming them either trivial or incorrect."
Most of those proposed traits have not been tested for their effect on walking and running efficiency. The new trunk shape counter-rotations, which help control rotations induced by hip-joint motion, seem active during walking. Elastic energy storage does occur in the plantar soft tissue of the foot during walking. Relative lower-limb length has a slightly larger effect on the economy of walking than running. The heel-down foot posture makes walking economical but does not benefit running.
Model-based analysis showing that scavengers would reach a carcass within 30 minutes of detection suggests that "endurance running" would not have given earlier access to carcasses and so not result in selection for "endurance running". Earlier access to carcasses may have been selected for running short distances of 5 km or less, with adaptations that generally improved running performance.
The discovery of more fossil evidence resulted in additional detailed descriptions of hindlimb bones with measurable data reported in the literature. A study of those reports found that the proposed hindlimb traits were already present in Australopithecus or early Homo. Those hindlimb characteristics most likely evolved to improve walking efficiency, with improved running as a by-product.
Gluteus maximus activity was substantially higher in maximal effort jumping and punching than sprinting, and substantially higher in sprinting than in running at speeds that can be sustained. The activity levels are not consistent with the suggestion that the muscle size is a result of selection for sustained endurance running. Additionally, gluteus maximus activity was much greater in sprinting than in running, similar in climbing and running, and greater in running than walking. Increased muscle activity seems related to the speed and intensity of the movement rather than the gait itself. The data suggests that the large size of the gluteus maximus reflects multiple roles during rapid and powerful movements rather than a specific adaptation to submaximal endurance running.
References
Human evolution
Hypotheses
Running
Biological hypotheses | Endurance running hypothesis | Biology | 1,606 |
34,866,867 | https://en.wikipedia.org/wiki/STRAT-X | STRAT-X, or Strategic-Experimental, was a U.S. government-sponsored study conducted during 1966 and 1967 that comprehensively analyzed the potential future of the U.S. nuclear deterrent force. At the time, the Soviet Union was making significant strides in nuclear weapons delivery, and also constructing anti-ballistic missile defenses to protect strategic facilities. To address a potential technological gap between the two superpowers, U.S. Secretary of Defense Robert McNamara entrusted the classified STRAT-X study to the Institute for Defense Analyses, which compiled a twenty-volume report in nine months. The report looked into more than one hundred different weapons systems, ultimately resulting in the MGM-134 Midgetman and LGM-118 Peacekeeper intercontinental ballistic missiles, the Ohio-class submarines, and the Trident submarine-launched ballistic missiles, among others. Journalists have regarded STRAT-X as a major influence on the course of U.S. nuclear policy.
Background
In the mid-1960s, reports received by U.S. intelligence agencies indicated that the Soviets were planning to deploy large numbers of highly accurate and powerful intercontinental ballistic missiles (ICBMs). Later, the R-36 ICBM entered service. Possessing the greatest throw weight of any ICBM ever fielded, the R-36 was larger than the most modern ICBMs in the U.S. arsenal at the time. Due to its size, it was able to carry high-yield warheads capable of destroying Minuteman hardened silos (see Counterforce). This was considered a significant risk to American ICBMs and, as a result, to the United States' nuclear defense strategy, since it reduced the country's ability to retaliate with nuclear weapons if attacked.
At the same time, the Soviets were designing and constructing increasingly sophisticated anti-ballistic missile defense systems to protect strategically important facilities around Moscow, reducing the threat posed by American ICBMs. These developments compelled the U.S. Secretary of Defense, Robert McNamara, to commission a study to look into ways of improving the survivability of the U.S. nuclear arsenal.
According to Graham Spinardi in his book From Polaris to Trident (1994), STRAT-X was a response by the U.S. Department of Defense's Deputy Director of Defense Research and Engineering, Lloyd Wilson, to the U.S. Air Force; the service was demanding a large ICBM called the WS-120A. Spinardi suggests that STRAT-X was allowed to proceed so it could terminate the study for such a missile. Funding for the WS-120A would not be released by Secretary McNamara, and plans for such a missile were canceled in 1967.
Study
The study was named "STRAT-X" in order not to reveal its intentions, and also to eliminate partiality towards sea-, air- or land-based systems. It was conducted by the Research and Engineering Support Division of the independent and non-profit Institute for Defense Analyses (IDA), which had conducted a study in early 1966 titled "Pen-X", upon which STRAT-X was based. STRAT-X was chaired by President of the IDA, General Maxwell D. Taylor, while the institution's Fred Payne presided over STRAT-X's "working" panel. The panel also included executives from major independent corporations and defense contractors such as Boeing, Booz Allen Hamilton, Thiokol and TRW. The Advisory Committee members were mostly military officers, including U.S. Navy Rear Admirals George H. Miller and Levering Smith.
On 1 November 1966, McNamara signed an order authorizing STRAT-X, officially initiating the study. During STRAT-X, the working panel was "encouraged to examine system concepts unrestrained by considerations of potential management problems or political influences." The Secretary wanted new ideas about "path-breaking" weapons systems that were either offensive or defensive in nature, unhindered by defense bureaucracy, which had the potential to stifle innovation. Sea-, land- and air-based missile systems were investigated, but crewed bombers and orbital systems were not. The group was also asked to consider the cost effectiveness of all systems, as well to predict possible Soviet responses. To meet this requirement, a series of documents were written from the perspective of the Soviet Minister of Defense General Andrei Grechko, complete with anti-capitalistic statements and a prediction of the eventual triumph of socialism. In the end, a twenty-volume report covered no fewer than 125 different ideas for missile systems, nine of which were reviewed in great detail.
Findings and consequences
Of the nine prospective weapons systems, five were land-based. These were: "Rock Silo"—a system where missiles would be stored in hardened silos of granite bedrock in the Western and Northern United States; "Soft Silo"—a similar system but with easily and cheaply constructed silos; "Rock Tunnel"—a system where missiles would be transported around in deep underground networks before emerging at launch points; "Soft Tunnel"—a similar tunnel but built more cheaply and easily; and "Land Mobile"—a truck-based system where road-transporters traveled constantly around a dedicated and winding road system on public land.
Of the remaining four, three were sea-based. These were: "Canal-Based"—a system where missiles would be sailed in canals to confuse Soviet military planners; "Ship-Based"—a system where ships carrying missile canisters would travel around the world, hiding among other traffic; and "Submarine-Based"—a system where ballistic missile submarines would roam the oceans while carrying missile canisters outside their pressure hulls. The single air-based consideration was the "Air Launched ICBM", which required large aircraft carrying standoff ballistic missiles to launch their payloads at the Soviet Union.
Despite the numerous options investigated during the study, none were fully implemented. Although the STRAT-X "Land Mobile" option resulted in the MGM-134 Midgetman and LGM-118 Peacekeeper missiles, the fall of communism throughout the late 1980s and early 1990s resulted in the Midgetman being canceled while still a prototype, while only 50 out of the original 100 Peacekeeper missiles were ever fielded. Nevertheless, the study did inspire a number of developments in nuclear weapons delivery systems. In October 1974, the U.S. Air Force successfully conducted an air launch of a Minuteman missile from a C-5 Galaxy, demonstrating the credibility of the "Air Launched ICBM" option of STRAT-X.
Although the U.S. Navy then had several classes of ballistic missile submarines and submarine-launched ballistic missiles (SLBMs) in service, the study placed a significant emphasis on the survivability of SLBMs. This resulted in the enormous Ohio-class submarine and the Trident SLBMs which the Ohio class carried. The study originally called for dedicated slow-moving missile-carrying submarines (instead of converted attack submarines) to embark missiles outside their hulls and rely primarily on stealth for survivability. However, Admiral Hyman Rickover, director of the Naval Reactors office, wanted a boat capable of a burst of high speed in order to effect a safe "getaway" after launching the boat's payload. As a result, the Ohio class was designed to accommodate enormous nuclear reactors to produce the required speed. Ohio-class submarines carry their missiles inside of their hulls, despite STRAT-X's recommendation. Ohio-class submarines and Trident missiles remain in service.
Legacy
STRAT-X had far-reaching effects on the development and deployment of U.S. nuclear forces. It was the first time that the strategic requirements of the U.S. Armed Forces were addressed in a detailed and analytical manner. In a 2002 report by the RAND Corporation, STRAT-X was described as "one of the most influential analyses ever conducted" for the U.S. Department of Defense. Journalist Peter Grier, in his Air Force magazine article "STRAT-X", described the study as "a wide-ranging look at the future of U.S. weapons that shaped the nuclear triad for decades, and remains a model for such efforts today". In 2006, the Defense Science Board (DSB) noted STRAT-X's introduction of ideas and concepts that resulted in the Ohio-class submarines and small and mobile ICBMs. The DSB also attributed the use of air-launched cruise missiles, particularly those carried by the B-52 Stratofortress, to STRAT-X even though they are not referenced in the study.
Footnotes
Notes
References
Bibliography
Nuclear warfare
Research projects | STRAT-X | Chemistry | 1,775 |
69,183,062 | https://en.wikipedia.org/wiki/Polycoccum%20anatolicum | Polycoccum anatolicum is a species of lichenicolous fungus in the family Polycoccaceae. It was described as a new species by Mehmet Gökhan Halici and Hatice Esra Akgül in 2013. The type specimen was collected growing on the thallus of the dust lichen Lepraria incana, which itself was growing on the trunk of a Prunus species in western Turkey. The specific epithet refers to the type locality in Anatolia.
The fungus causes mild bleaching on infected parts of the surface of the host. It is the only species of Polycoccum known to infect Lepraria. Polycoccum dzieduszyckii is morphologically similar, but can be distinguished from P. anatolicum by its eight-spored asci and its growth on Verrucaria.
References
Trypetheliales
Taxa described in 2013
Fungi of Asia
Lichenicolous fungi
Fungus species | Polycoccum anatolicum | Biology | 203 |
6,948,409 | https://en.wikipedia.org/wiki/Binary%20erasure%20channel | In coding theory and information theory, a binary erasure channel (BEC) is a communications channel model. A transmitter sends a bit (a zero or a one), and the receiver either receives the bit correctly, or with some probability $P_e$ receives a message that the bit was not received ("erased").
Definition
A binary erasure channel with erasure probability $P_e$ is a channel with binary input, ternary output, and probability of erasure $P_e$. That is, let $X$ be the transmitted random variable with alphabet $\{0, 1\}$. Let $Y$ be the received variable with alphabet $\{0, 1, \text{e}\}$, where $\text{e}$ is the erasure symbol. Then, the channel is characterized by the conditional probabilities:

$\Pr[Y = 0 \mid X = 0] = 1 - P_e$
$\Pr[Y = 1 \mid X = 0] = 0$
$\Pr[Y = \text{e} \mid X = 0] = P_e$
$\Pr[Y = 0 \mid X = 1] = 0$
$\Pr[Y = 1 \mid X = 1] = 1 - P_e$
$\Pr[Y = \text{e} \mid X = 1] = P_e$
Capacity
The channel capacity of a BEC is $C = 1 - P_e$, attained with a uniform distribution for $X$ (i.e. half of the inputs should be 0 and half should be 1).
{| class="toccolours collapsible collapsed" width="80%" style="text-align:left"
!Proof
|-
|By symmetry of the input values, the optimal input distribution is . The channel capacity is:
Observe that, for the binary entropy function (which has value 1 for input ),
as is known from (and equal to) y unless , which has probability .
By definition , so
.
|}
If the sender is notified when a bit is erased, they can repeatedly transmit each bit until it is correctly received, attaining the capacity $1 - P_e$. However, by the noisy-channel coding theorem, the capacity of $1 - P_e$ can be obtained even without such feedback.
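This relationship between the feedback scheme and the capacity can be checked numerically. The following is a minimal sketch (not part of the original article; the function names are illustrative) that simulates the repeat-until-received strategy over a simulated BEC and confirms an achievable rate of $1 - P_e$:

```python
import random

# Sketch: simulate the repeat-until-received feedback scheme over a BEC
# and check that the achieved rate matches the capacity 1 - Pe derived above.

def transmit(bit, pe):
    """One use of the BEC: None plays the role of the erasure symbol."""
    return None if random.random() < pe else bit

def uses_until_received(bit, pe):
    """Retransmit until the bit arrives; return the number of channel uses."""
    uses = 1
    while transmit(bit, pe) is None:
        uses += 1
    return uses

pe = 0.3
n_bits = 100_000
total_uses = sum(uses_until_received(0, pe) for _ in range(n_bits))
print(f"empirical rate {n_bits / total_uses:.3f} vs capacity {1 - pe:.3f}")
# Each bit needs a geometric number of uses with mean 1 / (1 - Pe),
# so the long-run rate converges to 1 - Pe.
```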
Related channels
If bits are flipped rather than erased, the channel is a binary symmetric channel (BSC), which has capacity $1 - \operatorname{H_b}(p)$ (for the binary entropy function $\operatorname{H_b}$), which is less than the capacity of the BEC for $0 < p < \tfrac{1}{2}$. If bits are erased but the receiver is not notified (i.e. does not receive the output $\text{e}$) then the channel is a deletion channel, and its capacity is an open problem.
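As a quick numerical illustration of this comparison (a hedged sketch, not from the article), the two capacity formulas can be tabulated side by side:

```python
from math import log2

# Sketch: BEC capacity 1 - p versus BSC capacity 1 - Hb(p),
# where Hb is the binary entropy function used above.

def hb(p):
    """Binary entropy in bits, with the convention Hb(0) = Hb(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.01, 0.1, 0.25, 0.4):
    print(f"p = {p:<4}  BEC: {1 - p:.3f}  BSC: {1 - hb(p):.3f}")
# For 0 < p < 1/2 we have Hb(p) > p, so the BSC capacity is strictly
# smaller: knowing *which* bits are unreliable (erasures) preserves capacity.
```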
History
The BEC was introduced by Peter Elias of MIT in 1955 as a toy example.
See also
Erasure code
Packet erasure channel
Notes
References
Coding theory | Binary erasure channel | Mathematics | 423 |
55,823,225 | https://en.wikipedia.org/wiki/List%20of%20human%20endocrine%20organs%20and%20actions |
Hypothalamic-pituitary axis
Hypothalamus
Pineal body (epiphysis)
Pituitary gland (hypophysis)
The pituitary gland (or hypophysis) is an endocrine gland about the size of a pea in humans. It is a protrusion off the bottom of the hypothalamus at the base of the brain, and rests in a small, bony cavity (sella turcica) covered by a dural fold (diaphragma sellae). The pituitary is functionally connected to the hypothalamus by the median eminence via a small tube called the infundibular stem or pituitary stalk. The anterior pituitary (adenohypophysis) is connected to the hypothalamus via the hypothalamo–hypophyseal portal vessels, which allow for quicker and more efficient communication between the hypothalamus and the pituitary.
Anterior pituitary lobe (adenohypophysis)
Posterior pituitary lobe (neurohypophysis)
Oxytocin and anti-diuretic hormone are not synthesized in the posterior lobe, merely stored there and released.
Thyroid
Digestive system
Stomach
Duodenum (small intestine)
Liver
Pancreas
The pancreas is a heterocrine gland as it functions both as an endocrine and as an exocrine gland.
Kidney
Adrenal glands
Adrenal cortex
Adrenal medulla
Reproductive
Testes
Ovarian follicle and corpus luteum
Placenta (when pregnant)
Uterus (when pregnant)
Calcium regulation
Parathyroid
Skin
Other
Heart
Bone
Skeletal muscle
In 1998, skeletal muscle was identified as an endocrine organ due to its now well-established role in the secretion of myokines. The use of the term myokine to describe cytokines and other peptides produced by muscle as signalling molecules was proposed in 2003.
Adipose tissue
Signalling molecules released by adipose tissue are referred to as adipokines.
References
Endocrine system
Human physiology
endocrine organs | List of human endocrine organs and actions | Biology | 452 |
20,775,637 | https://en.wikipedia.org/wiki/C17H36 | The molecular formula C17H36 (molar mass: 240.27 g/mol, exact mass: 240.2817 u) may refer to:
3,3-Di-tert-butyl-2,2,4,4-tetramethylpentane
Heptadecane
Molecular formulas | C17H36 | Physics,Chemistry | 69 |
45,298,260 | https://en.wikipedia.org/wiki/Reactive%20compatibilization | Reactive compatibilization is the process of modifying a mixed immiscible blend of polymers to arrest phase separation and allow for the formation of a stable, long-term continuous phase. It is done via the addition of a reactive polymer, miscible with one blend component and reactive towards functional groups on the second component, which results in the "in-situ" formation of block or grafted copolymers.
A large number of commercial polymeric products are derived from the blending of two or more polymers to achieve a favorable balance of physical properties. However, since most polymer blends are immiscible, it is rare to find a pair of polymers that are both miscible and have the desired characteristics. An example of such a pair is the miscible resin NORYL™, a mix of poly(phenylene oxide) and polystyrene. Immiscible blends will phase separate and form a dispersed phase, which may improve physical properties. DuPont's rubber toughened Nylon consists of small particles of poly(cis-isoprene) (natural rubber) in a Nylon matrix that toughen the material by arresting crack propagation.
Miscibility of Polymer Blends
The Gibbs free energy of mixing, $\Delta G_{mix}$, must be negative for a blend to be miscible. According to Flory-Huggins theory, a revision of regular solution theory, the entropy change per mole of lattice sites of blending polymer 1 and polymer 2 is

$\Delta S_{mix} = -R \left( \frac{\Phi_1}{x_1} \ln \Phi_1 + \frac{\Phi_2}{x_2} \ln \Phi_2 \right)$,

where $\Delta S_{mix}$ is the change in entropy of mixing, R is the gas constant, Φ is the volume fraction of each polymer, and x is the number of segments of each polymer. x1 and x2 increase with higher degrees of polymerization and thus molecular weight. Since most useful polymers are high in molecular weight, the change in entropy experienced from the mixing of two large polymer chains is very low, and typically does not bring the Gibbs free energy low enough to constitute miscibility.
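To make the molecular-weight argument concrete, the entropy expression can be evaluated for increasing chain lengths. The sketch below uses assumed illustrative values for the segment counts x1 and x2, not data from this article:

```python
import math

# Sketch: evaluate the Flory-Huggins entropy term above for a 50/50 blend.

R = 8.314  # gas constant, J/(mol K)

def delta_s_mix(phi1, x1, x2):
    """Entropy of mixing per mole of lattice sites, from the formula above."""
    phi2 = 1.0 - phi1
    return -R * (phi1 / x1 * math.log(phi1) + phi2 / x2 * math.log(phi2))

for x in (1, 100, 10_000):  # small molecules, short chains, long chains
    print(f"x1 = x2 = {x:>6}: dS_mix = {delta_s_mix(0.5, x, x):.2e} J/(mol K)")
# The entropy gain falls off as 1/x, so for high-molecular-weight polymers
# even a slightly unfavorable enthalpy of mixing leaves dG_mix positive.
```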
Compatibilization
Most processed polymer mixes consist of a dispersed phase in a more continuous matrix of the other component. The formation, size, and concentration of this disperse phase are typically optimized for specific mechanical properties. If the morphology is not stabilized, the dispersed phase may coalesce under heat or stress from the environment or further processing. This coalescence may result in diminished properties (brittleness and discoloration) due to the induced phase separation. These morphologies can be stabilized by sufficient interfacial adhesion or lowered interfacial tension between the two phases.
A common technique involves functionalizing one monomer. For example, Nylon-rubber blends are polymerized with functionalized rubber to produce graft or block copolymers. The added structures make coalescence unfavorable and/or increase the steric hindrance in the interfacial area where phase separation would occur.
References
Polymers
Polymer chemistry | Reactive compatibilization | Chemistry,Materials_science,Engineering | 590 |
71,440,611 | https://en.wikipedia.org/wiki/Leucocoprinus%20zeylanicus | Leucocoprinus zeylanicus is a species of mushroom producing fungus in the family Agaricaceae.
Taxonomy
It was first described in 1847 by the British mycologist Miles Joseph Berkeley who classified it as Agaricus zeylanicus.
In 1891 it was classified as Mastocephalus zeylanicus by the German botanist Otto Kuntze; however, Kuntze's Mastocephalus genus, along with most of his Revisio generum plantarum, was not widely accepted by the scientific community of the age, so the species remained in Agaricus.
In 1940 it was reclassified as Leucocoprinus zeylanicus by the Dutch mycologist Karel Bernard Boedijn.
Description
Leucocoprinus zeylanicus is a small dapperling mushroom. Cap: around 8 cm wide; campanulate (bell shaped) with an umbo in the centre and striations at the edges. Gills: free. Stem: smooth with a narrow stem ring. Spores: 7.5–9 × 4.5–6.5 μm.
Habitat and distribution
L. zeylanicus is scarcely recorded and little known; however, it is reported to be a very common species in the Western Ghats ranges of India. In 2003 a mushroom survey conducted at the Tropical Botanic Garden and Research Institute in Kerala state, India, observed this species growing on the campus. It was found scattered or in groups on the forest floor and in flower beds in the garden which had been well fertilised with manure, as well as on cow dung itself and occasionally on the bark of living trees.
Berkeley described the mushroom from a garden in Peradeniya, Sri Lanka (then known as Ceylon) in 1844. Many of his observations were conducted in this area so it is possible that they were in or around the vicinity of the Royal Botanical Gardens, Peradeniya, which were founded in 1843.
References
Leucocoprinus
Fungi described in 1847
Taxa named by Miles Joseph Berkeley
Fungus species | Leucocoprinus zeylanicus | Biology | 421 |
5,062,816 | https://en.wikipedia.org/wiki/Copper%28II%29%20bromide | Copper(II) bromide (CuBr2) is a chemical compound that forms an unstable tetrahydrate CuBr2·4H2O. It is used in photographic processing as an intensifier and as a brominating agent in organic synthesis.
It is also used in the copper vapor laser, a class of laser where the medium is copper bromide vapour formed in-situ from hydrogen bromide reacting with the copper discharge tube. Producing yellow or green light, it is used in dermatological applications.
Synthesis
Copper(II) bromide can be obtained by combining copper oxide and hydrobromic acid:
CuO + 2HBr → CuBr2 + H2O.
The tetrahydrate can be produced by recrystallization of solutions of copper(II) bromide at 0 °C. If heated above 18 °C, it releases water to produce the anhydrous form.
Purification
Copper(II) bromide is purified by crystallization twice from water, filtration to remove any CuBr and concentration under vacuum. This product is dehydrated using phosphorus pentoxide.
Molecular and crystal structure
In the solid state CuBr2 has a polymeric structure, with CuBr4 planar units connected on opposite sides to form chains. The crystal structure is monoclinic, space group C2/m, with lattice constants a = 714 pm, b = 346 pm, c = 718 pm, and β = 121°15′. CuBr2 monomeric units are present in the gas phase at high temperature.
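For a monoclinic cell, the unit-cell volume follows from V = a·b·c·sin β. As a quick arithmetic check (a sketch using only the lattice constants quoted above):

```python
from math import sin, radians

# Sketch: monoclinic unit-cell volume from the lattice constants above.

a, b, c = 714e-12, 346e-12, 718e-12  # lattice constants, metres
beta = radians(121 + 15 / 60)        # 121 degrees 15 minutes

volume = a * b * c * sin(beta)
print(f"unit cell volume ≈ {volume * 1e30:.0f} cubic ångströms")  # ~152 Å³
```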
The tetrahydrate, structurally formulated as [CuBr2(H2O)2]·2H2O, has a monoclinic crystal structure and consists of distorted square planar trans-[CuBr2(H2O)2] centres as well as two additional molecules of water.
Reactions
Copper(II) bromide in chloroform-ethyl acetate reacts with ketones resulting in the formation of alpha-bromo ketones. The resulting product can be directly used for the preparation of derivatives. This heterogeneous method is reported to be the most selective and direct method of formation of α-bromo ketones.
Dibromination of n-pentenyl glycosides (NPGs) using the CuBr2/LiBr reagent combination has been performed so that an NPG can serve as a glycosyl acceptor during halonium-promoted couplings.
This reaction gives a high yield of the dibromides from alkenyl sugars that are resistant to direct reaction with molecular bromine.
Usage
Copper(II) bromide lasers produce pulsed yellow and green light and have been studied as a possible treatment for cutaneous lesions. Experiments have also shown copper bromide treatment to be beneficial for skin rejuvenation.
It has been widely used in photography, where its solution served as the bleaching step for intensifying collodion and gelatin negatives.
Copper(II) bromide has also been proposed as a possible material in humidity indicator cards.
Safety
Copper(II) bromide is harmful if swallowed. It affects the central nervous system, brain, eyes, liver, and kidneys. It causes irritation to skin, eyes, and respiratory tract.
Natural occurrence
Pure copper(II) bromide is as yet (2020) unknown among minerals. However, barlowite, Cu4BrF(OH)6, contains both copper and bromide.
See also
Copper(I) bromide
References
Bromides
Metal halides
Copper(II) compounds | Copper(II) bromide | Chemistry | 743 |
51,082,810 | https://en.wikipedia.org/wiki/Ignite%20the%20Genius%20Within | Ignite The Genius Within is a multi-media creativity book co-authored by author and journalist Christopher Lee Nutter and EMDR therapist Dr. Christine Ranck. It was published by Dutton Penguin in March 2009. The book pairs visuals with audio stimulus created by Dr. David Grand, derived from EMDR trauma therapy, with the aim of enhancing creativity. It was endorsed by performance artist Laurie Anderson and actress and playwright Sarah Jones.
External links
Publisher’s website
Publishers Weekly article
Author’s website
2009 non-fiction books
Dutton Penguin books
Creativity | Ignite the Genius Within | Biology | 115 |
15,215,548 | https://en.wikipedia.org/wiki/KIFC1 | Kinesin-like protein KIFC1 is a protein that in humans is encoded by the KIFC1 gene.
Function
The protein KifC1 is a member of the kinesin-14 family. KifC1 consists of a C-terminal motor domain, a superhelical stalk and an N-terminal tail domain. The tail and motor domains contain microtubule-binding sites. This kinesin moves towards the minus-end of the microtubule and has the ability to slide or crosslink microtubules. KifC1 functions during mitotic spindle formation.
References
Further reading
External links | KIFC1 | Chemistry | 120 |
14,902,503 | https://en.wikipedia.org/wiki/Lead%28II%29%20bromide | Lead(II) bromide is the inorganic compound with the formula PbBr2. It is a white powder. It is produced in the burning of typical leaded gasolines.
Preparation and properties
It is typically prepared from treating solutions of lead salts (e.g., lead(II) nitrate) with bromide salts. This process exploits its low solubility in water: only 0.455 g dissolves in 100 g of water at 0 °C. It is about ten times more soluble in boiling water.
PbBr2 has the same crystal structure as lead chloride (cotunnite) – they are isomorphous. In this structure, Pb2+ is surrounded by nine Br− ions in a distorted tricapped trigonal prismatic geometry. Seven of the Pb-Br distances are shorter, in the range 2.9-3.3 Å, while two of them are longer at 3.9 Å. The coordination is therefore sometimes described as (7+2).
Lead bromide was prevalent in the environment as the result of the use of leaded gasoline. Tetraethyl lead was once widely used to improve the combustion properties of gasoline. To prevent the resulting lead oxides from fouling the engine, gasoline was treated with 1,2-dibromoethane, which converted lead oxides into the more volatile lead bromide, which was then exhausted from the engine into the environment.
Safety
Like other compounds containing lead, lead(II) bromide is categorized as probably carcinogenic to humans (Category 2A), by the International Agency for Research on Cancer (IARC). Its release into the environment as a product of leaded gasoline was highly controversial.
References
Lead(II) compounds
Bromides
Metal halides | Lead(II) bromide | Chemistry | 366 |
1,225,645 | https://en.wikipedia.org/wiki/Dry%20sump | A dry-sump system is a method to manage the lubricating motor oil in four-stroke and large two-stroke piston driven internal combustion engines. The dry-sump system uses two or more oil pumps and a separate oil reservoir, as opposed to a conventional wet-sump system, which uses only the main sump (U.S.: oil pan) below the engine and a single pump. A dry-sump engine requires a pressure relief valve to regulate negative pressure inside the engine, so internal seals are not inverted.
Dry-sumps are common on larger diesel engines such as those used in ships, as well as gasoline engines used in racing cars, aerobatic aircraft, high-performance personal watercraft and motorcycles. Dry sump lubrication may be chosen for these applications due to increased reliability, oil capacity, reduction of oil starvation under high g-loads and/or other technical or performance reasons. Dry sump systems may not be suitable for all applications due to increased cost, complexity, and/or bulk, among other factors.
Design
Engines are both lubricated and cooled by oil that circulates throughout the engine, feeding various bearings and other moving parts and then draining, via gravity, into the sump at the base of the engine. In the wet-sump system of nearly all production automobile engines, the oil that's not actively circulating is stored in the sump, which is large enough for this purpose. A pump collects oil from the sump and directly circulates it back through the engine. In a dry-sump system, the oil still falls to the base of the engine, but into a much shallower sump, where one or more scavenge pumps draw it away and transfer it to a (usually external) reservoir, where it is both cooled and de-aerated before being recirculated through the engine by a pressure pump. The sump in a dry-sump system is not actually dry; it is still wet from oil draining from the engine. The reservoir is usually tall and narrow and specially designed with internal baffles, and an oil outlet (supply) at the very bottom for uninhibited oil supply even during sloshing.
The dry pump operation consists of a pressure stage and a scavenging stage. Although the term "stages" is commonly used to describe the work of the multiple pumps, they typically run in parallel rather than in series as might be implied by the term. The pressure stage draws oil from the bottom of the reservoir and passes it through the filter and into the engine itself. An adjustable pressure regulator ensures that the oil pressure is kept stable at different engine speeds. The dry-sump system requires at least two pumps - one pressure and one scavenge - and sometimes as many as four or five scavenge pumps are used to minimize the amount of oil in the engine. The pressure pump and scavenge pumps are frequently mounted on a common crankshaft, so that a single pulley at the front of the system can run as many pumps as the engine design requires. It is common practice to have one scavenge pump per crankcase section; however, in the case of inverted engines (typically aircraft engines) it is necessary to employ separate scavenge pumps for each cylinder bank. Therefore, an inverted V engine would have a minimum of two scavenge pumps and a pressure pump in the pump stack.
Dry sump systems may optionally be designed to keep the engine's crankcase at lower than atmospheric pressure (vacuum), by sealing the crankcase and allowing the scavenge pumps to draw out both oil and gases. An equilibrium pressure will be reached when the rate of gases entering the crankcase (blow-by gases past the piston rings, but also air leaks and oil vapor) equals the rate of gas removal from the scavenge pump capacity beyond what's required to remove just the oil. Alternatively, the crankcase may be kept near atmospheric pressure by venting it to the oil reservoir, which in turn is vented into the engine's air intake, or to outside air.
Advantages
A dry-sump system offers many advantages over a wet-sump. The primary advantages include:
Prevention of the engine experiencing oil starvation during high g-loads when oil sloshes, which improves engine reliability. Most engines can be damaged by even brief periods of oil starvation. This is the reason why dry-sumps were invented, and is particularly valuable in racing cars, high performance sports cars, and aerobatic aircraft that regularly experience high accelerations. Oil slosh occurs in dry-sump systems too, but it is much easier to design a remote reservoir to tolerate high amounts of slosh, by being tall and narrow, and having large baffles.
Increased oil capacity by using a large external reservoir, which would be impractical in a wet-sump system.
Improvements to vehicle handling and stability. The vehicle's center of gravity can be lowered by mounting the (typically very heavy) engine lower in the chassis due to a shallow sump profile. A vehicle's overall weight distribution can be modified by locating the external oil reservoir away from the engine.
Improved oil temperature control. This is due to increased oil volume providing resistance to heat saturation, the positioning of the oil reservoir away from the hot engine, and the ability to include cooling capabilities between the scavenger pumps and oil reservoir and also within the reservoir itself.
Improved oil quality. When oil sloshes against the crankshaft and other high-speed spinning parts, it causes a "hurricane that whips the oil in a wet-sump engine into an aerated froth like a milkshake in a blender". Aerated oil protects engine components far less effectively. A dry-sump system minimizes oil aeration, and also de-aerates oil far more effectively by pumping it first into a remote reservoir.
Increased engine power. In a wet-sump engine, oil slosh against spinning parts causes substantial viscous drag which creates parasitic power loss. A dry-sump system removes oil from the crankcase, along with the possibility of such viscous drag. More complex dry-sump systems may scavenge oil from other areas where oil may pool, such as in the valvetrain. Power can be further increased if the dry-sump system is designed to create a vacuum inside the crankcase, which reduces air drag (or 'windage') on moving parts as well.
Improved pump efficiency to maintain oil supply to the engine. Since scavenge pumps are typically mounted at the lowest point on the engine, the oil flows into the pump intake by gravity rather than having to be lifted up into the intake of the pump as in a wet-sump. Furthermore, scavenge pumps can be of a design that is more tolerant of entrapped gasses than the typical pressure pump, which can lose suction if too much air mixes into the oil. Since the pressure pump is typically lower than the external oil tank, it always has a positive pressure on its suction regardless of cornering forces.
Having the pumps external to the engine makes them easier to maintain or replace.
Disadvantages
Dry-sump engines have several disadvantages compared to wet-sump engines, including;
Dry-sump systems add cost, complexity, and weight.
The extra pumps and lines in dry-sump engines require additional oil and maintenance.
The large external reservoir and pumps can be tricky to position around the engine and within the engine bay due to their size.
As wrist pins and pistons rely on the oil being splashed around in the crankcase for lubrication and cooling respectively, these parts might have inadequate oiling if too much oil is pulled away by the pump. Installing piston oilers can circumvent this issue, but adds cost and complexity to the engine.
Inadequate upper valvetrain lubrication can also become an issue if too much oil vapor is being pulled out from the area, especially with multi-staged pumps.
Common engine applications
Dry-sumps are common on larger diesel engines such as those used for ship propulsion, largely due to increased reliability and serviceability. They are also commonly used in racing cars and aerobatic aircraft, due to problems with g-forces, reliable oil supply, power output and vehicle handling. The Chevrolet Corvette Z06 has a dry sump engine which requires an initial oil change after 500 miles.
Motorcycle engines
Dry-sump lubrication is particularly applicable to motorcycles, which tend to be operated more vigorously than other road vehicles. Although motorcycles such as the Honda CB750 (1969) feature a dry-sump engine, modern motorcycles tend to use a wet-sump design. This is understandable with across-the-frame inline four-cylinder engines, since these wide engines must be mounted fairly high in the frame (for ground clearance), so the space below may as well be used for a wet sump. However, narrower engines can be mounted lower and ideally should use dry-sump lubrication.
Several motorcycle models that use dry-sumps include;
The classic British parallel twin motorcycles, such as BSA, Triumph and Norton, all used dry-sump lubrication. Traditionally, the oil tank was a remote item, but some late-model BSAs, and the Meriden Triumphs, used "oil-in-the-frame" designs.
The Triumph Rocket 3, an inline three-cylinder, water-cooled, DOHC engine.
The Yamaha TRX850 270-degree parallel twin motorcycle has a dry-sump engine. Its oil reservoir is not remote, but integral to the engine, sitting atop the gearbox. This design eliminates external oil lines, allowing simpler engine removal and providing faster oil warm up.
The Yamaha XT660Z (and R/X models) use a dry-sump design where the bike's frame tubing is used as the oil reservoir and cooling system
The Yamaha SR400/500 uses a dry-sump design where the bike's frame tubing doubles as the oil reservoir and cooling system.
Harley-Davidson has used dry-sump type lubricating oil systems in their engines since the 1930s.
The Rotax engined Aprilia RSV Mille, and the Aprilia RST1000 Futura both incorporate a dry-sump, along with sister bikes, the SL1000 Falco and ETV1000 Caponord.
All BMW K-series motorcycles with inline-4 engines.
The Honda NX650, XR500R, XR600R, XR650R and XR650L four-stroke dirt bikes utilize a dry-sump with the oil in the frame tubing.
The Suzuki DR-Z400 has a 2L dry-sump with oil in the frame tubing.
Royal Enfield motorcycles built in Chennai prior to 2007. Royal Enfield dry-sump designs were completely phased out by 2012.
See also
Wet sump
References
External links
Engine lubrication systems
Engine technology | Dry sump | Technology | 2,270 |
43,169,387 | https://en.wikipedia.org/wiki/RAC%20drawing | In graph drawing, a RAC drawing of a graph is a drawing in which the vertices are represented as points, the edges are represented as straight line segments or polylines, at most two edges cross at any point, and when two edges cross they do so at right angles to each other. In the name of this drawing style, "RAC" stands for "right angle crossing".
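The defining geometric condition is easy to test computationally. The following is a minimal sketch (illustrative helper names, not from the literature on RAC drawings) that checks whether two straight-line edges cross and, if so, whether they cross at a right angle:

```python
# Sketch: detect whether two straight-line edges properly cross,
# and whether the crossing is at a right angle.

def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_cross(p1, p2, q1, q2):
    """True if the open segments p1p2 and q1q2 properly intersect."""
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def right_angle(p1, p2, q1, q2, eps=1e-9):
    """True if the two segments' direction vectors are perpendicular."""
    dot = (p2[0] - p1[0]) * (q2[0] - q1[0]) + (p2[1] - p1[1]) * (q2[1] - q1[1])
    return abs(dot) < eps

# The two diagonals of a unit square cross, and do so at 90 degrees,
# so this pair of edges would be admissible in a RAC drawing.
print(segments_cross((0, 0), (1, 1), (0, 1), (1, 0)))  # True
print(right_angle((0, 0), (1, 1), (0, 1), (1, 0)))     # True
```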
The right-angle crossing style and the name "RAC drawing" for this style were motivated by previous user studies showing that crossings with large angles are much less harmful to the readability of drawings than shallow crossings. Even for planar graphs, allowing some right-angle crossings in a drawing of the graph can significantly improve measures of the drawing quality such as its area or angular resolution.
Examples
The complete graph K5 has a RAC drawing with straight edges, but K6 does not. Every 6-vertex RAC drawing has at most 14 edges, but K6 has 15 edges, too many to have a RAC drawing.
A complete bipartite graph Ka,b has a RAC drawing with straight edges if and only if either min(a,b) ≤ 2 or a + b ≤ 7. If min(a,b) ≤ 2, then the graph is a planar graph, and (by Fáry's theorem) every planar graph has a straight-line drawing with no crossings. Such a drawing is automatically a RAC drawing. The only two cases remaining are the graphs K3,3 and K3,4; K3,3 can be formed from a RAC drawing of K3,4 by deleting one vertex. Neither of the next two larger graphs, K4,4 and K3,5, has a RAC drawing.
Edges and bends
If an n-vertex graph (n ≥ 4) has a RAC drawing with straight edges, it can have at most 4n − 10 edges. This is tight: there exist RAC-drawable graphs with exactly 4n − 10 edges. For drawings with polyline edges, the bound on the number of edges in the graph depends on the number of bends that are allowed per edge. The graphs that have RAC drawings with one or two bends per edge have O(n) edges; more specifically, with one bend there are at most 5.5n edges and with two bends there are at most 74.2n edges. Every graph has a RAC drawing with three bends per edge.
Relation to 1-planarity
A graph is 1-planar if it has a drawing with at most one crossing per edge. Intuitively, this restriction makes it easier to cause this crossing to be at right angles, and the 4n − 10 bound on the number of edges of straight-line RAC drawings is close to the bounds of 4n − 8 on the number of edges in a 1-planar graph, and of 4n − 9 on the number of edges in a straight-line 1-planar graph. Every RAC drawing with 4n − 10 edges is 1-planar. Additionally, every outer-1-planar graph (that is, a graph drawn with one crossing per edge with all vertices on the outer face of the drawing) has a RAC drawing. However, there exist 1-planar graphs with 4n − 10 edges that do not have RAC drawings.
Computational complexity
It is NP-hard to determine whether a given graph has a RAC drawing with straight edges, even if the input graph is 1-planar and the output RAC drawing must be 1-planar as well. More specifically, RAC drawing is complete for the existential theory of the reals. The RAC drawing problem remains NP-hard for upward drawing of directed acyclic graphs. However, in the special case of outer-1-planar graphs, a RAC drawing can be constructed in linear time.
References
Graph drawing
NP-complete problems | RAC drawing | Mathematics | 823 |
27,141,301 | https://en.wikipedia.org/wiki/Tazolol | Tazolol is a beta blocker with some utility in the treatment of heart disease.
See also
Timolol
Niridazole
Fenclozic acid
References
Antiarrhythmic agents
Beta blockers
2-Thiazolyl compounds
Ethers
Secondary alcohols
Secondary amines
Abandoned drugs
Isopropylamino compounds | Tazolol | Chemistry | 70 |
525,450 | https://en.wikipedia.org/wiki/List%20of%20spacecraft%20manufacturers |
History
During the early years of spaceflight only nation states had the resources to develop and fly spacecraft. Both the U.S. space program and Soviet space program were operated using mainly military (e.g., air force) pilots as astronauts. During this period, no commercial space launches were available to private operators, and no private organization was able to offer space launches.
In the 1980s, the European Space Agency created Arianespace, the world's first commercial space transportation company, and, following the Challenger disaster, the American government deregulated the American space transportation market as well. In the 1990s the Russian government sold its majority stake in RSC Energia to private investors (although it renationalized the Russian space sector in 2013–2014).
These events for the first time allowed private organizations to purchase, develop and offer space launch services; beginning the period of private spaceflight in the late-1980s and early-1990s.
Satellite manufacturers
There are 10 major companies that build large, commercial, geosynchronous satellite platforms:
In addition to those above, the following companies have successfully built and launched (smaller) satellite platforms:
Launch vehicle manufacturers and providers of third party services
Commercial wings of national space agencies:
Aerospace Industrial Development Corporation (AIDC) and Taiwan Aerospace Industry Association (TAIA) Taiwan
Antrix Corporation and NSIL India
China Aerospace Science and Technology Corporation China
Lander, rover and probe manufacturers
Spacecraft component manufacturers
Propulsion manufacturers
See also
List of private spaceflight companies including only companies with primarily private funding and missions ("NewSpace")
Russian aerospace industry
References
spacecraft manufacturers | List of spacecraft manufacturers | Astronomy | 321 |
45,104,054 | https://en.wikipedia.org/wiki/Gordana%20Matic | Gordana Matic is a Croatian-American mathematician who works as a professor at the University of Georgia. Her research concerns low-dimensional topology and contact geometry.
Matic earned her doctorate from the University of Utah in 1986, under the supervision of Ronald J. Stern, and worked as a C.L.E. Moore instructor at the Massachusetts Institute of Technology before joining the University of Georgia faculty.
Matic was the Spring 2012 speaker in the University of Texas Distinguished Women in Mathematics Lecture Series. In 2014, she was elected as a fellow of the American Mathematical Society "for contributions to low-dimensional and contact topology."
References
External links
Home page
Year of birth missing (living people)
Living people
Croatian mathematicians
20th-century American mathematicians
21st-century American mathematicians
University of Utah alumni
University of Georgia faculty
Fellows of the American Mathematical Society
Yugoslav emigrants to the United States
Topologists
20th-century American women mathematicians
21st-century American women mathematicians | Gordana Matic | Mathematics | 190 |
40,458,636 | https://en.wikipedia.org/wiki/Stericantitruncated%20tesseractic%20honeycomb | In four-dimensional Euclidean geometry, the stericantitruncated tesseractic honeycomb is a uniform space-filling honeycomb. It is composed of runcitruncated 16-cell, cantitruncated tesseract, rhombicuboctahedral prism, truncated cuboctahedral prism, and 4-8 duoprism facets, arranged around an irregular 5-cell vertex figure.
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Truncated 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs)
x4x3x3o4x - gicartit - O101
5-polytopes
Honeycombs (geometry)
Truncated tilings | Stericantitruncated tesseractic honeycomb | Physics,Chemistry,Materials_science | 353 |
7,146,399 | https://en.wikipedia.org/wiki/Rietdijk%E2%80%93Putnam%20argument | In philosophy, the Rietdijk–Putnam argument, named after C. W. Rietdijk and Hilary Putnam, uses 20th-century findings in physics, specifically in special relativity, to support the philosophical position known as four-dimensionalism.
If special relativity is true, then each observer will have their own plane of simultaneity, which contains a unique set of events that constitutes the observer's present moment. Observers moving at different relative velocities have different planes of simultaneity, and hence different sets of events that are present. Each observer considers their set of present events to be a three-dimensional universe, but even the slightest movement of the head or offset in distance between observers can cause the three-dimensional universes to have differing content. If each three-dimensional universe exists, then the existence of multiple three-dimensional universes suggests that the universe is four-dimensional. The argument is named after the discussions by Rietdijk (1966) and Putnam (1967). It is sometimes called the Rietdijk–Putnam–Penrose argument.
Andromeda paradox
Roger Penrose advanced a form of this argument that has been called the Andromeda paradox, in which he points out that two people walking past each other on the street could have very different present moments. If one of the people were walking towards the Andromeda Galaxy, then events in this galaxy might be hours or even days ahead, relative to the Andromeda events that are simultaneous for the person walking in the other direction. If this occurs, it would have dramatic effects on our understanding of time. Penrose highlighted the consequences by discussing a potential invasion of Earth by aliens living in the Andromeda Galaxy.
The "paradox" consists of two observers who are, from their conscious perspective, in the same place and at the same instant having different sets of events in their "present moment". Notice that neither observer can actually "see" what is happening in Andromeda, because light from Andromeda (and the hypothetical alien fleet) will take 2.5 million years to reach Earth. The argument is not about what can be "seen"; it is purely about what events different observers consider to occur in the present moment.
Criticisms
The interpretations of relativity used in the Rietdijk–Putnam argument and the Andromeda paradox are not universally accepted. Howard Stein and Steven F. Savitt note that in relativity the present is a local concept that cannot be extended to global hyperplanes. Furthermore, N. David Mermin has argued along similar lines, stressing that the "present moment" cannot be applied to very distant events with any accuracy.
References
Further reading
Vesselin Petkov (2005) "Is There an Alternative to the Block Universe View?" in Dennis Dieks (ed.), The Ontology of Spacetime, Elsevier, Amsterdam, 2006; "Philosophy and Foundations of Physics" Series, pp. 207–228
Wikibook:The relativity of simultaneity and the Andromeda paradox
"Being and Becoming in Modern Physics", Stanford Encyclopedia of Philosophy
Relativistic paradoxes
Special relativity | Rietdijk–Putnam argument | Physics | 631 |
745,714 | https://en.wikipedia.org/wiki/Graphic%20matroid | In the mathematical theory of matroids, a graphic matroid (also called a cycle matroid or polygon matroid) is a matroid whose independent sets are the forests in a given finite undirected graph. The dual matroids of graphic matroids are called co-graphic matroids or bond matroids. A matroid that is both graphic and co-graphic is sometimes called a planar matroid (but this should not be confused with matroids of rank 3, which generalize planar point configurations); these are exactly the graphic matroids formed from planar graphs.
Definition
A matroid may be defined as a family of finite sets (called the "independent sets" of the matroid) that is closed under subsets and that satisfies the "exchange property": if sets $A$ and $B$ are both independent, and $B$ is larger than $A$, then there is an element $x \in B \setminus A$ such that $A \cup \{x\}$ remains independent. If $G$ is an undirected graph, and $F$ is the family of sets of edges that form forests in $G$, then $F$ is clearly closed under subsets (removing edges from a forest leaves another forest). It also satisfies the exchange property: if $A$ and $B$ are both forests, and $B$ has more edges than $A$, then it has fewer connected components, so by the pigeonhole principle there is a component $C$ of $B$ that contains vertices from two or more components of $A$. Along any path in $C$ from a vertex in one component of $A$ to a vertex of another component, there must be an edge with endpoints in two components, and this edge may be added to $A$ to produce a forest with more edges. Thus, $F$ forms the independent sets of a matroid, called the graphic matroid of $G$, denoted $M(G)$. More generally, a matroid is called graphic whenever it is isomorphic to the graphic matroid of a graph, regardless of whether its elements are themselves edges in a graph.
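The forest characterization of independence translates directly into an efficient test. The following is a minimal sketch (illustrative names, not from the matroid literature) using a union-find structure, which rejects exactly those edge sets containing a cycle:

```python
# Sketch: independence testing in a graphic matroid. An edge set is
# independent exactly when it is a forest, which union-find detects
# by rejecting any edge that closes a cycle.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False  # endpoints already connected: the edge closes a cycle
        self.parent[ru] = rv
        return True

def is_independent(n_vertices, edges):
    """True iff `edges` forms a forest, i.e. is independent in M(G)."""
    uf = UnionFind(n_vertices)
    return all(uf.union(u, v) for u, v in edges)

print(is_independent(4, [(0, 1), (1, 2), (2, 3)]))  # True: a path is a forest
print(is_independent(4, [(0, 1), (1, 2), (2, 0)]))  # False: a triangle has a cycle
```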
The bases of a graphic matroid $M(G)$ are the full spanning forests of $G$, and the circuits of $M(G)$ are the simple cycles of $G$. The rank in $M(G)$ of a set $S$ of edges of a graph is $r(S) = n - c$, where $n$ is the number of vertices in the subgraph formed by the edges in $S$ and $c$ is the number of connected components of the same subgraph. The corank of the graphic matroid is known as the circuit rank or cyclomatic number.
The lattice of flats
The closure $\operatorname{cl}(S)$ of a set $S$ of edges in $M(G)$ is a flat consisting of the edges that are not independent of $S$ (that is, the edges whose endpoints are connected to each other by a path in $S$). This flat may be identified with the partition of the vertices of $G$ into the connected components of the subgraph formed by $S$: Every set of edges having the same closure as $S$ gives rise to the same partition of the vertices, and $\operatorname{cl}(S)$ may be recovered from the partition of the vertices, as it consists of the edges whose endpoints both belong to the same set in the partition. In the lattice of flats of this matroid, there is an order relation $x \le y$ whenever the partition corresponding to flat $x$ is a refinement of the partition corresponding to flat $y$.
In this aspect of graphic matroids, the graphic matroid of a complete graph $K_n$ is particularly important, because it allows every possible partition of the vertex set to be formed as the set of connected components of some subgraph. Thus, the lattice of flats of the graphic matroid of $K_n$ is naturally isomorphic to the lattice of partitions of an $n$-element set. Since the lattices of flats of matroids are exactly the geometric lattices, this implies that the lattice of partitions is also geometric.
Representation
The graphic matroid of a graph $G$ can be defined as the column matroid of any oriented incidence matrix of $G$. Such a matrix has one row for each vertex, and one column for each edge. The column for edge $e$ has $+1$ in the row for one endpoint, $-1$ in the row for the other endpoint, and $0$ elsewhere; the choice of which endpoint to give which sign is arbitrary. The column matroid of this matrix has as its independent sets the linearly independent subsets of columns.
If a set of edges contains a cycle, then the corresponding columns (multiplied by $-1$ if necessary to reorient the edges consistently around the cycle) sum to zero, and is not independent. Conversely, if a set of edges forms a forest, then by repeatedly removing leaves from this forest it can be shown by induction that the corresponding set of columns is independent. Therefore, the column matroid is isomorphic to $M(G)$.
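This representation can be checked numerically: build an oriented incidence matrix and compare the rank of a set of columns with the number of edges. A hedged sketch, with an arbitrary choice of orientations:

```python
import numpy as np

# Sketch of the representation described above: an oriented incidence
# matrix whose column rank distinguishes forests from edge sets with cycles.

def incidence_matrix(n_vertices, edges):
    m = np.zeros((n_vertices, len(edges)))
    for j, (u, v) in enumerate(edges):
        m[u, j], m[v, j] = 1.0, -1.0  # +1 / -1 at the two endpoints
    return m

triangle = [(0, 1), (1, 2), (2, 0)]  # contains a cycle
path = [(0, 1), (1, 2)]              # a forest

# Rank 2 < 3 columns: the triangle's columns are linearly dependent.
print(np.linalg.matrix_rank(incidence_matrix(3, triangle)))  # 2
# Rank 2 = 2 columns: the path's columns are independent.
print(np.linalg.matrix_rank(incidence_matrix(3, path)))      # 2
```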
This method of representing graphic matroids works regardless of the field over which the incidence matrix is defined. Therefore, graphic matroids form a subset of the regular matroids, matroids that have representations over all possible fields.
The lattice of flats of a graphic matroid can also be realized as the lattice of a hyperplane arrangement, in fact as a subset of the braid arrangement, whose hyperplanes are the diagonals $H_{ij} = \{x \mid x_i = x_j\}$. Namely, if the vertices of $G$ are $v_1, v_2, \ldots, v_n$, we include the hyperplane $H_{ij}$ whenever $v_i v_j$ is an edge of $G$.
Matroid connectivity
A matroid is said to be connected if it is not the direct sum of two smaller matroids; that is, it is connected if and only if there do not exist two disjoint subsets of elements such that the rank function of the matroid equals the sum of the ranks in these separate subsets. Graphic matroids are connected if and only if the underlying graph is both connected and 2-vertex-connected.
Minors and duality
A matroid is graphic if and only if its minors do not include any of five forbidden minors: the uniform matroid $U^2_4$, the Fano plane $F_7$ or its dual $F_7^*$, or the duals $M^*(K_5)$ and $M^*(K_{3,3})$ of the graphic matroids of the complete graph $K_5$ and the complete bipartite graph $K_{3,3}$. The first three of these are the forbidden minors for the regular matroids, and the duals of $M(K_5)$ and $M(K_{3,3})$ are regular but not graphic.
If a matroid is graphic, its dual (a "co-graphic matroid") cannot contain the duals of these five forbidden minors. Thus, the dual must also be regular, and cannot contain as minors the two graphic matroids $M(K_5)$ and $M(K_{3,3})$.
Because of this characterization and Wagner's theorem characterizing the planar graphs as the graphs with no $K_5$ or $K_{3,3}$ graph minor, it follows that a graphic matroid $M(G)$ is co-graphic if and only if $G$ is planar; this is Whitney's planarity criterion. If $G$ is planar, the dual of $M(G)$ is the graphic matroid of the dual graph of $G$. While $G$ may have multiple dual graphs, their graphic matroids are all isomorphic.
Algorithms
A minimum weight basis of a graphic matroid is a minimum spanning tree (or minimum spanning forest, if the underlying graph is disconnected). Algorithms for computing minimum spanning trees have been intensively studied; it is known how to solve the problem in linear randomized expected time in a comparison model of computation, or in linear time in a model of computation in which the edge weights are small integers and bitwise operations are allowed on their binary representations. The fastest known time bound that has been proven for a deterministic algorithm is slightly superlinear.
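Concretely, the matroid greedy algorithm specialized to graphic matroids is Kruskal's algorithm: scan the edges in order of weight and keep each edge whose addition preserves independence, i.e. creates no cycle. A minimal self-contained sketch:

```python
# Sketch: Kruskal's algorithm as greedy optimization over a graphic matroid.

def kruskal(n_vertices, weighted_edges):
    """weighted_edges: iterable of (weight, u, v). Returns the MST edges."""
    parent = list(range(n_vertices))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    tree = []
    for w, u, v in sorted(weighted_edges):
        ru, rv = find(u), find(v)
        if ru != rv:            # the edge keeps the chosen set independent
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

edges = [(1, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3)]
print(kruskal(4, edges))  # [(0, 1, 1), (2, 3, 2), (0, 2, 3)]
```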
Several authors have investigated algorithms for testing whether a given matroid is graphic. For instance, an algorithm of Tutte (1960) solves this problem when the input is known to be a binary matroid. Seymour (1981) solves this problem for arbitrary matroids given access to the matroid only through an independence oracle, a subroutine that determines whether or not a given set is independent.
Related classes of matroids
Some classes of matroid have been defined from well-known families of graphs, by phrasing a characterization of these graphs in terms that make sense more generally for matroids. These include the bipartite matroids, in which every circuit is even, and the Eulerian matroids, which can be partitioned into disjoint circuits. A graphic matroid is bipartite if and only if it comes from a bipartite graph and a graphic matroid is Eulerian if and only if it comes from an Eulerian graph. Within the graphic matroids (and more generally within the binary matroids) these two classes are dual: a graphic matroid is bipartite if and only if its dual matroid is Eulerian, and a graphic matroid is Eulerian if and only if its dual matroid is bipartite.
Graphic matroids are one-dimensional rigidity matroids, matroids describing the degrees of freedom of structures of rigid beams that can rotate freely at the vertices where they meet. In one dimension, such a structure has a number of degrees of freedom equal to its number of connected components (the number of vertices minus the matroid rank) and in higher dimensions the number of degrees of freedom of a d-dimensional structure with n vertices is dn minus the matroid rank. In two-dimensional rigidity matroids, the Laman graphs play the role that spanning trees play in graphic matroids, but the structure of rigidity matroids in dimensions greater than two is not well understood.
References
Matroid theory
Planar graphs
Graph connectivity
Spanning tree | Graphic matroid | Mathematics | 1,815 |
198,579 | https://en.wikipedia.org/wiki/Non-Proliferation%20Trust | The Non-Proliferation Trust (NPT) is a U.S. nonprofit organization that, at the beginning of the 21st century, advocated storing 10,000 tons of U.S. nuclear waste in Russia for a fee of $15 billion paid to the Russian government and $250 million paid to a fund for Russian orphans. The group was headed by Admiral Daniel Murphy. This proposal was endorsed by the Russian atomic energy ministry, MinAtom, which estimated that the proposal could eventually generate $150 billion in revenue for Russia.
See also
Halter Marine
Federal Agency on Atomic Energy (Russia)
References
Energy economics
Non-profit organizations based in the United States
Radioactive waste
Environment of Russia | Non-Proliferation Trust | Physics,Chemistry,Technology,Environmental_science | 137 |
37,793,570 | https://en.wikipedia.org/wiki/Logical%20behaviorism | In the philosophy of mind, logical behaviorism (also known as analytical behaviorism) is the thesis that mental concepts can be explained in terms of behavioral concepts.
Logical behaviorism was first stated by the Vienna Circle, especially Rudolf Carnap. Other philosophers with sympathies for behaviorism included C. G. Hempel, Ludwig Wittgenstein, and W. V. O. Quine (1960). A more moderate form of analytical behaviorism was put forward by the Oxford philosopher Gilbert Ryle in his book The Concept of Mind (1949).
Overview
Generally speaking, analytic behaviourism is the view that propositions about the mind, or about mental states more generally, are reducible to propositions about behaviour. For example, a dualist would take 'Finbarr is in pain' to refer to a private, non-physical mental state within Finbarr's mind. But a behaviourist would say that 'Finbarr is in pain' simply refers to Finbarr's behaviour, or his disposition to behave in a certain way. So, the behaviourist might argue that if Finbarr was crying, and this was the reason that the mental state of pain was attributed to Finbarr, then 'Finbarr is in pain' reduces to 'Finbarr is crying'. In other words, 'Finbarr is in pain' means the same thing as 'Finbarr is crying' as, for a behaviourist, statements about mental states merely refer to people's behaviour, or their dispositions to show certain behaviour (pain behaviour, in Finbarr's case).
Gilbert Ryle
Following Hempel's behaviourist theory (sometimes called hard behaviourism), which alleged that all propositions about mental states were reducible, without loss of meaning, to propositions about bodily states and behaviour, Gilbert Ryle produced a modified, less extreme form of behaviourism (sometimes called soft behaviourism). Ryle sets out in The Concept of Mind to destroy the illusion of Cartesian Dualism, which he says has produced a widespread acceptance of the 'dogma of the ghost in the machine': the belief that the mind is an immaterial 'thing' caged within a body. To introduce his behaviourism, Ryle proposes his great criticism of Cartesianism: that it performs a category mistake. Ryle believes that mind-body dualism mistakenly puts the mind in the category of a 'thing', a non-physical entity that exists, driving our actions. But, says Ryle, the mind is not a thing. It is simply a way of talking about behaviour, specifically the dispositions of people to act in certain ways. So, whereas for Hempel 'Finbarr is in pain' reduces to 'Finbarr is crying', Ryle's soft dispositional analysis might say that it means 'Finbarr has a disposition to cry, or shout in pain, or hold onto something for support'. In other words, the mind is not a thing; propositions about mental states are instead a way to express the dispositions of people to act in certain ways.
Criticisms
Issues raised by Hilary Putnam
Hilary Putnam criticises behaviourism by arguing that it confuses the symptoms of mental states (behaviour) with the mental states themselves. Mental states, says Putnam, are distinct from behaviour, and this is something that behaviourism overlooks. Putnam proposes a thought experiment to show the distinctness of mental states from behaviour, and therefore show behaviourism to be false. In Brains and Behaviour, Putnam gives the example of 'X-Worlders', sometimes called 'super-super Spartans'. These are great warriors who have so strongly repressed the urge to display signs of pain that they no longer have any pain behaviour, nor any disposition to display pain behaviour. When an X-Worlder is stabbed, they feel a terrible pain, and yet they show no pain behaviour, nor do they feel any disposition to show their pain. This, says Putnam, shows behaviourism to be false: in this situation, there is no behaviour corresponding to the X-Worlder's pain, showing mental states to be distinct from behaviour.
Mental states cannot be defined satisfactorily in terms of behaviour
It has also been argued that behaviourist analysis of mental state terms can never truly be adequately completed. This is due, in large part, to the fact that mental states are multiply realisable in behaviour. In other words, the same mental state can be manifested by an infinite number of behaviours. For example, when Finbarr feels pain, he can scream. Or shout. Or do nothing. Or cry himself to sleep. The number of pain behaviours that Finbarr can display is infinite. But, in order to accurately define a mental state in terms of behaviour, all the possible ways in which a mental state might be manifested need to be taken into account. But the ways mental states can be reflected in behaviour are infinite, showing that mental states cannot be adequately defined in terms of behaviour. This is also an issue for Ryle's soft behaviourism, as someone who is, for example, angry has dispositions to manifest this behaviour in an infinite number of ways.
Another issue, on top of multiple realisability, is the fact that behavioural analysis of mental states becomes circular. According to its critics, behaviourism neglects that whether a mental state is manifested in behaviour depends on its interaction with other mental states. For example, my mental state of wanting to drink my glass of water might not be manifested in behaviour (i.e. by me drinking the water) if I believe that my water is poisoned. Ryle says that 'John wants to drink the water' means 'John will drink the water if... or if... or if...', where the 'if...' expresses a condition for John drinking the water: for example, John might drink the water if he's thirsty, or if he's tired, or if he's hallucinating and thinks that it's magical water. But whether John drinks the water is not as straightforward as one disposition to drink water or not; the desire to drink the water also depends on other mental states. If John thinks the water is poisoned, he will not drink. If John does not want to be needing the toilet within the hour, he will not drink the water. Behaviourism needs to recognise all of these other mental states which affect our dispositions, and yet this simply introduces more mental state terminology into the behavioural analysis. This mental state terminology must then be analysed in terms of behaviour to complete the analysis, which will in turn introduce more mental state terminology, as these other mental states that have been introduced also depend on other mental states. Therefore, the analysis from mental state to behaviour is circular and cannot be completed.
The 'Asymmetry' Objection
Behaviourists hold that propositions about mental states are about behaviour. But this does not seem to account for the asymmetry between how I talk about my own mental states and the mental states of others. For a hard behaviourist, the proposition 'I'm afraid' is apparently reducible to a proposition about behaviour; in other words, it means the same thing as a proposition about behaviour. But when I say that I'm afraid, I am not referring to my behaviour. I don't mean 'I'm shivering and crying and have a frown on my face.' I'm referring to my mental state of fear. Therefore, it would seem that the ways I talk about my own mental states and about the mental states of others are radically different. I do not need to observe my own behaviour to know I'm in pain. I just know, through introspection, which would show that a purely behavioural analysis of mental states is inadequate. I apply mental states to myself not because of my behaviour but because I am experiencing the mental states themselves.
Etymology
Logical behaviorism is called "logical" after the idea, adopted by Bertrand Russell, that mathematics can be described in terms of formal logic using set theory, thus making it "scientific", "provable", "specific", consistent and "truthful". In a similar way, it was thought by the Vienna Circle that the phenomena of human mental states such as feelings, perceptions, imaginations etc. can be described in terms of a tendency to behave in a certain way, which could then be tested and explained scientifically through the methods of Behaviorism, whereby everything consists of stimulus-response pairs, with various types of origins and different types of reinforcement.
See also
Behaviorism
Methodological behaviorism
References
Behaviorism
Metaphysics of mind | Logical behaviorism | Biology | 1,789 |
3,064,285 | https://en.wikipedia.org/wiki/Mutual%20authentication | Mutual authentication or two-way authentication (not to be confused with two-factor authentication) refers to two parties authenticating each other at the same time in an authentication protocol. It is a default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS).
Mutual authentication is a desired characteristic in verification schemes that transmit sensitive data, in order to ensure data security. Mutual authentication can be accomplished with two types of credentials: usernames and passwords, and public key certificates.
Mutual authentication is often employed in the Internet of Things (IoT). Writing effective security schemes in IoT systems is challenging, especially when schemes are desired to be lightweight and have low computational costs. Mutual authentication is a crucial security step that can defend against many adversarial attacks, which otherwise can have large consequences if IoT systems (such as e-Healthcare servers) are hacked. In analyses of past schemes, a lack of mutual authentication has been considered a weakness in data transmission schemes.
Process steps and verification
Schemes that have a mutual authentication step may use different methods of encryption, communication, and verification, but they all share one thing in common: each entity involved in the communication is verified. If Alice wants to communicate with Bob, they will both authenticate the other and verify that it is who they are expecting to communicate with before any data or messages are transmitted. A mutual authentication process that exchanges user IDs may be implemented as follows:
Alice sends a message encrypted with Bob's public key to Bob to show that Alice is a valid user.
Bob verifies the message:
Bob checks the format and timestamp. If either is incorrect or invalid, the session is aborted.
The message is then decrypted with Bob's secret key, giving Alice's ID.
Bob checks if the message matches a valid user. If not, the session is aborted.
Bob sends Alice a message back to show that Bob is a valid user.
Alice verifies the message:
Alice checks the format and timestamp. If either is incorrect or invalid, the session is aborted.
The message is then decrypted with Alice's secret key, giving Bob's ID.
Alice checks if the message matches a valid user. If not, the session is aborted.
At this point, both parties are verified to be who they claim to be and safe for the other to communicate with. Lastly, Alice and Bob will create a shared secret key so that they can continue communicating in a secure manner.
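The exchange above can be sketched in code. The following Python fragment, built on the widely used cryptography package, is a minimal illustration only: the identifiers, the 30-second freshness window, and the helper names are invented for the example, and a real protocol would additionally bind the two directions together with nonces and finish with a key-agreement step.

import time
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Each party holds its own private key and the other's public key.
alice_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def make_token(sender_id, recipient_public_key):
    # ID plus timestamp, encrypted so that only the recipient can read it
    message = sender_id + b"|" + str(int(time.time())).encode()
    return recipient_public_key.encrypt(message, OAEP)

def check_token(token, own_key, valid_ids, max_age=30):
    sender_id, timestamp = own_key.decrypt(token, OAEP).split(b"|")
    if sender_id not in valid_ids:               # not a valid user: abort
        raise ValueError("unknown identity")
    if time.time() - int(timestamp) > max_age:   # stale timestamp: abort
        raise ValueError("invalid timestamp")
    return sender_id

# Step 1: Alice proves her identity to Bob; step 2: Bob proves his to Alice.
check_token(make_token(b"alice", bob_key.public_key()), bob_key, {b"alice"})
check_token(make_token(b"bob", alice_key.public_key()), alice_key, {b"bob"})
print("both parties verified; a shared session key can now be derived")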
To verify that mutual authentication has occurred successfully, Burrows-Abadi-Needham logic (BAN logic) is a well-regarded and widely accepted method to use, because it verifies that a message came from a trustworthy entity. BAN logic first assumes that an entity is not to be trusted, and then verifies its legitimacy.
Defenses
Mutual authentication supports zero trust networking because it can protect communications against adversarial attacks, notably:
Man-in-the-middle attack Man-in-the-middle (MITM) attacks are when a third party wishes to eavesdrop or intercept a message, and sometimes alter the intended message for the recipient. The two parties openly receive messages without verifying the sender, so they do not realize an adversary has inserted themselves into the communication line. Mutual authentication can prevent MITM attacks because both the sender and recipient verify each other before sending them their message keys, so if one of the parties is not verified to be who they claim they are, the session will end.
Replay attack A replay attack is similar to a MITM attack, in which older messages are replayed out of context to fool the server. However, this does not work against schemes using mutual authentication, because timestamps are a verification factor used in these protocols. If the change in time is greater than the maximum allowed time delay, the session will be aborted. Similarly, messages can include a randomly generated number to keep track of when a message was sent.
Spoofing attack Spoofing attacks rely on using false data to pose as another user in order to gain access to a server or be identified as someone else. Mutual authentication can prevent spoofing attacks because the server will authenticate the user as well, and verify that they have the correct session key before allowing any further communication and access.
Impersonation attacks Impersonation attacks refer to malicious attacks where a user or individual pretends to be an authorized user to gain unauthorized access to a system while feigning permission. When each party authenticates the other, they send each other a certificate that only the other party knows how to unscramble, verifying themselves as a trusted source. In this way, adversaries cannot use impersonation attacks because they do not have the correct certificate to act as if they are the other party.
Mutual authentication also ensures information integrity because if the parties are verified to be the correct source, then the information received is reliable as well.
mTLS
By default, the TLS protocol only proves the identity of the server to the client using X.509 certificates, and the authentication of the client to the server is left to the application layer. TLS also offers client-to-server authentication using client-side X.509 authentication. Because it requires provisioning certificates to the clients and involves a less user-friendly experience, it is rarely used in end-user applications.
Mutual TLS authentication (mTLS) is more often used in business-to-business (B2B) applications, where a limited number of programmatic and homogeneous clients are connecting to specific web services, the operational burden is limited, and security requirements are usually much higher as compared to consumer environments.
mTLS is also used in microservices-based applications based on runtimes such as Dapr, via systems like SPIFFE.
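As a rough sketch of how mutual TLS is configured in practice, the following Python snippet uses the standard-library ssl module; the certificate and key file names are placeholders, not part of any real deployment.

import ssl

# Server side: present a certificate and demand a valid client certificate.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
server_ctx.load_verify_locations(cafile="clients-ca.crt")
server_ctx.verify_mode = ssl.CERT_REQUIRED   # reject clients without a valid cert

# Client side: verify the server and present a client certificate of its own.
client_ctx = ssl.create_default_context(cafile="server-ca.crt")
client_ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")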
Lightweight schemes vs. secured schemes
While lightweight schemes and secure schemes are not mutually exclusive, adding a mutual authentication step to data transmission protocols can increase runtime and computational costs. This can become an issue for network systems that cannot handle large amounts of data or those that constantly have to update for new real-time data (e.g. location tracking, real-time health data).
Thus, it becomes a desired characteristic of many mutual authentication schemes to have lightweight properties (e.g. have a low memory footprint) in order to accommodate the system that is storing a lot of data. Many systems implement cloud computing, which allows quick access to large amounts of data, but sometimes large amounts of data can slow down communication. Even with edge-based cloud computing, which is faster than general cloud computing due to a closer proximity between the server and user, lightweight schemes allow for more speed when managing larger amounts of data. One solution to keep schemes lightweight during the mutual authentication process is to limit the number of bits used during communication.
Applications that rely solely on device-to-device (D2D) communication, in which multiple devices can communicate locally in close proximity, remove the third-party network. This in turn can speed up communication time. However, the authentication still occurs through insecure channels, so researchers believe it is still important to ensure mutual authentication occurs in order to keep a secure scheme.
Schemes may sacrifice a better runtime or storage cost when ensuring mutual authentication in order to prioritize protecting the sensitive data.
Password-based schemes
In mutual authentication schemes that require a user's input password as part of the verification process, there is a higher vulnerability to hackers because the password is human-made rather than a computer-generated certificate. While applications could simply require users to use a computer-generated password, it is inconvenient for people to remember. User-made passwords and the ability to change one's password are important for making an application user-friendly, so many schemes work to accommodate the characteristic. Researchers note that a password based protocol with mutual authentication is important because user identities and passwords are still protected, as the messages are only readable to the two parties involved.
However, a negative aspect about password-based authentication is that password tables can take up a lot of memory space. One way around using a lot of memory during a password-based authentication scheme is to implement one-time passwords (OTP), which is a password sent to the user via SMS or email. OTPs are time-sensitive, which means that they will expire after a certain amount of time and that memory does not need to be stored.
Multi-factor authentication
Recently, more schemes have moved to higher-level authentication than password-based schemes. While password-based authentication is considered "single-factor authentication," schemes are beginning to implement smart card (two-factor) or biometric-based (three-factor) authentication schemes. Smart cards are simpler to implement and easy to use for authentication, but still carry risks of being tampered with. Biometrics have grown more popular than password-based schemes because it is more difficult to copy or guess session keys when using biometrics, but it can be difficult to encrypt noisy data. Due to these security risks and limitations, schemes can still employ mutual authentication regardless of how many authentication factors are added.
Certificate based schemes and system applications
Mutual authentication is often found in schemes employed in the Internet of Things (IoT), where physical objects are incorporated into the Internet and can communicate via IP address. Authentication schemes can be applied to many types of systems that involve data transmission. As the Internet's presence in mechanical systems increases, writing effective security schemes for large numbers of users, objects, and servers can become challenging, especially when needing schemes to be lightweight and have low computational costs. Instead of password-based authentication, devices will use certificates to verify each other's identities.
Radio networks
Mutual authentication can be satisfied in radio network schemes, where data transmissions through radio frequencies are secure after verifying the sender and receiver.
Radio frequency identification (RFID) tags are commonly used for object detection, which many manufacturers are implementing into their warehouse systems for automation. This allows for a faster way to keep up with inventory and track objects. However, keeping track of items in a system with RFID tags that transmit data to a cloud server increases the chances of security risks, as there are now more digital elements to keep track of. A three way mutual authentication can occur between RFID tags, the tag readers, and the cloud network that stores this data in order to keep RFID tag data secure and unable to be manipulated.
Similarly, an alternate RFID tag and reader system that assigns designated readers to tags has been proposed for extra security and low memory cost. Instead of considering all tag readers as one entity, only certain readers can read specific tags. With this method, if a reader is breached, it will not affect the whole system. Individual readers will communicate with specific tags during mutual authentication, which runs in constant time as readers use the same private key for the authentication process.
Many e-Healthcare systems that remotely monitor patient health data use wireless body area networks (WBAN) that transmit data through radio frequencies. This is beneficial for patients that should not be disturbed while being monitored, and can reduce the workload for medical workers and allow them to focus on the more hands-on jobs. However, a large concern for healthcare providers and patients about using remote health data tracking is that sensitive patient data is being transmitted through unsecured channels, so authentication occurs between the medical body area network user (the patient), the Healthcare Service Provider (HSP) and the trusted third party.
Cloud based computing
e-Healthcare clouds are another way to store patient data collected remotely. Clouds are useful for storing large amounts of data, such as medical information, that can be accessed by many devices whenever needed. Telecare Medical Information Systems (TMIS), an important way for medical patients to receive healthcare remotely, can ensure secured data with mutual authentication verification schemes. Blockchain is one way that has been proposed to mutually authenticate the user to the database, by authenticating with the main mediBchain node and keeping patient anonymity.
Fog-cloud computing is a networking system that can handle large amounts of data, but still has limitations regarding computational and memory cost. Mobile edge computing (MEC) is considered to be an improved, more lightweight fog-cloud computing networking system, and can be used for medical technology that also revolves around location-based data. Due to the large physical range required of locational tracking, 5G networks can send data to the edge of the cloud to store data. An application like smart watches that track patient health data can be used to call the nearest hospital if the patient shows a negative change in vitals.
Fog node networks can be implemented in car automation, keeping data about the car and its surrounding states secure. By authenticating the fog nodes and the vehicle, vehicular handoff becomes a safe process and the car’s system is safe from hackers.
Machine to machine verification
Many systems that do not require a human user as part of the system also have protocols that mutually authenticate between parties. In unmanned aerial vehicle (UAV) systems, a platform authentication occurs rather than user authentication. Mutual authentication during vehicle communication prevents one vehicle's system from being breached, which can then affect the whole system negatively. For example, a system of drones can be employed for agriculture work and cargo delivery, but if one drone were to be breached, the whole system has the potential to collapse.
External links
Two types of Mutual Authentication
References
Authentication methods
Computer access control | Mutual authentication | Engineering | 2,772 |
66,307,448 | https://en.wikipedia.org/wiki/Neural%20network%20quantum%20states | Neural Network Quantum States (NQS or NNQS) is a general class of variational quantum states parameterized in terms of an artificial neural network. It was first introduced in 2017 by the physicists Giuseppe Carleo and Matthias Troyer to approximate wave functions of many-body quantum systems.
Given a many-body quantum state comprising N degrees of freedom and a choice of associated quantum numbers s_1, …, s_N, then an NQS parameterizes the wave-function amplitudes

Ψ(s_1, …, s_N; W) = F(s_1, …, s_N; W),

where F is an artificial neural network of parameters (weights) W, N input variables (s_1, …, s_N) and one complex-valued output corresponding to the wave-function amplitude.
This variational form is used in conjunction with specific stochastic learning approaches to approximate quantum states of interest.
Learning the Ground-State Wave Function
One common application of NQS is to find an approximate representation of the ground state wave function of a given Hamiltonian H. The learning procedure in this case consists in finding the best neural-network weights W that minimize the variational energy

E(W) = ⟨Ψ|H|Ψ⟩ / ⟨Ψ|Ψ⟩.

Since, for a general artificial neural network, computing the expectation value ⟨Ψ|H|Ψ⟩ is an exponentially costly operation in N, stochastic techniques based, for example, on the Monte Carlo method are used to estimate E(W), analogously to what is done in Variational Monte Carlo (see the references for a review). More specifically, a set of M samples s(1), …, s(M) is generated such that they are distributed according to the Born probability density P(s) = |Ψ(s; W)|² / ⟨Ψ|Ψ⟩. Then it can be shown that the sample mean of the so-called "local energy" E_loc(s) = ⟨s|H|Ψ⟩ / ⟨s|Ψ⟩ is a statistical estimate of the quantum expectation value E(W), i.e.

E(W) ≈ (1/M) Σ_k E_loc(s(k)).
Similarly, it can be shown that the gradient of the energy with respect to the network weights W is also approximated by a sample mean

∂E/∂W_k ≈ 2 Re[ ⟨E_loc(s) O_k*(s)⟩ − ⟨E_loc(s)⟩ ⟨O_k*(s)⟩ ],

where O_k(s) = ∂ log Ψ(s; W) / ∂W_k, and these log-derivatives can be efficiently computed, in deep networks, through backpropagation.
The stochastic approximation of the gradients is then used to minimize the energy, typically using a stochastic gradient descent approach. When the neural-network parameters are updated at each step of the learning procedure, a new set of samples is generated, in an iterative procedure similar to what is done in unsupervised learning.
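As a concrete illustration of this loop, the following NumPy sketch estimates the variational energy of a small spin chain. The real-weight restricted Boltzmann machine ansatz, the transverse-field Ising Hamiltonian, and all sizes and constants are assumptions made for the example (they are not fixed by the method itself), and the gradient/update step is omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)
N, M, h = 8, 4, 1.0                       # spins, hidden units, field (example values)
W = 0.01 * rng.standard_normal((M, N))    # weights of a real-valued RBM ansatz
a, b = np.zeros(N), np.zeros(M)           # visible and hidden biases

def log_psi(s):
    # log amplitude of the RBM ansatz: a.s + sum_j log 2cosh(b_j + (W s)_j)
    return a @ s + np.sum(np.log(2 * np.cosh(b + W @ s)))

def local_energy(s):
    # E_loc(s) = <s|H|psi>/<s|psi> for a 1D transverse-field Ising chain
    e = -np.sum(s * np.roll(s, -1))       # diagonal part: -sum_i s_i s_{i+1}
    for i in range(N):                    # off-diagonal part: -h sum_i sigma^x_i
        s2 = s.copy(); s2[i] *= -1
        e -= h * np.exp(log_psi(s2) - log_psi(s))
    return e

def sample(n_steps=4000):
    # Metropolis sampling of the Born density |psi(s)|^2 by single spin flips
    s, out = rng.choice([-1.0, 1.0], size=N), []
    for t in range(n_steps):
        i = rng.integers(N)
        s2 = s.copy(); s2[i] *= -1
        if rng.random() < np.exp(2 * (log_psi(s2) - log_psi(s))):
            s = s2
        if t >= n_steps // 2:             # keep samples only after burn-in
            out.append(s.copy())
    return out

E = np.mean([local_energy(s) for s in sample()])
print("estimated variational energy:", E)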
Connection with Tensor Networks
Neural-network representations of quantum wave functions share some similarities with variational quantum states based on tensor networks. For example, connections with matrix product states have been established. These studies have shown that NQS support volume-law scaling for the entropy of entanglement. In general, given an NQS with fully-connected weights, it corresponds, in the worst case, to a matrix product state of bond dimension exponentially large in N.
See also
Differentiable programming
References
Quantum mechanics
Quantum Monte Carlo
Machine learning | Neural network quantum states | Physics,Chemistry,Engineering | 530 |
16,735,326 | https://en.wikipedia.org/wiki/Stave%20Puzzles | Stave Puzzles is an American jigsaw puzzle company located in Norwich, Vermont. The company was started in 1974 by Steve Richardson and Dave Tibbetts and was called Stave—a portmanteau of their first names. They manufacture hand cut jigsaw puzzles made from cherry-backed, 5-layered, wood. Stave produces several different puzzles types ranging from traditional puzzles, teaser puzzles which can have many open areas within the puzzles, trick puzzles in which the puzzles can be put together in two or more ways of which only one is correct. They also create three-dimensional puzzles, limited edition puzzles, and complete custom puzzles. Each puzzle is provided in a green and blue box and does not include a picture of the completed puzzle. Stave Puzzles is the largest hand-cut jigsaw puzzle company in the United States and competes with laser-cutting companies like Liberty Puzzles and Artifact Puzzles.
Company history
Steve Richardson moved from New Jersey to Vermont in 1969 and started a game design business with Dave Tibbetts. In 1974, Richardson was offered US$300 to make a wooden jigsaw puzzle. Richardson and Tibbetts founded Stave Puzzles in the same year. In 1976, Richardson bought out Tibbetts' share of the company for US$1 and a jigsaw. He built a small shop behind his garage and hired his first employee.
In 1983, Stave introduced their first 2-Way Trick Puzzle, called Go Fish. In 1989, Stave Puzzles released an April Fools' Day joke puzzle called 5 Easy Pieces, which had no solution. The puzzle's first thirty buyers were refunded their purchase price. Owners of Stave Puzzles include Queen Elizabeth II, Barbara Bush, Stephen King, Julie Andrews, Tom Peters, and Bill Gates. In 1990, Stave Puzzles was listed in the Guinness Book of World Records as having the most expensive jigsaw puzzle. Stave Puzzles was named by Tom Peters as the 1991 Product of the Year.
Products
Traditional puzzles
Stave produces traditional rectangular puzzles that range in size from (75 pieces) to (1000 pieces). For every hundred pieces, five custom pieces (such as dates, names, or silhouettes) can be cut into the puzzle.
Teaser puzzles
Stave's Teaser puzzles are designed in such a way as to make assembly of the jigsaw puzzle harder than in a traditional jigsaw creation. Stave commissions original artwork for these puzzles. Illustrators and Stave craftspeople work together on the design to reduce the number of visual cues that would normally make it easy to put together a traditional puzzle.
A typical Teaser design has some areas that are similar to traditional puzzles, making it easier to assemble some of the puzzle. However, in the center of the puzzle, or in other separate areas, holes are left into which many pieces have to fit. These pieces may be silhouettes of shapes that are representative of objects, people, animals, etc. It is not apparent how they fit together in the holes of the puzzle until they are played with and studied. The difficulty rating system for Teasers is measured on a scale of one to four swords, with four swords being the most difficult.
Trick puzzles
Steve Richardson earned the name Chief Tormentor for inventing the Trick puzzle, a puzzle genre in which some pieces fit in two or more different places, but only one of the solutions is considered correct. The object of a Trick puzzle is detailed on a small block of wood that accompanies Trick puzzles.
An example of a Stave Trick puzzle is Champ, which is made up of 44 blue pieces and fits together 32 different ways, only one of which is correct where the serpent eats its own tail. The difficulty rating system for Trick puzzles is measured on a scale of one to five lightning bolts, with five being the most difficult.
Limited-Edition puzzles
Stave Limited Edition puzzles are produced from custom-commissioned artwork and sold in a limited quantity. A typical limited edition has included only 50 units, although some runs have included as many as 100 units. The four main types of limited edition puzzles include: Double Deckers; Riddle; Mystery Story; and Trick. Some of the limited edition puzzles are hand painted (as opposed to a print affixed to the wood).
In general, the limited editions also include items that fit the theme and help guide the player through the puzzle and additional games. For example, the Limited Edition Trick Puzzle Time Traveler comes with several hand crafted booklets to link the puzzle to the time-travel theme underlying the puzzle. The theme of the puzzle is major events (cultural, historical, scientific discoveries, etc.) from 1000 CE to 2000 CE. The booklets guide you through untangling a set of chronological mishaps caused by an evildoer.
Trick: The limited edition trick puzzles are similar in general challenge types to the other trick puzzles sold by Stave. The main difference is greater theming and integration. Taking Time Traveler as an example, the entire puzzle has a theme, and solving the tricks and re-arranging the pieces is linked to a story and learning about history. Other puzzles such as Knight at Stavely Castle include 3D pop-ups such as an entire castle façade that goes together multiple ways and a 3D sword that you need to remove from a stone. These trick puzzles are thus more intense than the generally available trick puzzles because of the themed linking between the puzzle and accompanying materials. Olivia the octopus has 10,000 possible solutions.
Custom puzzles
Custom puzzles are full custom work designed in conjunction with the crafters at Stave.
Samples of unique features
List of Limited Editions
Some of the Limited Editions which have been released by Stave include:
References
External links
Stave Puzzles Official Web Site: www.stave.com
Design companies established in 1974
Jigsaw puzzle manufacturers
Manufacturing companies based in Vermont
Privately held companies based in Vermont
Toy companies of the United States
Wooden toys
1974 establishments in Vermont
Mechanical puzzles | Stave Puzzles | Mathematics | 1,207 |
10,399,425 | https://en.wikipedia.org/wiki/Thermostatic%20mixing%20valve | A thermostatic mixing valve (TMV) is a valve that blends hot water with cold water to ensure constant, safe shower and bath outlet temperatures to prevent scalding.
The storage of water at high temperature removes one possible breeding ground for Legionella; the use of a thermostat, rather than a static mixing valve, provides increased safety against scalding, and increased user comfort, because the hot-water temperature remains constant.
Many TMVs use a wax thermostat for regulation. They also shut off rapidly in the event of a hot or cold supply failure to prevent scalding or thermal shock.
It is increasingly common practice around the world to regulate the storage water temperature to above 60 °C, and to circulate or distribute water at a temperature less than 50 °C. Water above these temperatures can cause scald injuries. Many countries, states, or municipalities now require that the temperature of all bath water in newly built and extensively refurbished domestic properties be controlled to a maximum of 48 °C. Installing thermostatic mixing valves can ensure that water is delivered at the required temperature, thereby reducing the risk of scalding accidents; it also reduces hot water consumption from a supply that is maintained at a higher temperature.
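The blend itself follows a simple steady-state mixing balance: the outlet temperature is the flow-weighted average of the hot and cold supply temperatures. The short Python sketch below solves that balance for the required hot-water fraction; the temperatures are illustrative example values, and the calculation assumes equal specific heats and no heat losses.

def hot_fraction(t_hot, t_cold, t_mix):
    # from the mixing balance t_mix = f * t_hot + (1 - f) * t_cold
    return (t_mix - t_cold) / (t_hot - t_cold)

# e.g. 60 C storage, 10 C cold mains, 43 C blended shower outlet
print(round(hot_fraction(60, 10, 43), 2))  # -> 0.66, i.e. 66% hot water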
There are three main categories for water temperature controlling devices: Heat Source, Group Control, and Point-of-Use.
Heat Source
These are used with central heating systems that use water as a medium.
Tempering valves for use on hot water heat distribution systems
High flow rates suitable for use in under floor (radiant) heating applications
Allows water to be stored at a higher temperature
Group Control
These provide a uniform distribution temperature for all hot water outlets in a household.
Designed for multi-point applications
High flow rates
Temperature stability
Point-of-Use
These are single Outlet Thermostatic Mixing Valves, often called "thermostatic faucets", "thermostat taps" or "thermostat valves".
Designed for single point applications, such as individual showering, hand wash basin mixers, bath or tub fillers
High level protection against scalding and thermal shock
Although other temperature regulating valves exist, thermostatic mixing valves are the preferred type in health care facilities, as they limit maximum outlet temperature, regardless of pressure or flow.
See also
Thermostatic radiator valve
Pressure-balanced valve
References
Thermostatic mixing valve patent
External links
Thermostatic Mixing Valve Manufacturers Association
Plumbing
Temperature control | Thermostatic mixing valve | Technology,Engineering | 503 |
49,295,575 | https://en.wikipedia.org/wiki/JExcel | JExcel is a library (API) to read, write, display, and modify Excel files with .xls or .xlsx formats. API can be embedded with Java Swing and AWT.
JExcel support is discontinued as of May 31, 2020.
Some features
Some main features are as follows:
Automate Excel application, workbooks, spreadsheets, etc.
Embed workbooks in a Java Swing application as ordinary Swing component
Add event listeners to workbooks and spreadsheets
Add event handlers to handle the behavior of workbook and spreadsheet events
Add native peers to develop custom functionality.
Usage
Primary usage is handling Excel files through its API.
Example
Sample code for reading/writing workbook attributes, setting a password, and saving in the MS Excel 2003 format might look as follows:
import com.jniwrapper.win32.jexcel.Application;
import com.jniwrapper.win32.jexcel.FileFormat;
import com.jniwrapper.win32.jexcel.GenericWorkbook;
import com.jniwrapper.win32.jexcel.Workbook;
import java.io.File;
/**
* This sample shows how to read/modify workbook attributes, how to save workbook in Excel 2003 format,
* and how to reopen workbook.
*
* The sample works with MS Excel in non-embedded mode.
*/
public class WorkbookSample
{
    public static void main(String[] args) throws Exception
    {
        // Start the MS Excel application, create a workbook and make it visible.
        // The application starts invisible and without any workbooks
        Application application = new Application();
        Workbook workbook = application.createWorkbook("Custom title");

        printWorkbookAttributes(workbook);
        modifyWorkbookAttributes(workbook);

        File newFile = new File("Workbook.xls");

        // Save the workbook in Excel 2003 format; to save in Excel 2007 format
        // use the FileFormat.OPENXMLWORKBOOK format specifier and the *.xlsx extension
        workbook.saveAs(newFile, FileFormat.WORKBOOKNORMAL, true);

        File workbookCopy = new File("WorkbookCopy.xls");
        workbook.saveCopyAs(workbookCopy);

        // Close the workbook, saving changes
        workbook.close(true);

        // Reopen the workbook
        workbook = application.openWorkbook(newFile, true, "xxx001");
        printWorkbookAttributes(workbook);

        // Perform cleanup and close the MS Excel application, forcing it to quit
        application.close(true);
    }

    /**
     * Prints workbook attributes to the console
     * @param workbook - workbook to print information about
     */
    public static void printWorkbookAttributes(GenericWorkbook workbook)
    {
        String fileName = workbook.getFile().getAbsolutePath();
        String name = workbook.getWorkbookName();
        String title = workbook.getTitle();
        String author = workbook.getAuthor();

        System.out.println("\n[Workbook Information]");
        System.out.println("File path: " + fileName);
        System.out.println("Name: " + name);
        System.out.println("Title: " + title);
        System.out.println("Author: " + author);

        if (workbook.hasPassword())
        {
            System.out.println("The workbook is protected with a password");
        }
        else
        {
            System.out.println("The workbook is not protected with a password");
        }
        if (workbook.isReadOnly())
        {
            System.out.println("Read only mode");
        }
    }

    /**
     * Modifies the workbook title and author and sets a password
     * @param workbook - workbook whose attributes are modified
     */
    public static void modifyWorkbookAttributes(GenericWorkbook workbook)
    {
        workbook.setTitle("X-files");
        workbook.setPassword("xxx001");
        workbook.setAuthor("Agent Smith");
    }
}
See also
Apache POI
JXL (API)
Open Packaging Conventions
Office Open XML software
References
External links
– the official JExcel page.
- the JExcel Support website containing documentation, release notes and examples.
Microsoft Office-related software
Java platform
Java (programming language) libraries | JExcel | Technology | 998 |
9,758,356 | https://en.wikipedia.org/wiki/Latticework |
Latticework is an openwork framework consisting of a criss-crossed pattern of strips of building material, typically wood or metal. The design is created by crossing the strips to form a grid or weave.
Latticework may be functional – for example, to allow airflow to or through an area; structural, as a truss in a lattice girder; used to add privacy, as through a lattice screen; purely decorative; or some combination of these.
Latticework in stone or wood from the classical period is also called Roman lattice or transenna (plural transenne).
In India, the house of a rich or noble person may be built with a baramdah or verandah surrounding every level leading to the living area. The upper floors often have balconies overlooking the street that are shielded by latticed screens carved in stone called jalis which keep the area cool and give privacy.
Examples
See also
Brise soleil
Jali
Lattice tower
Lattice truss bridge
Lattice stool
Mashrabiya
Mesh
Pergola
Reticulum
Tessellation
Trellis (architecture)
Truss
Wattle (construction)
Yurt
Notes
External links
Architectural elements
Garden features
Gardening aids
Construction | Latticework | Technology,Engineering | 233 |
42,786,057 | https://en.wikipedia.org/wiki/75V-2621%20virus | The 75V-2621 virus (Pueblo Viejo virus) is a strain of Gamboa virus in the genus Bunyavirus. It was first isolated in the mosquito Aedeomyia squamipennis in Vinces, Ecuador in 1974. Ad. squamipennis appears to be the vector and birds a host, including the chicken Gallus gallus domesticus under experimental conditions. It has only been isolated in the tropical regions of Central and South America. It has not be shown to cause disease in humans, or domestic and wild animals; however, in a 2018 study, antibodies against the Gamboa virus were found in birds (6.2%), humans (1.5%), and other wild animals (2.6%).
References
Orthobunyaviruses | 75V-2621 virus | Biology | 168 |
15,448,809 | https://en.wikipedia.org/wiki/Rain-out%20model | The rain-out model is a model of planetary science that describes the first stage of planetary differentiation and core formation. According to this model, a planetary body is assumed to be composed primarily of silicate minerals and NiFe (i.e. a mixture of nickel and iron). If temperatures within this body reach about 1500 K, the minerals and the metals will melt. This will produce an emulsion in which globules of liquid NiFe are dispersed in a magma of liquid silicates, the two being immiscible. Because the NiFe globules are denser than the silicates, they will sink under the influence of gravity to the centre of the planetary body—in effect, the globules of metal will "rain out" from the emulsion to the centre, forming a core.
According to the rain-out model, core formation was a relatively rapid process, taking a few dozen millennia to reach completion. This occurred at the end of a lengthy process in which the planets were assembled from colliding planetary embryos. Only the collisions of such large embryos could generate enough heat to melt entire bodies. Furthermore, it was only after all of the iron and nickel delivered by impacting bodies had arrived that core formation could proceed to completion.
However, this process of core formation was preceded by a long period of partial differentiation, in which some of the nickel and iron within the planetary embryos had begun to separate.
The rain-out model can be invoked to explain core formation in all the terrestrial planets, given that these consist primarily of silicates, nickel and iron. It can also be adapted to account for core formation in smaller bodies composed of ices and silicates. In such a case, it would be the denser silicates which would rain out to form a rocky core, while the volatile components would form an icy mantle.
See also
Iron catastrophe
References
Planetary science
Scientific models | Rain-out model | Astronomy | 397 |
22,603,634 | https://en.wikipedia.org/wiki/Symmetric%20closure | In mathematics, the symmetric closure of a binary relation on a set is the smallest symmetric relation on that contains
For example, if X is a set of airports and x R y means "there is a direct flight from airport x to airport y", then the symmetric closure of R is the relation "there is a direct flight either from x to y or from y to x". Or, if X is the set of humans and R is the relation 'parent of', then the symmetric closure of R is the relation "x is a parent or a child of y".
Definition
The symmetric closure S of a relation R on a set X is given by

S = R ∪ {(y, x) : (x, y) ∈ R}.

In other words, the symmetric closure of R is the union of R with its converse relation, R^T.
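For a finite relation stored as a set of ordered pairs, this definition translates directly into code; the elements in this short Python sketch are made-up examples.

R = {("a", "b"), ("b", "c")}
converse = {(y, x) for (x, y) in R}       # the converse relation
symmetric_closure = R | converse          # union of R with its converse
print(symmetric_closure)
# {('a', 'b'), ('b', 'a'), ('b', 'c'), ('c', 'b')}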
See also
Reflexive closure
Transitive closure
References
Franz Baader and Tobias Nipkow, Term Rewriting and All That, Cambridge University Press, 1998, p. 8
Binary relations
Closure operators
Rewriting systems | Symmetric closure | Mathematics | 169 |
42,983,741 | https://en.wikipedia.org/wiki/Julian%20Parkhill | Julian Parkhill (born 1964) is Professor of Bacterial Evolution in the Department of Veterinary Medicine at the University of Cambridge. He previously served as head of pathogen genomics at the Wellcome Sanger Institute.
Education
Parkhill was educated at Westcliff High School for Boys, the University of Birmingham and the University of Bristol where he was awarded a PhD in 1991 for research into the regulation of transcription of the mercury resistance operon.
Career and research
Parkhill uses high-throughput sequencing and phenotyping to study pathogen diversity and variation, how they affect virulence and transmission, and what they tell us about the evolution of pathogenicity and host–pathogen interaction. Research in the Parkhill Laboratory has been funded by the Wellcome Trust, the Biotechnology and Biological Sciences Research Council (BBSRC) and the Medical Research Council (MRC).
Awards and honours
Parkhill was elected a Fellow of the Academy of Medical Sciences (FMedSci) in 2009, and a Fellow of the American Academy of Microbiology (FAAM) in 2012.
Parkhill was elected a Fellow of the Royal Society (FRS) in 2014.
References
Fellows of the Royal Society
Living people
Academics of the University of Birmingham
Pathogen genomics
People educated at Westcliff High School for Boys
1964 births
Alumni of the University of Birmingham | Julian Parkhill | Biology | 271 |
858,993 | https://en.wikipedia.org/wiki/Dependency%20hell | Dependency hell is a colloquial term for the frustration of some software users who have installed software packages which have dependencies on specific versions of other software packages.
The dependency issue arises when several packages have dependencies on the same shared packages or libraries, but they depend on different and incompatible versions of the shared packages. If the shared package or library can only be installed in a single version, the user may need to address the problem by obtaining newer or older versions of the dependent packages. This, in turn, may break other dependencies and push the problem to another set of packages.
Problems
Dependency hell takes several forms:
Many dependencies
An application depends on many libraries, requiring lengthy downloads and large amounts of disk space; its portability depends on all of those libraries already having been ported (only then can the application itself be ported easily). It can also be difficult to locate all the dependencies, which can be fixed by having a repository (see below). This is partly inevitable; an application built on a given computing platform (such as Java) requires that platform to be installed, but further applications do not require it. This is a particular problem if an application uses a small part of a big library (which can be solved by code refactoring), or a simple application relies on many libraries.
Long chains of dependencies
If app depends on liba, which depends on libb, ..., which depends on libz. This is distinct from "many dependencies" if the dependencies must be resolved manually, e.g., on attempting to install app, the user is prompted to install liba first, and on attempting to install liba, the user is then prompted to install libb, and so on. Sometimes, however, during this long chain of dependencies, conflicts arise where two different versions of the same package are required (see conflicting dependencies below). These long chains of dependencies can be solved by having a package manager that resolves all dependencies automatically. Other than being a hassle (to resolve all the dependencies manually), manual resolution can mask dependency cycles or conflicts.
Conflicting dependencies
Solving the dependencies for one software may break the compatibility of another in a similar fashion to whack-a-mole. If app1 depends on libfoo 1.2, and app2 depends on libfoo 1.3, and different versions of libfoo cannot be simultaneously installed, then app1 and app2 cannot simultaneously be used (or installed, if the installer checks dependencies). When possible, this is solved by allowing simultaneous installations of the different dependencies. Alternatively, the existing dependency, along with all software that depends on it, must be uninstalled in order to install the new dependency. A problem on Linux systems with installing packages from a different distributor is that the resulting long chain of dependencies may lead to a conflicting version of the C standard library (e.g. the GNU C Library), on which thousands of packages depend. If this happens, the user will be prompted to uninstall all of those packages.
Circular dependencies
If application A depends upon and can't run without a specific version of application B, but B, in turn, depends upon and can't run without a specific version of A, then upgrading either application will break the other. This scheme can be deeper in branching. Its impact can be quite heavy if it affects core systems or update software itself: a package manager (A), which requires a specific run-time library (B) to function, may break itself (A) in the middle of the process when upgrading this library (B) to the next version. Due to the incorrect library (B) version, the package manager (A) is now broken, thus no rollback or downgrade of library (B) is possible. The usual solution is to download and deploy both applications, sometimes from within a temporary environment.
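Detecting such cycles amounts to finding a back edge in the package dependency graph. The following Python sketch (the package names are invented for the example) returns one dependency cycle via depth-first search:

def find_cycle(deps):
    # deps maps each package to the list of packages it depends on
    state, stack = {}, []

    def visit(p):
        state[p] = "active"               # p is on the current DFS path
        stack.append(p)
        for q in deps.get(p, ()):
            if state.get(q) == "active":  # back edge: closes a cycle
                return stack[stack.index(q):] + [q]
            if state.get(q) is None:
                cycle = visit(q)
                if cycle:
                    return cycle
        stack.pop()
        state[p] = "done"
        return None

    for p in deps:
        if state.get(p) is None:
            cycle = visit(p)
            if cycle:
                return cycle
    return None

print(find_cycle({"A": ["B"], "B": ["A"]}))  # -> ['A', 'B', 'A']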
Package manager dependencies
It is possible for dependency hell to result from installing a prepared package via a package manager (e.g. APT), but this is unlikely since major package managers have matured and official repositories are well maintained. This is the case with current releases of Debian and major derivatives such as Ubuntu. Dependency hell, however, can result from installing a package directly via a package installer (e.g. RPM or dpkg).
Diamond dependency
When a library A depends on libraries B and C, both B and C depend on library D, but B requires version D.1 and C requires version D.2. The build fails because only one version of D can exist in the final executable.
Package managers like yum are prone to have conflicts between packages of their repositories, causing dependency hell in Linux distributions such as CentOS and Red Hat Enterprise Linux.
Solutions
Removing dependencies
Many software libraries are written in a generous way, in an attempt to fulfill most users' needs, but sometimes only a small portion of functions are required in the host code. By examining the source, the functionality can be rewritten in a much more compact way (with respect to the license). In general, this can significantly reduce the application code, reduce later maintenance costs, and improve the software writing skills of programmers.
Version numbering
A very common solution to this problem is to have a standardized numbering system, wherein software uses a specific number for each version (aka major version), and also a subnumber for each revision (aka minor version), e.g.: 10.1, or 5.7. The major version only changes when programs that used that version will no longer be compatible. The minor version might change with even a simple revision that does not prevent other software from working with it. In cases like this, software packages can then simply request a component that has a particular major version, and any minor version (greater than or equal to a particular minor version). As such, they will continue to work, and dependencies will be resolved successfully, even if the minor version changes. Semantic Versioning (aka "SemVer") is one example of an effort to generate a technical specification that employs specifically formatted numbers to create a software versioning scheme.
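Under such a scheme, checking a dependency reduces to comparing version components: the major versions must match exactly, and the installed minor version must be at least the requested one. A minimal Python sketch of this rule follows; the (major, minor) tuple representation is an assumption made for the example.

def compatible(required, installed):
    # same major version, and a minor version at least as high
    return installed[0] == required[0] and installed[1] >= required[1]

print(compatible((10, 1), (10, 4)))  # True: a minor bump is non-breaking
print(compatible((10, 1), (11, 0)))  # False: a major bump may break callers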
Private per application versions
Windows File Protection introduced in Windows 2000 prevented applications from overwriting system DLLs. Developers were instead encouraged to use "Private DLLs", copies of libraries per application in the directory of the application. This uses the Windows search path characteristic that the local path is always prioritized before the system directory with the system wide libraries. This allows easy and effective shadowing of library versions by specific application ones, therefore preventing dependency hell.
PC-BSD, up to and including version 8.2, a predecessor of TrueOS (an operating system based on FreeBSD) places packages and dependencies into self-contained directories in /Programs, which avoids breakage if system libraries are upgraded or changed. It uses its own "PBI" (Push Button Installer) for package management.
Side-by-side installation of multiple versions
The version numbering solution can be improved upon by elevating the version numbering to an operating system supported feature. This allows an application to request a module/library by a unique name and version number constraints, effectively transferring the responsibility for brokering library/module versions from the applications to the operating system. A shared module can then be placed in a central repository without the risk of breaking applications which are dependent on previous or later versions of the module. Each version gets its own entry, side by side with other versions of the same module.
This solution is used in Microsoft Windows operating systems since Windows Vista, where the Global Assembly Cache is an implementation of such a central registry with associated services and integrated with the installation system/package manager. Gentoo Linux solves this problem with a concept called slotting, which allows multiple versions of shared libraries to be installed.
Smart package management
Some package managers can perform smart upgrades, in which interdependent software components are upgraded at the same time, thereby resolving the major number incompatibility issue too.
Many current Linux distributions have also implemented repository-based package management systems to try to solve the dependency problem. These systems are a layer on top of the RPM, dpkg, or other packaging systems that are designed to automatically resolve dependencies by searching in predefined software repositories. Examples of these systems include Apt, Yum, Urpmi, ZYpp, Portage, Pacman and others. Typically, the software repositories are FTP sites or websites, directories on the local computer or shared across a network or, much less commonly, directories on removable media such as CDs or DVDs. This eliminates dependency hell for software packaged in those repositories, which are typically maintained by the Linux distribution provider and mirrored worldwide. Although these repositories are often huge, it is not possible to have every piece of software in them, so dependency hell can still occur. In all cases, dependency hell is still faced by the repository maintainers.
Installer options
Because different pieces of software have different dependencies, it is possible to get into a vicious circle of dependency requirements, or an ever-expanding tree of requirements, as each new package demands several more be installed. Systems such as Debian's Advanced Packaging Tool can resolve this by presenting the user with a range of solutions, and allowing the user to accept or reject the solutions, as desired.
Easy adaptability in programming
If application software is designed in such a way that its programmers are able to easily adapt the interface layer that deals with the OS, window manager or desktop environment to new or changing standards, then the programmers would only have to monitor notifications from the environment creators or component library designers and quickly adjust their software with updates for their users, all with minimal effort and a lack of costly and time-consuming redesign. This method would encourage programmers to pressure those upon whom they depend to maintain a reasonable notification process that is not onerous to anyone involved.
Strict compatibility requirement in code development and maintenance
If the applications and libraries are developed and maintained with guaranteed downward compatibility in mind, any application or library can be replaced with a newer version at any time without breaking anything. While this does not alleviate the multitude of dependency, it does make the jobs of package managers or installers much easier.
Software appliances
Another approach to avoiding dependency issues is to deploy applications as a software appliance. A software appliance encapsulates dependencies in a pre-integrated self-contained unit such that users no longer have to worry about resolving software dependencies. Instead the burden is shifted to developers of the software appliance. Containers and their images (such as those provided by Docker and Docker Hub) can be seen as an implementation of software appliances.
Portable applications
An application (or version of an existing conventional application) that is completely self-contained and requires nothing to be already installed. It is coded to have all necessary components included, or is designed to keep all necessary files within its own directory, and will not create a dependency problem. These are often able to run independently of the system to which they are connected. Applications in RISC OS and the ROX Desktop for Linux use application directories, which work in much the same way: programs and their dependencies are self-contained in their own directories (folders).
This method of distribution has also proven useful when porting applications designed for Unix-like platforms to Windows, the most noticeable drawback being multiple installations of the same shared libraries. For example, Windows installers for gedit, GIMP, and HexChat all include identical copies of the GTK toolkit, which these programs use to render widgets. On the other hand, if different versions of GTK are required by each application, then this is the correct behavior and successfully avoids dependency hell.
Platform-specific
On specific computing platforms, "dependency hell" often goes by a local name, generally derived from the name of the components involved.
DLL Hell – a form of dependency hell occurring on 16-bit Microsoft Windows.
Extension conflict – a form of dependency hell occurring on the classic Mac OS.
JAR hell – a form of dependency hell occurring in the Java Runtime Environment before build tools such as Apache Maven began to address the problem in 2004.
RPM hell – a form of dependency hell occurring in the Red Hat distribution of Linux and other distributions that use RPM as a package manager.
See also
Catch-22 – a situation in which solving a problem depends on contradictory circumstances, named after a concept described in a 1961 novel
Configuration management – techniques and tools for managing software versions
Coupling – forms of dependency among software artifacts
Dynamic dead code elimination
Package manager
PBI
Software appliance
Static library
Supply chain attack
Nix package manager
Left-pad
References
Package management systems
Version control systems
Computer errors
Software engineering folklore | Dependency hell | Technology,Engineering | 2,580 |
16,796,749 | https://en.wikipedia.org/wiki/HD%20121504%20b | HD 121504 b is an exoplanet that is likely to be slightly less massive than Jupiter. Although the radial velocity method that was used to detect the planet can only measure the minimum mass of the planet, it is very unlikely that its true mass would be much higher.
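A brief aside on the geometry behind this statement (the relation below is the standard minimum-mass formula for radial-velocity detections, not a measurement specific to this planet): the method only constrains the product of the true mass with the sine of the unknown orbital inclination $i$,
\[ M_{\min} = M_{\text{true}} \sin i \quad\Longrightarrow\quad M_{\text{true}} = \frac{M_{\min}}{\sin i}, \qquad 0 < \sin i \le 1, \]
so the true mass exceeds the minimum mass by a large factor only for nearly face-on orbits (small $i$), which are statistically rare.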
HD 121504 b orbits its host star at a distance of about one third of Earth's distance from the Sun, and has a slightly eccentric orbit.
References
External links
Exoplanets discovered in 2000
Giant planets
Centaurus
Exoplanets detected by radial velocity | HD 121504 b | Astronomy | 112 |
7,170,579 | https://en.wikipedia.org/wiki/Thermophoresis | Thermophoresis (also thermomigration, thermodiffusion, the Soret effect, or the Ludwig–Soret effect) is a phenomenon observed in mixtures of mobile particles where the different particle types exhibit different responses to the force of a temperature gradient. This phenomenon tends to move light molecules to hot regions and heavy molecules to cold regions. The term thermophoresis most often applies to aerosol mixtures whose mean free path is comparable to their characteristic length scale, but may also commonly refer to the phenomenon in all phases of matter. The term Soret effect normally applies to liquid mixtures, which behave according to different, less well-understood mechanisms than gaseous mixtures. Thermophoresis may not apply to thermomigration in solids, especially multi-phase alloys.
Thermophoretic force
The phenomenon is observed at the scale of one millimeter or less. An example that may be observed by the naked eye with good lighting is when the hot rod of an electric heater is surrounded by tobacco smoke: the smoke goes away from the immediate vicinity of the hot rod. As the small particles of air nearest the hot rod are heated, they create a fast flow away from the rod, down the temperature gradient. While all particles have similar kinetic energies at the same temperature, lighter particles acquire higher velocities than heavy ones. When they collide with the large, slower-moving particles of the tobacco smoke they push the latter away from the rod. The force that has pushed the smoke particles away from the rod is an example of a thermophoretic force, as the mean free path of air at ambient conditions is 68 nm and the characteristic length scales are between 100–1000 nm.
Thermodiffusion is labeled "positive" when particles move from a hot to cold region and "negative" when the reverse is true. Typically the heavier/larger species in a mixture exhibit positive thermophoretic behavior while the lighter/smaller species exhibit negative behavior. In addition to the sizes of the various types of particles and the steepness of the temperature gradient, the heat conductivity and heat absorption of the particles play a role. Recently, Braun and coworkers have suggested that the charge and entropy of the hydration shell of molecules play a major role for the thermophoresis of biomolecules in aqueous solutions.
The quantitative description is given by
\[ \frac{\partial c}{\partial t} = \nabla \cdot \bigl( D \nabla c + D_T\, c(1-c) \nabla T \bigr), \]
where $c$ is the particle concentration, $D$ the diffusion coefficient, and $D_T$ the thermodiffusion coefficient. The quotient of both coefficients,
\[ S_T = \frac{D_T}{D}, \]
is called the Soret coefficient.
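At steady state the flux in parentheses vanishes, giving $\nabla c = -S_T\, c(1-c)\, \nabla T$. A minimal numerical sketch of the resulting one-dimensional profile follows; all parameter values are illustrative assumptions, not measured data:

```python
# Sketch: steady-state concentration profile implied by the Soret equation.
# Setting the flux D*grad(c) + D_T*c*(1-c)*grad(T) to zero gives
# dc/dx = -S_T * c * (1 - c) * dT/dx, integrated here with a crude
# forward-Euler step. Parameter values are illustrative only.
import numpy as np

S_T = 0.05                               # Soret coefficient, 1/K (assumed)
x = np.linspace(0.0, 1.0, 1001)          # position, arbitrary units
T = 300.0 + 20.0 * x                     # imposed linear temperature field, K
dTdx = np.gradient(T, x)

c = np.empty_like(x)
c[0] = 0.5                               # boundary concentration (assumed)
for i in range(len(x) - 1):
    dcdx = -S_T * c[i] * (1.0 - c[i]) * dTdx[i]
    c[i + 1] = c[i] + dcdx * (x[i + 1] - x[i])

print(f"c(hot end) = {c[-1]:.4f}")  # below 0.5: positive thermodiffusion
```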
The thermophoresis factor has been calculated from molecular interaction potentials derived from known molecular models.
Applications
The thermophoretic force has a number of practical applications. The basis for applications is that, because different particle types move differently under the force of the temperature gradient, the particle types can be separated by that force after they have been mixed together, or prevented from mixing if they are already separated.
Impurity ions may move from the cold side of a semiconductor wafer towards the hot side, since the higher temperature makes the transition structure required for atomic jumps more achievable. The diffusive flux may occur in either direction (either up or down the temperature gradient), dependent on the materials involved. Thermophoretic force has been used in commercial precipitators for applications similar to electrostatic precipitators. It is exploited in the manufacturing of optical fiber in vacuum deposition processes. It can be important as a transport mechanism in fouling. Thermophoresis has also been shown to have potential in facilitating drug discovery by allowing the detection of aptamer binding by comparison of the bound versus unbound motion of the target molecule. This approach has been termed microscale thermophoresis. Furthermore, thermophoresis has been demonstrated as a versatile technique for manipulating single biological macromolecules, such as genomic-length DNA, and HIV virus in micro- and nanochannels by means of light-induced local heating. Thermophoresis is one of the methods used to separate different polymer particles in field flow fractionation.
History
Thermophoresis in gas mixtures was first observed and reported by John Tyndall in 1870 and further understood by John Strutt (Baron Rayleigh) in 1882. Thermophoresis in liquid mixtures was first observed and reported by Carl Ludwig in 1856 and further understood by Charles Soret in 1879.
James Clerk Maxwell wrote in 1873 concerning mixtures of different types of molecules (and this could include small particulates larger than molecules):
"This process of diffusion... goes on in gases and liquids and even in some solids.... The dynamical theory also tells us what will happen if molecules of different masses are allowed to knock about together. The greater masses will go slower than the smaller ones, so that, on an average, every molecule, great or small, will have the same energy of motion. The proof of this dynamical theorem, in which I claim the priority, has recently been greatly developed and improved by Dr. Ludwig Boltzmann."
It has been analyzed theoretically by Sydney Chapman.
Thermophoresis at solids interfaces was numerically discovered by Schoen et al. in 2006 and was experimentally confirmed by Barreiro et al.
Negative thermophoresis in fluids was first noticed in 1967 by Dwyer in a theoretical solution, and the name was coined by Sone. Negative thermophoresis at solids interfaces was first observed by Leng et al. in 2016.
See also
Deposition (aerosol physics)
Dufour effect
Maxwell–Stefan diffusion
Microscale thermophoresis
References
External links
A short introduction to thermophoresis, including helpful animated graphics, is at aerosols.wustl.edu
Ternary mixtures
HCl
Alkali bromides
Non-equilibrium thermodynamics
Aerosols | Thermophoresis | Chemistry,Mathematics | 1,222 |
55,607 | https://en.wikipedia.org/wiki/Discriminant | In mathematics, the discriminant of a polynomial is a quantity that depends on the coefficients and allows deducing some properties of the roots without computing them. More precisely, it is a polynomial function of the coefficients of the original polynomial. The discriminant is widely used in polynomial factoring, number theory, and algebraic geometry.
The discriminant of the quadratic polynomial $ax^2 + bx + c$ is
\[ b^2 - 4ac, \]
the quantity which appears under the square root in the quadratic formula. This discriminant is zero if and only if the polynomial has a double root. In the case of real coefficients, it is positive if the polynomial has two distinct real roots, and negative if it has two distinct complex conjugate roots. Similarly, the discriminant of a cubic polynomial is zero if and only if the polynomial has a multiple root. In the case of a cubic with real coefficients, the discriminant is positive if the polynomial has three distinct real roots, and negative if it has one real root and two distinct complex conjugate roots.
More generally, the discriminant of a univariate polynomial of positive degree is zero if and only if the polynomial has a multiple root. For real coefficients and no multiple roots, the discriminant is positive if the number of non-real roots is a multiple of 4 (including none), and negative otherwise.
Several generalizations are also called discriminant: the discriminant of an algebraic number field; the discriminant of a quadratic form; and more generally, the discriminant of a form, of a homogeneous polynomial, or of a projective hypersurface (these three concepts are essentially equivalent).
Origin
The term "discriminant" was coined in 1851 by the British mathematician James Joseph Sylvester.
Definition
Let
\[ A(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0 \]
be a polynomial of degree $n$ (this means $a_n \neq 0$), such that the coefficients $a_0, \ldots, a_n$ belong to a field, or, more generally, to a commutative ring. The resultant of $A$ and its derivative,
\[ A'(x) = n a_n x^{n-1} + (n-1) a_{n-1} x^{n-2} + \cdots + a_1, \]
is a polynomial in $a_0, \ldots, a_n$ with integer coefficients, which is the determinant of the Sylvester matrix of $A$ and $A'$. The nonzero entries of the first column of the Sylvester matrix are $a_n$ and $n a_n$, and the resultant is thus a multiple of $a_n$. Hence the discriminant—up to its sign—is defined as the quotient of the resultant of $A$ and $A'$ by $a_n$:
\[ \operatorname{Disc}_x(A) = \frac{(-1)^{n(n-1)/2}}{a_n} \operatorname{Res}_x(A, A'). \]
Historically, this sign has been chosen such that, over the reals, the discriminant will be positive when all the roots of the polynomial are real. The division by $a_n$ may not be well defined if the ring of the coefficients contains zero divisors. Such a problem may be avoided by replacing $a_n$ by 1 in the first column of the Sylvester matrix—before computing the determinant. In any case, the discriminant is a polynomial in $a_0, \ldots, a_n$ with integer coefficients.
Expression in terms of the roots
When the above polynomial is defined over a field, it has $n$ roots, $r_1, r_2, \ldots, r_n$, not necessarily all distinct, in any algebraically closed extension of the field. (If the coefficients are real numbers, the roots may be taken in the field of complex numbers, where the fundamental theorem of algebra applies.)
In terms of the roots, the discriminant is equal to
\[ \operatorname{Disc}_x(A) = a_n^{2n-2} \prod_{i<j} (r_i - r_j)^2. \]
It is thus the square of the Vandermonde polynomial times $a_n^{2n-2}$.
This expression for the discriminant is often taken as a definition. It makes clear that if the polynomial has a multiple root, then its discriminant is zero, and that, in the case of real coefficients, if all the roots are real and simple, then the discriminant is positive. Unlike the previous definition, this expression is not obviously a polynomial in the coefficients, but this follows either from the fundamental theorem of Galois theory, or from the fundamental theorem of symmetric polynomials and Vieta's formulas by noting that this expression is a symmetric polynomial in the roots of $A$.
Low degrees
The discriminant of a linear polynomial (degree 1) is rarely considered. If needed, it is commonly defined to be equal to 1 (using the usual conventions for the empty product and considering that one of the two blocks of the Sylvester matrix is empty). There is no common convention for the discriminant of a constant polynomial (i.e., polynomial of degree 0).
For small degrees, the discriminant is rather simple (see below), but for higher degrees, it may become unwieldy. For example, the discriminant of a general quartic has 16 terms, that of a quintic has 59 terms, and that of a sextic has 246 terms.
This sequence of term counts is recorded in the OEIS.
Degree 2
The quadratic polynomial $ax^2 + bx + c$ has discriminant
\[ b^2 - 4ac. \]
The square root of the discriminant appears in the quadratic formula for the roots of the quadratic polynomial:
\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}, \]
where the discriminant is zero if and only if the two roots are equal. If $a$, $b$, $c$ are real numbers, the polynomial has two distinct real roots if the discriminant is positive, and two complex conjugate roots if it is negative.
The discriminant is the product of $a^2$ and the square of the difference of the roots.
If $a$, $b$, $c$ are rational numbers, then the discriminant is the square of a rational number if and only if the two roots are rational numbers.
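A worked instance of the last two statements (the specific polynomials are illustrative):
\[ x^2 - 5x + 6 = (x-2)(x-3): \qquad \Delta = (-5)^2 - 4 \cdot 1 \cdot 6 = 1 = 1^2, \]
a positive perfect square, matching the two distinct rational roots. By contrast, $x^2 - x - 1$ has $\Delta = 5$, positive but not the square of a rational number, and its roots $(1 \pm \sqrt{5})/2$ are indeed real but irrational.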
Degree 3
The cubic polynomial $ax^3 + bx^2 + cx + d$ has discriminant
\[ 18abcd - 4b^3d + b^2c^2 - 4ac^3 - 27a^2d^2. \]
In the special case of a depressed cubic polynomial $x^3 + px + q$, the discriminant simplifies to
\[ -4p^3 - 27q^2. \]
The discriminant is zero if and only if at least two roots are equal. If the coefficients are real numbers, and the discriminant is not zero, the discriminant is positive if the roots are three distinct real numbers, and negative if there is one real root and two complex conjugate roots.
The square root of a quantity strongly related to the discriminant appears in the formulas for the roots of a cubic polynomial. Specifically, this quantity can be $-27$ times the discriminant, or its product with the square of a rational number; for example, the square of $\tfrac{1}{54}$ in the case of the Cardano formula, since the quantity under the square root there is $\tfrac{q^2}{4} + \tfrac{p^3}{27} = -\tfrac{1}{108}\,\Delta = -27\,\Delta \cdot \bigl(\tfrac{1}{54}\bigr)^2$.
If the polynomial is irreducible and its coefficients are rational numbers (or belong to a number field), then the discriminant is a square of a rational number (or a number from the number field) if and only if the Galois group of the cubic equation is the cyclic group of order three.
Degree 4
The quartic polynomial $ax^4 + bx^3 + cx^2 + dx + e$
has discriminant
\[
\begin{aligned}
256 a^3 e^3 &- 192 a^2 b d e^2 - 128 a^2 c^2 e^2 + 144 a^2 c d^2 e - 27 a^2 d^4 \\
&+ 144 a b^2 c e^2 - 6 a b^2 d^2 e - 80 a b c^2 d e + 18 a b c d^3 + 16 a c^4 e \\
&- 4 a c^3 d^2 - 27 b^4 e^2 + 18 b^3 c d e - 4 b^3 d^3 - 4 b^2 c^3 e + b^2 c^2 d^2.
\end{aligned}
\]
The depressed quartic polynomial $x^4 + px^2 + qx + r$
has discriminant
\[ 256 r^3 - 128 p^2 r^2 + 144 p q^2 r - 27 q^4 + 16 p^4 r - 4 p^3 q^2. \]
The discriminant is zero if and only if at least two roots are equal. If the coefficients are real numbers and the discriminant is negative, then there are two real roots and two complex conjugate roots. Conversely, if the discriminant is positive, then the roots are either all real or all non-real.
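These low-degree formulas can be checked mechanically. A small sketch using SymPy's `discriminant` function (assuming SymPy is available; it implements the resultant-based definition given above):

```python
# Checking the low-degree discriminant formulas above with SymPy.
from sympy import symbols, discriminant, expand

x, a, b, c, p, q, r = symbols('x a b c p q r')

# Degree 2: Disc(ax^2 + bx + c) = b^2 - 4ac
assert expand(discriminant(a*x**2 + b*x + c, x) - (b**2 - 4*a*c)) == 0

# Depressed cubic: Disc(x^3 + px + q) = -4p^3 - 27q^2
assert expand(discriminant(x**3 + p*x + q, x) - (-4*p**3 - 27*q**2)) == 0

# Depressed quartic: Disc(x^4 + px^2 + qx + r)
expected = (256*r**3 - 128*p**2*r**2 + 144*p*q**2*r
            - 27*q**4 + 16*p**4*r - 4*p**3*q**2)
assert expand(discriminant(x**4 + p*x**2 + q*x + r, x) - expected) == 0

print("all discriminant identities verified")
```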
Properties
Zero discriminant
The discriminant of a polynomial over a field is zero if and only if the polynomial has a multiple root in some field extension.
The discriminant of a polynomial over an integral domain is zero if and only if the polynomial and its derivative have a non-constant common divisor.
In characteristic 0, this is equivalent to saying that the polynomial is not square-free (i.e., it is divisible by the square of a non-constant polynomial).
In nonzero characteristic $p$, the discriminant is zero if and only if the polynomial is not square-free or it has an irreducible factor which is not separable (i.e., the irreducible factor is a polynomial in $x^p$).
Invariance under change of the variable
The discriminant of a polynomial is, up to a scaling, invariant under any projective transformation of the variable. As a projective transformation may be decomposed into a product of translations, homotheties and inversions, this results in the following formulas for simpler transformations, where $A$ denotes a polynomial of degree $n$, with $a_n$ as leading coefficient.
Invariance by translation:
\[ \operatorname{Disc}_x\bigl(A(x + \alpha)\bigr) = \operatorname{Disc}_x\bigl(A(x)\bigr). \]
This results from the expression of the discriminant in terms of the roots.
Invariance by homothety:
\[ \operatorname{Disc}_x\bigl(A(\alpha x)\bigr) = \alpha^{n(n-1)} \operatorname{Disc}_x\bigl(A(x)\bigr). \]
This results from the expression in terms of the roots, or of the quasi-homogeneity of the discriminant.
Invariance by inversion:
\[ \operatorname{Disc}_x\bigl(A_r(x)\bigr) = \operatorname{Disc}_x\bigl(A(x)\bigr) \]
when $a_0 \neq 0$. Here, $A_r$ denotes the reciprocal polynomial of $A$; that is, if $A(x) = a_n x^n + \cdots + a_0$ and $a_0 \neq 0$, then
\[ A_r(x) = x^n A(1/x) = a_0 x^n + \cdots + a_n. \]
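A quick symbolic spot-check of the translation and homothety formulas, here on a generic cubic with SymPy (an illustrative verification, not part of the original exposition):

```python
# Spot-checking the invariance formulas with SymPy: translation leaves the
# discriminant unchanged; scaling x by alpha multiplies it by alpha^(n(n-1)).
from sympy import symbols, discriminant, expand

x, alpha, a, b, c, d = symbols('x alpha a b c d')
A = a*x**3 + b*x**2 + c*x + d        # generic cubic, n = 3
D = discriminant(A, x)

# Invariance by translation
assert expand(discriminant(A.subs(x, x + alpha), x) - D) == 0

# Invariance by homothety: factor alpha^(3*2) = alpha^6
assert expand(discriminant(A.subs(x, alpha*x), x) - alpha**6 * D) == 0
print("invariance checks passed")
```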
Invariance under ring homomorphisms
Let $\varphi\colon R \to S$ be a homomorphism of commutative rings. Given a polynomial
\[ A = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0 \]
in $R[x]$, the homomorphism $\varphi$ acts on $A$ for producing the polynomial
\[ A^\varphi = \varphi(a_n) x^n + \varphi(a_{n-1}) x^{n-1} + \cdots + \varphi(a_0) \]
in $S[x]$.
The discriminant is invariant under $\varphi$ in the following sense. If $\varphi(a_n) \neq 0$, then
\[ \operatorname{Disc}_x(A^\varphi) = \varphi\bigl(\operatorname{Disc}_x(A)\bigr). \]
As the discriminant is defined in terms of a determinant, this property results immediately from the similar property of determinants.
If $\varphi(a_n) = 0$, then $\operatorname{Disc}_x(A^\varphi)$ may be zero or not. One has, when $\varphi(a_{n-1}) \neq 0$:
\[ \varphi\bigl(\operatorname{Disc}_x(A)\bigr) = \varphi(a_{n-1})^2 \operatorname{Disc}_x(A^\varphi). \]
When one is only interested in knowing whether a discriminant is zero (as is generally the case in algebraic geometry), these properties may be summarised as:
\[ \varphi\bigl(\operatorname{Disc}_x(A)\bigr) = 0 \]
if and only if either $\operatorname{Disc}_x(A^\varphi) = 0$ or $\deg A^\varphi \le n - 2$.
This is often interpreted as saying that $\varphi\bigl(\operatorname{Disc}_x(A)\bigr) = 0$ if and only if $A^\varphi$ has a multiple root (possibly at infinity).
Product of polynomials
If $R = PQ$ is a product of polynomials in $x$, then
\[ \operatorname{Disc}_x(R) = \operatorname{Disc}_x(P)\, \operatorname{Res}_x(P, Q)^2\, \operatorname{Disc}_x(Q), \]
where $\operatorname{Res}_x$ denotes the resultant with respect to the variable $x$, and $p$ and $q$ are the respective degrees of $P$ and $Q$.
This property follows immediately by substituting the expression for the resultant, and the discriminant, in terms of the roots of the respective polynomials.
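This identity can likewise be spot-checked symbolically; the polynomials $P$ and $Q$ below are arbitrary illustrative choices:

```python
# Spot-checking Disc(PQ) = Disc(P) * Res(P, Q)^2 * Disc(Q) with SymPy.
from sympy import symbols, discriminant, resultant, expand

x, s, t, u = symbols('x s t u')
P = x**2 + s              # illustrative choices of P and Q
Q = x**2 + t*x + u

lhs = discriminant(expand(P*Q), x)
rhs = discriminant(P, x) * resultant(P, Q, x)**2 * discriminant(Q, x)
assert expand(lhs - rhs) == 0
print("product formula verified")
```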
Homogeneity
The discriminant is a homogeneous polynomial in the coefficients; it is also a homogeneous polynomial in the roots and thus quasi-homogeneous in the coefficients.
The discriminant of a polynomial of degree $n$ is homogeneous of degree $2n-2$ in the coefficients. This can be seen in two ways. In terms of the roots-and-leading-term formula, multiplying all the coefficients by $\lambda$ does not change the roots, but multiplies the leading term by $\lambda$. In terms of its expression as a determinant of a $(2n-1) \times (2n-1)$ matrix (the Sylvester matrix) divided by $a_n$, the determinant is homogeneous of degree $2n-1$ in the entries, and dividing by $a_n$ makes the degree $2n-2$.
The discriminant of a polynomial of degree $n$ is homogeneous of degree $n(n-1)$ in the roots. This follows from the expression of the discriminant in terms of the roots, which is the product of a constant and $\binom{n}{2}$ squared differences of roots.
The discriminant of a polynomial of degree $n$ is quasi-homogeneous of degree $n(n-1)$ in the coefficients, if, for every $i$, the coefficient $a_i$ of $x^i$ is given the weight $n-i$. It is also quasi-homogeneous of the same degree, if, for every $i$, the coefficient $a_i$ is given the weight $i$. This is a consequence of the general fact that every polynomial which is homogeneous and symmetric in the roots may be expressed as a quasi-homogeneous polynomial in the elementary symmetric functions of the roots.
Consider the polynomial
\[ P = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0. \]
It follows from what precedes that the exponents in every monomial $a_0^{i_0} a_1^{i_1} \cdots a_n^{i_n}$ appearing in the discriminant satisfy the two equations
\[ i_0 + i_1 + \cdots + i_n = 2n - 2 \]
and
\[ i_1 + 2 i_2 + \cdots + n i_n = n(n-1), \]
and also the equation
\[ n i_0 + (n-1) i_1 + \cdots + i_{n-1} = n(n-1), \]
which is obtained by subtracting the second equation from the first one multiplied by $n$.
This restricts the possible terms in the discriminant. For the general quadratic polynomial, the discriminant $b^2 - 4ac$ is a homogeneous polynomial of degree 2 which has only two terms, while the general homogeneous polynomial of degree two in three variables has 6 terms. The discriminant of the general cubic polynomial is a homogeneous polynomial of degree 4 in four variables; it has five terms, which is the maximum allowed by the above rules, while the general homogeneous polynomial of degree 4 in 4 variables has 35 terms.
For higher degrees, there may be monomials which satisfy the above rules and do not appear in the discriminant. The first example is for the quartic polynomial $ax^4 + bx^3 + cx^2 + dx + e$, in which case the monomial $bc^4d$ satisfies the rules without appearing in the discriminant.
Real roots
In this section, all polynomials have real coefficients.
It has been seen above that the sign of the discriminant provides useful information on the nature of the roots for polynomials of degree 2 and 3. For higher degrees, the information provided by the discriminant is less complete, but still useful. More precisely, for a polynomial of degree $n$, one has:
The polynomial has a multiple root if and only if its discriminant is zero.
If the discriminant is positive, the number of non-real roots is a multiple of 4. That is, there is a nonnegative integer $k \le n/4$ such that there are $2k$ pairs of complex conjugate roots and $n - 4k$ real roots.
If the discriminant is negative, the number of non-real roots is not a multiple of 4. That is, there is a nonnegative integer $k \le (n-2)/4$ such that there are $2k+1$ pairs of complex conjugate roots and $n - 4k - 2$ real roots.
Homogeneous bivariate polynomial
Let
\[ A(x, y) = a_0 x^n + a_1 x^{n-1} y + \cdots + a_n y^n \]
be a homogeneous polynomial of degree $n$ in two indeterminates.
Supposing, for the moment, that $a_0$ and $a_n$ are both nonzero, one has
\[ \operatorname{Disc}_x\bigl(A(x, 1)\bigr) = \operatorname{Disc}_y\bigl(A(1, y)\bigr). \]
Denoting this quantity by
\[ \operatorname{Disc}^h(A), \]
one has thus
\[ \operatorname{Disc}^h(A) = \operatorname{Disc}_x\bigl(A(x, 1)\bigr) \]
and
\[ \operatorname{Disc}^h(A) = \operatorname{Disc}_y\bigl(A(1, y)\bigr). \]
Because of these properties, the quantity $\operatorname{Disc}^h(A)$ is called the discriminant or the homogeneous discriminant of $A$.
If $a_0$ and $a_n$ are permitted to be zero, the polynomials $A(x, 1)$ and $A(1, y)$ may have a degree smaller than $n$. In this case, above formulas and definition remain valid, if the discriminants are computed as if all polynomials would have the degree $n$. This means that the discriminants must be computed with $a_0$ and $a_n$ indeterminate, the substitution for them of their actual values being done after this computation. Equivalently, the formulas of the preceding section on invariance under ring homomorphisms must be used.
Use in algebraic geometry
The typical use of discriminants in algebraic geometry is for studying plane algebraic curves, and more generally algebraic hypersurfaces. Let $V$ be such a curve or hypersurface; $V$ is defined as the zero set of a multivariate polynomial. This polynomial may be considered as a univariate polynomial in one of the indeterminates, with polynomials in the other indeterminates as coefficients. The discriminant with respect to the selected indeterminate defines a hypersurface $W$ in the space of the other indeterminates. The points of $W$ are exactly the projection of the points of $V$ (including the points at infinity), which either are singular or have a tangent hyperplane that is parallel to the axis of the selected indeterminate.
For example, let $P(x, y)$ be a bivariate polynomial in $x$ and $y$ with real coefficients, so that $P = 0$ is the implicit equation of a real plane algebraic curve. Viewing $P$ as a univariate polynomial in $y$ with coefficients depending on $x$, then the discriminant is a polynomial in $x$ whose roots are the $x$-coordinates of the singular points, of the points with a tangent parallel to the $y$-axis and of some of the asymptotes parallel to the $y$-axis. In other words, the computation of the roots of the $y$-discriminant and the $x$-discriminant allows one to compute all of the remarkable points of the curve, except the inflection points.
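As a concrete sketch of this use (the unit circle is an example chosen here), the $y$-discriminant of $x^2 + y^2 - 1$ vanishes exactly at the $x$-coordinates of the two points with a vertical tangent:

```python
# Example of the y-discriminant of a plane curve: for the unit circle
# x^2 + y^2 - 1 = 0, the discriminant with respect to y vanishes exactly
# at the x-coordinates of the points with a vertical tangent.
from sympy import symbols, discriminant, solve

x, y = symbols('x y')
circle = x**2 + y**2 - 1
disc_y = discriminant(circle, y)   # -> 4 - 4*x**2
print(disc_y, solve(disc_y, x))    # roots x = -1 and x = 1
```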
Generalizations
There are two classes of the concept of discriminant. The first class is the discriminant of an algebraic number field, which, in some cases including quadratic fields, is the discriminant of a polynomial defining the field.
Discriminants of the second class arise for problems depending on coefficients, when degenerate instances or singularities of the problem are characterized by the vanishing of a single polynomial in the coefficients. This is the case for the discriminant of a polynomial, which is zero when two roots collapse. Most of the cases, where such a generalized discriminant is defined, are instances of the following.
Let $A$ be a homogeneous polynomial in $n$ indeterminates over a field of characteristic 0, or of a prime characteristic that does not divide the degree of the polynomial. The polynomial $A$ defines a projective hypersurface, which has singular points if and only if the $n$ partial derivatives of $A$ have a nontrivial common zero. This is the case if and only if the multivariate resultant of these partial derivatives is zero, and this resultant may be considered as the discriminant of $A$. However, because of the integer coefficients resulting from the derivation, this multivariate resultant may be divisible by a power of the degree of $A$, and it is better to take, as a discriminant, the primitive part of the resultant, computed with generic coefficients. The restriction on the characteristic is needed because otherwise a common zero of the partial derivatives is not necessarily a zero of the polynomial (see Euler's identity for homogeneous polynomials).
In the case of a homogeneous bivariate polynomial of degree $d$, this general discriminant is a constant multiple of the discriminant defined in the preceding section. Several other classical types of discriminants, that are instances of the general definition, are described in the next sections.
Quadratic forms
A quadratic form is a function over a vector space, which is defined over some basis by a homogeneous polynomial of degree 2:
\[ Q(x_1, \ldots, x_n) = \sum_{i=1}^n a_{ii}\, x_i^2 + \sum_{1 \le i < j \le n} a_{ij}\, x_i x_j, \]
or, in matrix form,
\[ Q(X) = X A X^{\mathrm{T}}, \]
for the symmetric $n \times n$ matrix $A$, the $1 \times n$ row vector $X$, and the $n \times 1$ column vector $X^{\mathrm{T}}$. In characteristic different from 2, the discriminant or determinant of $Q$ is the determinant of $A$.
The Hessian determinant of $Q$ is $2^n$ times its discriminant. The multivariate resultant of the partial derivatives of $Q$ is equal to its Hessian determinant. So, the discriminant of a quadratic form is a special case of the above general definition of a discriminant.
The discriminant of a quadratic form is invariant under linear changes of variables (that is a change of basis of the vector space on which the quadratic form is defined) in the following sense: a linear change of variables is defined by a nonsingular matrix $S$, changes the matrix $A$ into $S A S^{\mathrm{T}}$, and thus multiplies the discriminant by the square of the determinant of $S$. Thus the discriminant is well defined only up to the multiplication by a square. In other words, the discriminant of a quadratic form over a field $K$ is an element of $K/(K^\times)^2$, the quotient of the multiplicative monoid of $K$ by the subgroup of the nonzero squares (that is, two elements of $K$ are in the same equivalence class if one is the product of the other by a nonzero square). It follows that over the complex numbers, a discriminant is equivalent to 0 or 1. Over the real numbers, a discriminant is equivalent to −1, 0, or 1. Over the rational numbers, a discriminant is equivalent to a unique square-free integer.
By a theorem of Jacobi, a quadratic form over a field of characteristic different from 2 can be expressed, after a linear change of variables, in diagonal form as
\[ a_1 x_1^2 + \cdots + a_n x_n^2. \]
More precisely, a quadratic form on a vector space of dimension $n$ may be expressed as a sum
\[ \sum_{i=1}^n a_i L_i^2, \]
where the $L_i$ are independent linear forms and $n$ is the number of the variables (some of the $a_i$ may be zero). Equivalently, for any symmetric matrix $A$, there is an elementary matrix $S$ such that $S A S^{\mathrm{T}}$ is a diagonal matrix.
Then the discriminant is the product of the $a_i$, which is well-defined as a class in $K/(K^\times)^2$.
Geometrically, the discriminant of a quadratic form in three variables is the equation of a quadratic projective curve. The discriminant is zero if and only if the curve is decomposed in lines (possibly over an algebraically closed extension of the field).
A quadratic form in four variables is the equation of a projective surface. The surface has a singular point if and only its discriminant is zero. In this case, either the surface may be decomposed in planes, or it has a unique singular point, and is a cone or a cylinder. Over the reals, if the discriminant is positive, then the surface either has no real point or has everywhere a negative Gaussian curvature. If the discriminant is negative, the surface has real points, and has a negative Gaussian curvature.
Conic sections
A conic section is a plane curve defined by an implicit equation of the form
\[ ax^2 + bxy + cy^2 + dx + ey + f = 0, \]
where $a$, $b$, $c$, $d$, $e$, $f$ are real numbers.
Two quadratic forms, and thus two discriminants may be associated to a conic section.
The first quadratic form is
\[ ax^2 + bxy + cy^2 + dxz + eyz + fz^2. \]
Its discriminant is the determinant
\[ \begin{vmatrix} a & \tfrac{b}{2} & \tfrac{d}{2} \\[2pt] \tfrac{b}{2} & c & \tfrac{e}{2} \\[2pt] \tfrac{d}{2} & \tfrac{e}{2} & f \end{vmatrix}. \]
It is zero if the conic section degenerates into two lines, a double line or a single point.
The second discriminant, which is the only one that is considered in many elementary textbooks, is the discriminant of the homogeneous part of degree two of the equation. It is equal to
\[ b^2 - 4ac, \]
and determines the shape of the conic section. If this discriminant is negative, the curve either has no real points, or is an ellipse or a circle, or, if degenerated, is reduced to a single point. If the discriminant is zero, the curve is a parabola, or, if degenerated, a double line or two parallel lines. If the discriminant is positive, the curve is a hyperbola, or, if degenerated, a pair of intersecting lines.
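A minimal classifier following this rule (a sketch that deliberately ignores the degenerate cases, which require the three-variable determinant above):

```python
# Classifying a conic by the sign of the degree-two discriminant b^2 - 4ac.
# Degenerate cases, which require the 3x3 determinant above, are ignored
# in this sketch.
def conic_shape(a: float, b: float, c: float) -> str:
    disc = b*b - 4*a*c
    if disc < 0:
        return "ellipse (or circle, or no real points)"
    if disc == 0:
        return "parabola"
    return "hyperbola"

print(conic_shape(1, 0, 1))   # x^2 + y^2 + ... -> ellipse/circle family
print(conic_shape(0, 1, 0))   # xy = const      -> hyperbola
print(conic_shape(1, 0, 0))   # y = x^2 family  -> parabola
```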
Real quadric surfaces
A real quadric surface in the Euclidean space of dimension three is a surface that may be defined as the zeros of a polynomial of degree two in three variables. As for the conic sections there are two discriminants that may be naturally defined. Both are useful for getting information on the nature of a quadric surface.
Let $P(x, y, z)$ be a polynomial of degree two in three variables that defines a real quadric surface. The first associated quadratic form, $Q_4$, depends on four variables, and is obtained by homogenizing $P$; that is
\[ Q_4(x, y, z, t) = t^2\, P\!\left(\tfrac{x}{t}, \tfrac{y}{t}, \tfrac{z}{t}\right). \]
Let us denote its discriminant by $\Delta_4$.
The second quadratic form, $Q_3$, depends on three variables, and consists of the terms of degree two of $P$; that is
\[ Q_3(x, y, z) = Q_4(x, y, z, 0). \]
Let us denote its discriminant by $\Delta_3$.
If $\Delta_4 > 0$ and the surface has real points, it is either a hyperbolic paraboloid or a one-sheet hyperboloid. In both cases, this is a ruled surface that has a negative Gaussian curvature at every point.
If $\Delta_4 < 0$, the surface is either an ellipsoid or a two-sheet hyperboloid or an elliptic paraboloid. In all cases, it has a positive Gaussian curvature at every point.
If $\Delta_4 = 0$, the surface has a singular point, possibly at infinity. If there is only one singular point, the surface is a cylinder or a cone. If there are several singular points the surface consists of two planes, a double plane or a single line.
The sign of $\Delta_3$, if not 0, does not provide any useful information, as changing $P$ into $-P$ does not change the surface, but changes the sign of $\Delta_3$. However, if $\Delta_3 = 0$ and $\Delta_4 \neq 0$, the surface is a paraboloid, which is elliptic or hyperbolic, depending on the sign of $\Delta_4$.
Discriminant of an algebraic number field
The discriminant of an algebraic number field measures the size of the (ring of integers of the) algebraic number field.
More specifically, it is proportional to the squared volume of the fundamental domain of the ring of integers, and it regulates which primes are ramified.
The discriminant is one of the most basic invariants of a number field, and occurs in several important analytic formulas such as the functional equation of the Dedekind zeta function of K, and the analytic class number formula for K. A theorem of Hermite states that there are only finitely many number fields of bounded discriminant, however determining this quantity is still an open problem, and the subject of current research.
Let K be an algebraic number field, and let OK be its ring of integers. Let b1, ..., bn be an integral basis of OK (i.e. a basis as a Z-module), and let {σ1, ..., σn} be the set of embeddings of K into the complex numbers (i.e. injective ring homomorphisms K → C). The discriminant of K is the square of the determinant of the n by n matrix B whose (i,j)-entry is σi(bj). Symbolically,
\[ \Delta_K = \det\!\begin{pmatrix} \sigma_1(b_1) & \sigma_1(b_2) & \cdots & \sigma_1(b_n) \\ \sigma_2(b_1) & \sigma_2(b_2) & \cdots & \sigma_2(b_n) \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_n(b_1) & \sigma_n(b_2) & \cdots & \sigma_n(b_n) \end{pmatrix}^{\!2}. \]
The discriminant of K can be referred to as the absolute discriminant of K to distinguish it from the relative discriminant of an extension K/L of number fields. The latter is an ideal in the ring of integers of L, and like the absolute discriminant it indicates which primes are ramified in K/L. It is a generalization of the absolute discriminant allowing for L to be bigger than Q; in fact, when L = Q, the relative discriminant of K/Q is the principal ideal of Z generated by the absolute discriminant of K.
Fundamental discriminants
A specific type of discriminant useful in the study of quadratic fields is the fundamental discriminant. It arises in the theory of integral binary quadratic forms, which are expressions of the form:
\[ Q(x, y) = ax^2 + bxy + cy^2, \]
where $a$, $b$, and $c$ are integers. The discriminant of $Q(x, y)$ is given by:
\[ D = b^2 - 4ac. \]
Not every integer can arise as a discriminant of an integral binary quadratic form. An integer $D$ is a fundamental discriminant if and only if it meets one of the following criteria:
Case 1: $D$ is congruent to 1 modulo 4 ($D \equiv 1 \pmod{4}$) and is square-free, meaning it is not divisible by the square of any prime number.
Case 2: $D$ is equal to four times an integer $m$ ($D = 4m$) where $m$ is congruent to 2 or 3 modulo 4 ($m \equiv 2, 3 \pmod{4}$) and is square-free.
These conditions ensure that every fundamental discriminant corresponds uniquely to a specific type of quadratic form.
The first eleven positive fundamental discriminants are:
1, 5, 8, 12, 13, 17, 21, 24, 28, 29, 33 (sequence A003658 in the OEIS).
The first eleven negative fundamental discriminants are:
−3, −4, −7, −8, −11, −15, −19, −20, −23, −24, −31 (sequence A003657 in the OEIS).
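The two defining cases can be turned directly into a test; the short sketch below (using SymPy's `factorint` for square-freeness) reproduces the two lists above:

```python
# Checking the two defining cases for fundamental discriminants.
from sympy import factorint

def is_squarefree(n: int) -> bool:
    n = abs(n)
    return n != 0 and all(e == 1 for e in factorint(n).values())

def is_fundamental_discriminant(D: int) -> bool:
    if D % 4 == 1:      # Case 1: D = 1 (mod 4), square-free
        return is_squarefree(D)
    if D % 4 == 0:      # Case 2: D = 4m, m = 2 or 3 (mod 4), square-free
        m = D // 4
        return m % 4 in (2, 3) and is_squarefree(m)
    return False

print([D for D in range(1, 34) if is_fundamental_discriminant(D)])
# [1, 5, 8, 12, 13, 17, 21, 24, 28, 29, 33]
print([D for D in range(-1, -32, -1) if is_fundamental_discriminant(D)])
# [-3, -4, -7, -8, -11, -15, -19, -20, -23, -24, -31]
```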
Quadratic number fields
A quadratic field is a field extension of the rational numbers that has degree 2. The discriminant of a quadratic field plays a role analogous to the discriminant of a quadratic form.
There exists a fundamental connection: an integer $D$ is a fundamental discriminant if and only if:
$D = 1$, or
$D$ is the discriminant of a quadratic field.
For each fundamental discriminant $D \neq 1$, there exists a unique (up to isomorphism) quadratic field with $D$ as its discriminant. This connects the theory of quadratic forms and the study of quadratic fields.
Prime factorization
Fundamental discriminants can also be characterized by their prime factorization. Consider the set $S$ consisting of $-8$, $-4$ and $8$, the prime numbers congruent to 1 modulo 4, and the additive inverses of the prime numbers congruent to 3 modulo 4:
\[ S = \{-8, -4, 8\} \cup \{p : p \equiv 1 \pmod{4}\} \cup \{-p : p \equiv 3 \pmod{4}\}. \]
An integer is a fundamental discriminant if and only if it is a product of elements of $S$ that are pairwise coprime.
References
External links
Wolfram Mathworld: Polynomial Discriminant
Planetmath: Discriminant
Polynomials
Conic sections
Quadratic forms
Determinants
Algebraic number theory | Discriminant | Mathematics | 5,603 |
37,591,765 | https://en.wikipedia.org/wiki/Fuel%20factor | The fuel factor, fo, is the ratio of created CO2 to depleted oxygen in a combustion reaction, used to check the accuracy of an emission measurement system. It can be calculated using the equation
fo = (20.9 - %O2)/%CO2,
where %O2 is the percent O2 by volume on a dry basis, %CO2 is the percent CO2 by volume on a dry basis, and 20.9 is the percent O2 by volume in ambient air. The fuel factor can be corrected for the amount of CO by adding the percent CO, on a dry basis, to the CO2, and subtracting half of the percent CO from the O2.
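A small sketch of this computation (the input readings are illustrative values, not data from any standard):

```python
# Fuel factor from dry-basis flue gas readings, with the CO correction
# described above. Input values are illustrative.
def fuel_factor(o2_pct: float, co2_pct: float, co_pct: float = 0.0) -> float:
    """fo = (20.9 - %O2) / %CO2, correcting O2 down by half the %CO and
    CO2 up by the full %CO when CO is present."""
    o2_corrected = o2_pct - 0.5 * co_pct
    co2_corrected = co2_pct + co_pct
    return (20.9 - o2_corrected) / co2_corrected

print(round(fuel_factor(6.0, 12.0), 3))        # 1.242 with no CO
print(round(fuel_factor(6.0, 12.0, 0.5), 3))   # 1.212, lower with CO present
```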
See also
Portable emissions measurement system
Air–fuel ratio
References
Fuels | Fuel factor | Chemistry | 155 |
71,106,368 | https://en.wikipedia.org/wiki/Crepidotus%20cinnabarinus | Crepidotus cinnabarinus is a species of saprophytic fungus in the family Crepidotaceae with a stipeless sessile cap distributed in North America and Europe. It is highly conspicuous and often found on fallen branches and rotting trunks of broad leafed trees. In England it appears from late summer to autumn.
Description
Cap: Bright orangish red, the cap (pileus) of C. cinnabarinus is generally about 2 to 18 mm in diameter and is convex, shell- or fan-shaped with a finely down-felted surface when fresh, especially at its base, becoming minutely pitted or more or less bald and dry. The margin is irregular to fibrous and initially inrolled.
Stipe (stem): Absent, but a pale, lateral pseudostem is sometimes present.
Gills: Coloured pale brown with a red-orange edge, are crowded and adnexed.
Spores: The spore print is buff. Spore shape is broadly elliptical to subspherical with a finely spiny to warty surface, measuring 8–8.5 × 5.5–6.5 μm in size.
Absent features: no annulus (ring).
References
Crepidotaceae
Fungi described in 1895
Fungi of Europe
Fungi of North America
Fungus species | Crepidotus cinnabarinus | Biology | 272 |
55,877,721 | https://en.wikipedia.org/wiki/NGC%201994 | NGC 1994 (also known as ESO 56-SC136) is an open cluster in the Dorado constellation which is located in the Large Magellanic Cloud. It was discovered by John Herschel on 16 December 1835. It has an apparent magnitude of 9.8 and its size is 0.60 arc minutes.
References
Open clusters
1994
56-SC136
Dorado
Large Magellanic Cloud
Astronomical objects discovered in 1835
Discoveries by John Herschel | NGC 1994 | Astronomy | 93 |
1,427,906 | https://en.wikipedia.org/wiki/Exploring%20the%20Earth%20and%20the%20Cosmos | Exploring the Earth and the Cosmos is a book written by Isaac Asimov in 1982.
References
Books by Isaac Asimov
1982 non-fiction books
Astronomy books | Exploring the Earth and the Cosmos | Astronomy | 32 |